\section{Introduction}
We are currently witnessing the explosive growth of technologies that focus on processing the large amounts of data available in the biomedical sciences. In parallel, machine learning has been gaining traction as a means of analyzing and making sense of these biomedical data. However, effectively using machine learning tools often requires deep expertise in both machine learning techniques and the application domain. For example, to effectively apply machine learning to a genome-wide association study (GWAS)~\cite{bird2007perceptions, cordell2009detecting}, the practitioner must understand the complex trait being studied (e.g., a particular disease such as prostate cancer), the research surrounding the underlying genetics of the trait, and the numerous steps in the machine learning process that are necessary for a successful analysis (e.g., data preprocessing, feature engineering, model selection, etc.). If we can provide off-the-shelf tools that reduce the barrier to entry for using machine learning by non-experts, then such tools could prove beneficial to researchers working in the biomedical sciences. Mapping statistical inferences and models from genetic data analysis to underlying biological processes is an important goal of the field of computational genomics~\cite{ma2002functional}.
In recent years, evolutionary computation (EC) has proven successful in automating a variety of tasks, and has even outperformed several hand-designed solutions in human vs. machine competitions~\cite{hornby2011computer,fredericks2013exploring,forrest2009genetic,spector2008genetic}. As such, we believe there is considerable promise in using EC to automate the analysis of biomedical data. Last year, we introduced the Tree-Based Pipeline Optimization Tool (TPOT)~\cite{Olson2016EvoBio,Olson2016GPTP}, which seeks to automate the process of designing machine learning pipelines using genetic programming (GP)~\cite{banzhaf1998genetic}. We found that TPOT often outperforms a standard machine learning analysis while requiring no {\em a priori} knowledge about the problem it is solving~\cite{OlsonGECCO2016,Olson2016JMLR}. Here, we report on our attempts to specialize TPOT for human genetics research.
Human genetics research poses a unique data analysis challenge due to the effects of non-additive gene-gene interactions (i.e., epistasis) and the large number of genes that must be simultaneously considered as possible predictors of a complex trait~\cite{moore2010bioinformatics}. As a result, simple linear models of complex traits often predict little about the trait, and it is typically impossible to perform an exhaustive combinatorial search of every possible genetic model including two or more genes. For this reason, many researchers leverage {\em a priori} expert knowledge to intelligently reduce and guide the search space when performing a combinatorial search of possible genetic models~\cite{moore2006exploiting}.
In this paper we introduce TPOT-MDR, which uses GP to automate the study of complex diseases in GWAS. TPOT-MDR automatically designs sequences of common operations from genetic analysis studies, such as data filtering and Multifactor Dimensionality Reduction (MDR)~\cite{ritchie2001multifactor, hahn2003multifactor, moore2002new, cho2004multifactor, moore2006flexible, moore2015epistasis}, with the goal of producing a model that best predicts the outcome of a complex trait based solely on the patients' genetics. Furthermore, we enable TPOT-MDR to leverage {\em a priori} expert knowledge through an Expert Knowledge Filter (EKF), which performs feature selection on the GWAS datasets using information from the expert knowledge source.
To demonstrate TPOT-MDR's capabilities, we compare TPOT-MDR to state-of-the-art machine learning methods on a combination of simulated and real-world GWAS datasets. These datasets are all supervised classification datasets with a focus on human disease as the outcome. We find that TPOT-MDR performs significantly better than the state-of-the-art machine learning methods on the GWAS datasets, especially when it is provided the EKF as an optional feature selector. We further analyze the resulting TPOT-MDR model on a real-world GWAS dataset to highlight the interpretability of TPOT-MDR models, which is a feature that is typically lacking in machine learning models. Finally, we release TPOT-MDR as an open source Python software package to be freely used in human genetics research.
\section{Related Work}
For automated machine learning in general, approaches have mainly focused on optimizing subsets of a machine learning pipeline~\cite{hutter2015beyond}, which is otherwise known as hyperparameter optimization. One readily accessible approach is grid search, which applies brute force search within a search space of all possible model parameters to find the best model configuration. Relatively recently, randomized search~\cite{bergstra2012random} and Bayesian optimization~\cite{snoek2012practical} techniques have entered the fray and have offered more intelligently derived solutions---by adaptively choosing new configurations to train---to the hyperparameter optimization task. Much more recently, a novel bandit-based approach to hyperparameter optimization has outperformed state-of-the-art Bayesian optimization algorithms by 5x to more than an order of magnitude for various deep learning and kernel-based learning problems~\cite{li2016hyperband}. Although TPOT-MDR is an automated machine learning approach, it is specialized for bioinformatics problems rather than general machine learning.
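As a minimal illustration of the brute-force grid search described above, the sketch below enumerates every parameter combination; the scoring function here is a toy stand-in for actually training and evaluating a model.

```python
import itertools

def grid_search(param_grid, score_fn):
    """Exhaustively score every parameter combination; return the best."""
    keys = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective with its peak at C=1.0, depth=3 (stand-in for CV accuracy).
score = lambda p: -abs(p["C"] - 1.0) - abs(p["depth"] - 3)
best, _ = grid_search({"C": [0.1, 1.0, 10.0], "depth": [1, 3, 5]}, score)
# best == {"C": 1.0, "depth": 3}
```

Randomized search replaces the exhaustive `itertools.product` loop with a fixed budget of random draws from the same grid, which often finds comparable configurations at a fraction of the cost.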
Narrowing the focus to automated machine learning in bioinformatics, the literature is far more sparse. One such example is~\cite{franken2012inferring}, in which the authors analyze metabolomics data using a modified Bayesian optimization algorithm integrated with the classification algorithms provided in WEKA, a suite of machine learning software written in Java. The Bayesian optimization provided feature subset selection, which filtered irrelevant and redundant features from the datasets to achieve dimensionality reduction. These techniques led to an improvement in classification accuracy.
Genetic programming and evolutionary computation methods have also been successfully applied to bioinformatics studies, such as~\cite{Moore2013,urbanowicz2013role}, but they do not focus on designing and tuning a series of standard data analysis operations for a specific dataset. As such, although they are related techniques, they do not fall into the automated machine learning domain.
\section{Methods}
In this section, we briefly review TPOT~\cite{Olson2016EvoBio,OlsonGECCO2016,Olson2016JMLR,Olson2016GPTP} and describe the new pipeline operators that were implemented for TPOT-MDR. Afterwards, we describe the datasets used to evaluate TPOT-MDR and compare it to the state-of-the-art machine learning methods.
\subsection{TPOT Review}
\label{sec:tpot-review}
TPOT uses an evolutionary algorithm to automatically design and optimize a series of standard machine learning operations (i.e., a pipeline) that maximize the final classifier's accuracy on a supervised classification dataset. It achieves this task using a combination of genetic programming (GP)~\cite{banzhaf1998genetic} and Pareto optimization (specifically, NSGA2~\cite{Deb2002}), which optimizes over the trade-off between the number of operations in the pipeline and the accuracy achieved by the pipeline.
TPOT implements four main types of pipeline operators: (1) preprocessors, (2) decomposition, (3) feature selection, and (4) models. All the pipeline operators make use of existing implementations in the Python scikit-learn library~\cite{pedregosa2011scikit}. Preprocessors consist of two scaling operators to scale the features and an operator that generates new features via polynomial combinations of numerical features. Decomposition consists of a variant of principal component analysis (\texttt{RandomizedPCA}). Feature selection implements various strategies that serve to filter down the features by some criteria, such as the linear correlation between the feature and the outcome. Models consist of supervised machine learning models, such as tree-based methods, probabilistic and non-probabilistic models, and k-nearest neighbors.
TPOT combines all the operators described above and assembles machine learning pipelines from them. When a pipeline is evaluated, the entire dataset is passed through the pipeline operations in a sequential manner---scaling the data, performing feature selection, generating predictions from the features, etc.---until the final pipeline operation is reached. Once the dataset has fully traversed the pipeline, the final predictions are used to evaluate the overall classification accuracy of the pipeline. This accuracy score is used as part of the pipeline's fitness criteria in the GP algorithm.
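The sequential pipeline evaluation described above can be sketched as follows; the operators here are toy stand-ins, not TPOT's actual scikit-learn operators.

```python
def evaluate_pipeline(operators, X, y):
    """Pass the dataset through each operator in sequence; the last one
    must return class predictions, which are scored for accuracy."""
    data = X
    for op in operators[:-1]:
        data = op(data)          # e.g., scaling, feature selection
    preds = operators[-1](data)  # final operator produces predictions
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# Toy example: one 'feature selection' step, then a threshold classifier.
X = [[0.2, 9], [0.8, 1], [0.3, 5], [0.9, 2]]
y = [0, 1, 0, 1]
keep_first = lambda rows: [[r[0]] for r in rows]
classify = lambda rows: [int(r[0] > 0.5) for r in rows]
print(evaluate_pipeline([keep_first, classify], X, y))  # 1.0
```

In TPOT itself this accuracy would be estimated via cross-validation rather than on the full dataset, and the score feeds into the pipeline's fitness in the GP algorithm.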
To automatically generate and optimize these machine learning pipelines, TPOT uses a GP algorithm as implemented in DEAP~\cite{fortin2012deap}, which is a Python package for evolutionary algorithms. Oftentimes, GP builds trees of mathematical functions that seek to optimize toward a specified criteria. In TPOT, GP is used to optimize the number and order of pipeline operators as well as each operator's parameters. TPOT follows a standard GP process for 100 generations: random initialization of the initial population (default population size of 100), evaluation of the population on a supervised classification dataset, selection of the most fit individuals on the Pareto front via NSGA2, and variation through uniform mutation (90\% of all individuals per generation) and one-point crossover (5\% of all individuals per generation). For more information on the TPOT optimization process, see~\cite{OlsonGECCO2016}.
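The generational structure of the optimization can be sketched in plain Python. Everything below is a deliberately simplified stand-in: the fitness function substitutes a toy reward for real cross-validated pipeline accuracy, and elitist truncation stands in for NSGA2 selection over DEAP GP trees.

```python
import random

OPERATORS = ["scale", "select_kbest", "pca", "mdr", "knn", "tree"]

def fitness(pipeline):
    # Toy stand-in for CV accuracy: reward two 'useful' operators and
    # lightly penalize length, mimicking the Pareto trade-off.
    return sum(op in ("select_kbest", "mdr") for op in pipeline) - 0.01 * len(pipeline)

def mutate(pipeline):
    child = list(pipeline)
    child[random.randrange(len(child))] = random.choice(OPERATORS)
    return child

def crossover(a, b):
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

random.seed(0)
pop = [[random.choice(OPERATORS) for _ in range(3)] for _ in range(20)]
for generation in range(30):
    offspring = []
    for ind in pop:
        r = random.random()
        if r < 0.90:                       # 90% uniform mutation
            offspring.append(mutate(ind))
        elif r < 0.95:                     # 5% one-point crossover
            offspring.append(crossover(ind, random.choice(pop)))
        else:
            offspring.append(list(ind))
    # Elitist truncation selection stands in for NSGA2 here.
    pop = sorted(pop + offspring, key=fitness, reverse=True)[:20]

best = max(pop, key=fitness)
```

Even this crude loop reliably assembles pipelines containing the rewarded operators, which conveys why GP can discover useful operator sequences without enumerating them exhaustively.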
\subsection{TPOT-MDR}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/tpot-mdr-pipeline-example.pdf}
\centering
\caption{Example TPOT-MDR pipeline. Each circle represents an operation on the dataset, and each arrow represents the passing of the processed dataset to another operation.}
\label{fig:tpot-mdr-example}
\end{figure*}
TPOT-MDR is a specialized version of TPOT that focuses on genetic analysis studies. It features two new operators that are commonly used in genetic analyses of human disease: (1) Multifactor Dimensionality Reduction (MDR) and (2) an Expert Knowledge Filter (EKF).
MDR is a machine learning method for detecting statistical patterns of epistasis by manipulating the feature space of the dataset to more easily identify interactions within the data~\cite{ritchie2001multifactor, hahn2003multifactor, moore2006flexible, moore2015epistasis}. To summarize, MDR is a constructive induction algorithm that combines two or more features to create a single feature that captures the interaction effects among the features. This constructed feature can be fed back into the dataset as a new feature or used as the final prediction on the dataset.
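A minimal sketch of MDR's constructive induction, assuming binary case/control labels and 0/1/2 genotype encodings; this is an illustrative simplification (no cross-validation, no tie handling) rather than the reference MDR implementation.

```python
from collections import Counter

def mdr_construct(snp_a, snp_b, labels):
    """Combine two SNP features into one binary feature: a genotype
    combination is coded high-risk (1) if its case/control ratio exceeds
    the overall case/control ratio, and low-risk (0) otherwise."""
    cases, controls = Counter(), Counter()
    for a, b, y in zip(snp_a, snp_b, labels):
        (cases if y == 1 else controls)[(a, b)] += 1
    overall = sum(labels) / (len(labels) - sum(labels))
    risk = {combo: int(cases[combo] / max(controls[combo], 1) > overall)
            for combo in set(cases) | set(controls)}
    return [risk.get((a, b), 0) for a, b in zip(snp_a, snp_b)]

# A pure two-way (XOR-like) epistatic pattern with no marginal effects:
snp_a = [0, 0, 1, 1, 0, 0, 1, 1]
snp_b = [0, 1, 0, 1, 0, 1, 0, 1]
labels = [0, 1, 1, 0, 0, 1, 1, 0]
print(mdr_construct(snp_a, snp_b, labels))  # [0, 1, 1, 0, 0, 1, 1, 0]
```

Note that neither SNP predicts the label on its own in this example, yet the constructed feature recovers the outcome exactly, which is precisely the kind of pattern that linear models miss.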
The motivation behind adding the EKF operator was that, oftentimes, {\em a priori} expert knowledge about a biomedical dataset exists: Perhaps the dataset has been analyzed and annotated in previous studies, a database exists with relevant information about the genes in a dataset, or statistical expert knowledge can be derived from the dataset before the study~\cite{moore2010bioinformatics}. This {\em a priori} expert knowledge can be leveraged to guide the TPOT-MDR search algorithm in deciding what genes to include in the final genetic model.
The EKF operator selects an expert knowledge source from the sources provided and selects the \texttt{N} best features according to the expert knowledge source (where \texttt{N} is constrained to [1, 5]). Since the EKF operator is parameterized to select both the expert knowledge source and the number of top features to retain, TPOT-MDR optimizes (1) whether and where in the pipeline to include the EKF and (2) the parameters of the EKF. Multiple EKF operators can be included in a TPOT-MDR pipeline, as shown in Figure~\ref{fig:tpot-mdr-example}.
Other than the MDR and EKF operators, the only other operators included in TPOT-MDR are a standard univariate feature selection method (\texttt{SelectKBest} in scikit-learn~\cite{pedregosa2011scikit}, with an evolvable number of features to retain, \texttt{N}, where \texttt{N} is constrained to [1, 5]) and a \texttt{CombineDFs} operator that combines two feature sets together into a single feature set. These operators can be chained together to form a series of operations acting on a GWAS dataset, as depicted in Figure~\ref{fig:tpot-mdr-example}. Except for the different operator set, the TPOT-MDR optimization process works the same as the original TPOT algorithm as described in Section~\ref{sec:tpot-review}, and was run with a population size of 300 for 300 generations with a per-individual mutation rate of 90\% and per-individual crossover rate of 5\%.
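The EKF's behavior can be sketched as a simple top-N column filter over precomputed expert-knowledge scores; this is an illustrative sketch, not TPOT-MDR's actual implementation.

```python
def expert_knowledge_filter(X, scores, n=5):
    """Keep the n columns of X with the highest expert-knowledge scores
    (n is constrained to [1, 5], as in TPOT-MDR)."""
    n = max(1, min(n, 5))
    keep = sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)[:n]
    keep.sort()  # preserve original column order
    return [[row[j] for j in keep] for row in X]

X = [[1, 2, 3, 4], [5, 6, 7, 8]]
scores = [0.1, 0.9, 0.05, 0.7]  # e.g., precomputed ReliefF importances
print(expert_knowledge_filter(X, scores, n=2))  # [[2, 4], [6, 8]]
```

Because both the knowledge source and `n` are evolvable parameters, the GP search effectively decides which scoring of the SNPs to trust and how aggressively to filter.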
\subsection{Datasets}
We performed an analysis of TPOT-MDR on both simulated datasets and a real-world GWAS dataset. The simulated datasets were generated using GAMETES~\cite{urbanowicz2012gametes}, an open source software package designed to generate GWAS datasets with pure epistatic interactions between the features. We simulated 16 different datasets with specific properties to test the scalability of TPOT-MDR. The simulated datasets included 10, 100, 1,000, or 5,000 single-nucleotide polymorphism (SNP) features, each with 2 predictive features and the remaining features generated randomly using an allele frequency between 0.05 and 0.5. Further, we generated datasets with heritabilities (i.e., noise) of 0.05, 0.1, 0.2, or 0.4, where lower heritability entails more noise in the dataset. Notably, all of the GAMETES datasets had a sample size of 2,000 to ensure a reasonably large dataset size.
By scaling the GAMETES dataset feature spaces from 10 to 5,000, we sought to evaluate how well TPOT-MDR could handle increasingly large numbers of non-predictive features. Similarly, by simulating increasing amounts of noise in the dataset, we sought to evaluate how much noise TPOT-MDR could handle before it failed to detect and model the predictive features. As such, this simulated benchmark provides a detailed view of the strengths and limitations of TPOT-MDR in the GWAS domain.
To validate TPOT-MDR on a real-world dataset, we used a nationally available genetic dataset of 2,286 men of European descent (488 non-aggressive and 687 aggressive cases, 1,111 controls) collected through the Prostate, Lung, Colon, and Ovarian (PLCO) Cancer Screening Trial, a randomized, well-designed, multi-center investigation sponsored and coordinated by the National Cancer Institute (NCI) and their Cancer Genetic Markers of Susceptibility (CGEMS) program. In this study, we focus on prostate cancer aggressiveness as the endpoint, where the prostate cancer is considered aggressive if it was assigned a Gleason score $\geq$ 7 and was in tumor stages III/IV. Between 1993 and 2001, the PLCO Trial recruited men ages 55--74 years to evaluate the effect of screening on disease specific mortality, relative to standard care. All participants signed informed consent documents approved by both the NCI and local institutional review boards. Access to clinical and background data collected through examinations and questionnaires was approved for use by the PLCO. Men were included in the current analysis if they had a baseline PSA measurement before October 1, 2003, completed a baseline questionnaire, returned at least one Annual Study Update (ASU), and had available SNP profile data through the CGEMS data portal\footnote{http://cgems.cancer.gov}. Prior to this study, the CGEMS dataset was filtered to the 219 SNPs associated with biological pathways relevant to aggressive prostate cancer~\cite{Lavender2012Interaction}. We call this dataset the ``CGEMS Prostate Cancer GWAS dataset.''
For all experiments, we used four different statistical expert knowledge sources as input to the EKF operator: the ReliefF~\cite{kononenko1997overcoming}, SURF~\cite{greene2009spatially}, SURF*~\cite{greene2010informative}, and MultiSURF~\cite{granizo2013multiple} algorithms. These algorithms evaluated the entire dataset prior to the experiments and assigned numerical feature importance scores to each feature, which is an indication of how predictive each feature is of the outcome. These numerical scores were provided to the TPOT-MDR EKF operator, and were used to rank the features when filtering the datasets. We computed the statistical expert knowledge sources for all 16 GAMETES datasets and the CGEMS Prostate Cancer GWAS dataset, resulting in 68 unique expert knowledge sources (4 for each experiment).
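The flavor of these Relief-family scorers can be conveyed with a minimal Relief-style sketch for discrete features, using a single nearest hit and nearest miss per sample; the actual ReliefF, SURF, SURF*, and MultiSURF algorithms differ in how they choose and weight neighbors.

```python
def relief_scores(X, y):
    """Minimal Relief-style scorer: reward features that differ between
    nearest misses and agree between nearest hits."""
    n, n_feat = len(X), len(X[0])
    w = [0.0] * n_feat
    dist = lambda a, b: sum(ai != bi for ai, bi in zip(a, b))
    for i in range(n):
        hits = [k for k in range(n) if k != i and y[k] == y[i]]
        misses = [k for k in range(n) if y[k] != y[i]]
        if not hits or not misses:
            continue
        hit = min(hits, key=lambda k: dist(X[i], X[k]))
        miss = min(misses, key=lambda k: dist(X[i], X[k]))
        for j in range(n_feat):
            w[j] += (X[i][j] != X[miss][j]) - (X[i][j] != X[hit][j])
    return w

# Feature 0 tracks the label; feature 1 is noise.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 1, 1]
print(relief_scores(X, y))  # feature 0 scores high, feature 1 low
```

Unlike univariate filters, these neighbor-based scores can elevate features whose effect only appears in combination with others, which is why they work well as EKF sources for epistatic data.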
\subsection{Evaluating TPOT-MDR}
\label{sec:evaluating-tpot-mdr}
We ran four different sets of experiments on the datasets: (1) Extreme Gradient Boosting (XGBoost)\footnote{XGBoost parameters: 500 trees, learning rate 0.0001, and 10 maximum tree depth}~\cite{chen2016xgboost}, (2) Logistic Regression\footnote{The logistic regression regularization parameter was tuned via 10-fold cross validation}~\cite{MachineLearningBook}, (3) TPOT-MDR without the EKF, and (4) TPOT-MDR with the EKF. In Section~\ref{sec:results}, we refer to these experiments as \texttt{XGBoost}, \texttt{Logistic Regression}, \texttt{TPOT (MDR only)}, and \texttt{TPOT (MDR + EKF)}, respectively. For the GAMETES datasets, we additionally compared the four experiments to the baseline of a MDR model constructed with the two known predictive SNP features (called \texttt{MDR (Predictive SNPs)}), which will achieve the maximum possible classification accuracy for the GAMETES datasets without overfitting on the noisy features.
We chose to compare TPOT-MDR to the XGBoost classifier because XGBoost has been established as a widely popular and successful tree-based classifier in the machine learning community, particularly in the Kaggle\footnote{http://www.kaggle.com} machine learning competitions. Further, we compared TPOT-MDR to a logistic regression to demonstrate the capabilities of a standard linear model on GWAS datasets, which will essentially detect only linear associations between the features and the outcome. Finally, we ran TPOT-MDR without the EKF to demonstrate whether the EKF was important for the TPOT-MDR optimization process.
For every dataset and experiment, we performed 30 replicate runs with unique random number seeds (where applicable). This allowed us to evaluate and explore the limits of TPOT-MDR's modeling capabilities on a broad range of GWAS datasets, and demonstrate how it performs in comparison to state-of-the-art machine learning methods. In all cases, the accuracy scores reported are averaged balanced accuracy scores from 10-fold cross-validation, where the balanced accuracy metric is a normalized version of accuracy that accounts for class imbalance by calculating accuracy on a per-class basis then averaging the per-class accuracies~\cite{Velez2007,urbanowicz2015exstracs}. With balanced accuracy, a score of 50\% is equivalent to random guessing, even with imbalanced datasets.
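The balanced accuracy metric described above can be computed directly as the mean of per-class accuracies, as in this short sketch:

```python
def balanced_accuracy(y_true, y_pred):
    """Average of per-class accuracies (recall), so 0.5 corresponds to
    random guessing even on imbalanced binary data."""
    classes = set(y_true)
    per_class = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        per_class.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(per_class) / len(per_class)

# Always predicting the majority class scores 90% raw accuracy here,
# but only 50% balanced accuracy.
y_true = [0] * 9 + [1]
y_pred = [0] * 10
print(balanced_accuracy(y_true, y_pred))  # 0.5
```

This correction matters for the CGEMS dataset, where controls outnumber aggressive cases and raw accuracy would flatter trivial majority-class predictors.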
\section{Results}
\label{sec:results}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/tpot-gametes-comparison-annotated.pdf}
\centering
\caption{Comparison of results on the simulated GAMETES GWAS datasets. Each box plot shows the distribution of averaged 10-fold balanced accuracies for each experiment, where the notches indicate the 95\% confidence interval. A 50\% balanced accuracy is equivalent to random guessing. Each panel within the figure corresponds to differing levels of heritability (i.e., dataset noise) and numbers of features in the simulated datasets, ranging from the easiest dataset on the top right (high heritability, small numbers of features) to the hardest dataset bottom left (low heritability, large numbers of features).\\\\Since some of the experiments had little variance in scores, some box plots are too small to determine their color. For clarity, the box plots represent the following experiments, in order from left to right: TPOT (MDR only), XGBoost, Logistic Regression, TPOT (MDR + EKF), and MDR (Predictive SNPs). These experiments are described in Section~\ref{sec:evaluating-tpot-mdr}.}
\label{fig:gametes-comparison}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.4\textwidth]{figures/cgems-comparison.pdf}
\centering
\caption{Comparison of results on the CGEMS prostate cancer GWAS dataset. Each box plot shows the distribution of averaged 10-fold balanced accuracies for each experiment, where the notches indicate the 95\% confidence interval. A 50\% balanced accuracy is equivalent to random guessing.}
\label{fig:cgems-comparison}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/cgems-mdr-grid.pdf}
\centering
\caption{Classification grid for the best MDR model that TPOT-MDR discovered for the CGEMS prostate cancer GWAS dataset. Each of the three grids correspond to one state of the \texttt{PRKCQ\_rs574512} SNP, whereas the cells within each grid correspond to one combination of states between the \texttt{AKT3\_rs12031994} and \texttt{DIABLO\_rs12870} SNPs. Thus, for example, the light grey upper right cell in the leftmost grid corresponds to \texttt{PRKCQ\_rs574512} = 0, \texttt{AKT3\_rs12031994} = 2, and \texttt{DIABLO\_rs12870} = 0.\\\\Dark grey bars and cells indicate aggressive cases (i.e., at risk of aggressive prostate cancer), whereas light grey bars and cells indicate non-aggressive cases (i.e., lower risk of aggressive prostate cancer). The numbers at the top of each bar indicate the number of aggressive and non-aggressive cases that fall within each cell when the entire CGEMS dataset is sorted into the MDR classification grid. If no data points fall into a cell, the cell is left blank.}
\label{fig:cgems-mdr-grid}
\end{figure*}
\subsection{GAMETES Simulated Datasets}
As shown in Figure~\ref{fig:gametes-comparison}, TPOT-MDR without the EKF rarely finds the best genetic model because it only has a univariate feature selector at its disposal. In contrast, TPOT-MDR with the EKF always discovers the best genetic model except when there are thousands of features and high noise. Even in the cases where TPOT-MDR with the EKF fails to find the best genetic model, it still discovers better genetic models than the other methods in this study.
For a baseline, we compared TPOT-MDR to a tuned logistic regression and XGBoost, as described in Section~\ref{sec:evaluating-tpot-mdr}. Figure~\ref{fig:gametes-comparison} shows that logistic regression consistently fails to find a good model and barely performs better than chance in even the easiest GAMETES datasets. This finding demonstrates a key flaw in using linear models for GWAS: Linear models will not detect higher-order interactions within the dataset unless the interactions are explicitly modeled. Similarly, XGBoost can sometimes find a good model for GWAS datasets if the dataset is heavily filtered beforehand (e.g., to 10s of features), but rapidly degrades in performance as more noisy features are added to the dataset.
\subsection{CGEMS Prostate Cancer Dataset}
The CGEMS prostate cancer GWAS dataset has 219 SNPs, 1,175 samples, and likely falls into the ``lower heritability'' spectrum of the GAMETES datasets. Thus, we would expect to see roughly similar performance on the CGEMS dataset as we saw in the GAMETES datasets with 100 features and 0.1 or 0.05 heritability in Figure~\ref{fig:gametes-comparison}.
As predicted, Figure~\ref{fig:cgems-comparison} shows that XGBoost and logistic regression fail to discover the higher-order interactions within the real-world CGEMS dataset. In contrast, TPOT-MDR with and without the EKF managed to consistently find predictive genetic models for the CGEMS dataset. In particular, TPOT-MDR with the EKF found the best genetic models, largely because the expert knowledge sources (ReliefF, SURF, etc.) contained information about the higher-order interactions between the SNPs that TPOT-MDR was able to harness.
To better understand the genetic models that TPOT-MDR discovered, we analyzed the final model from the highest-scoring TPOT-MDR experiment and visualized the pattern of interactions from the MDR model in Figure~\ref{fig:cgems-mdr-grid}. We see patterns suggestive of statistical epistasis within the model, for example, in the leftmost grid a patient's aggressive (dark grey cells) or non-aggressive (light grey cells) status can only be determined by a combination of \texttt{AKT3\_rs12031994} and \texttt{DIABLO\_rs12870}. Similarly, the pattern of aggressive vs. non-aggressive status between \texttt{AKT3\_rs12031994} and \texttt{DIABLO\_rs12870} varies depending on the state of the third SNP, \texttt{PRKCQ\_rs574512}, which suggests a statistical three-way epistatic interaction between the SNPs. If there were no higher-order interactions between the SNPs, then we would expect a patient's aggressive vs. non-aggressive status to vary independently between the SNPs, i.e., we would expect to see horizontal and vertical bands of aggressive or non-aggressive status within the grids. As previous studies have suggested links between these SNPs and aggressive prostate cancer~\cite{Lavender2012Interaction}, we can use these TPOT-MDR findings to further elucidate the SNPs' higher-order interactions and involvement in the development of aggressive prostate cancer in men of European descent.
\section{Discussion}
In this paper, we introduced a new method and tool, TPOT-MDR, for automating the analysis of complex diseases in genome-wide association studies (GWAS). We developed this tool to aid bioinformaticians so they can more efficiently process and analyze the ever-growing databases of biomedical data. To that end, TPOT-MDR is designed to optimize a series of machine learning operations that are commonly used in biomedical studies, such as filtering the features using expert knowledge sources, combining information from different expert knowledge sources, and modeling the higher-order interactions of the features using Multifactor Dimensionality Reduction (MDR) to predict a patient's outcome. Previously, bioinformaticians would typically perform and refine these operations by hand, whereas now TPOT-MDR can relieve the bioinformatician of these tedious duties so they can focus on more challenging tasks.
Even though this paper focuses on the application of TPOT-MDR to GWAS datasets, we note that TPOT-MDR is a general machine learning tool that will work with any dataset that has categorical features and a binary outcome. TPOT-MDR has been released as a free, open source Python tool and is available on GitHub\footnote{https://github.com/rhiever/tpot/tree/tpot-mdr}.
In Section~\ref{sec:results}, we evaluated TPOT-MDR on a series of simulated and real-world GWAS datasets and found that TPOT-MDR outperforms linear models and XGBoost across all of the datasets (Figures~\ref{fig:gametes-comparison} and~\ref{fig:cgems-comparison}). These findings are important for several reasons. For one, we demonstrated that simple linear models are ill-suited for the analysis of GWAS datasets owing to their inability to model higher-order interactions within the dataset. We also demonstrated that state-of-the-art tree-based machine learning methods---typically thought to be effective at modeling higher-order feature interactions---are similarly ill-suited for modeling GWAS datasets with large numbers of features. Finally, we highlighted the importance of harnessing {\em a priori} expert knowledge to filter GWAS datasets prior to the modeling step, which could aid state-of-the-art machine learning algorithms such as XGBoost in eliminating extraneous features.
Although the results in Section~\ref{sec:results} suggest that TPOT-MDR is superior to the compared methods on every dataset we used, there are some drawbacks to TPOT-MDR that must be considered. For one, linear models and XGBoost are orders of magnitude faster to train and evaluate than TPOT-MDR. As TPOT-MDR uses genetic programming to optimize the series of filtering and modeling operations on the dataset, a single TPOT-MDR run took roughly 3 hours on the CGEMS dataset, whereas XGBoost and logistic regression each took less than a minute. Given that many GWAS datasets often have thousands to hundreds of thousands of SNP features (compared to the 219 in CGEMS), TPOT-MDR will require more work to improve its run time scalability to larger GWAS datasets. Furthermore, TPOT-MDR is highly dependent on its expert knowledge sources. In these experiments, we used expert knowledge sources that specialize in detecting higher-order epistatic interactions, which proved to be critical in both the simulated and real world datasets. If TPOT-MDR is provided with less informative expert knowledge sources, then it will likely perform worse, which we can observe in Figures~\ref{fig:gametes-comparison} and~\ref{fig:cgems-comparison} (TPOT-MDR without EKF vs. TPOT-MDR with EKF).
As shown in Figure~\ref{fig:gametes-comparison}, XGBoost can sometimes model higher-order interactions when the dataset is heavily filtered beforehand. However, the resulting XGBoost model is not nearly as interpretable as with TPOT-MDR. TPOT-MDR produces a model that we can inspect to study the pattern of feature interactions within the dataset (Figure~\ref{fig:cgems-mdr-grid}), whereas XGBoost provides only a complex ensemble of decision trees. This is an important consideration when building machine learning tools for bioinformatics: More often than not, bioinformaticians do not need a black box model that achieves high prediction accuracy on a real-world dataset. Instead, bioinformaticians seek to build a model that can be used as a microscope for understanding the underlying biology of the system they are modeling. In this regard, the models generated by TPOT-MDR can be invaluable for elucidating the higher-order interactions that are often present in complex biological systems.
In conclusion, TPOT-MDR is a promising step forward in using evolutionary algorithms to automate the design of machine learning workflows for bioinformaticians. We believe that evolutionary algorithms (EAs) are poised to excel in the automated machine learning domain, and specialized tools such as TPOT-MDR highlight the strengths of EAs by showing how easily EA solution representations can be adapted to a particular domain.
\section{Acknowledgements}
We thank the Penn Medicine Academic Computing Services for the use of their computing resources. This work was supported by National Institutes of Health grant AI116794.
\newpage
\bibliographystyle{ACM-Reference-Format}
| proofpile-arXiv_065-7631 | {
"file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz"
} |
\section{Introduction}\label{sec:intro}
Bayesian auction design has been an extremely flourishing area in game theory since the seminal work of \cite{myerson1981optimal} and \cite{cremer1988full}.
One of the main focuses is to generate revenue by selling $m$ heterogeneous items to $n$ players.
Each player has, as his private information, a valuation function describing how much he values each subset of the items.
The valuations are drawn from prior distributions. An important assumption in Bayesian mechanism design is that the distributions are commonly known by the seller and the players ---the {\em common prior} assumption.
However, as pointed out by another seminal work \cite{wilson1985game},
such common knowledge is ``rarely present in experiments and never in practice'',
and ``only by repeated weakening of common knowledge
assumptions will the theory approximate reality.''%
\footnote{As also righteously put in \cite{wilson1985game}, ``game theory as we presently know it cannot proceed without the fulcrum of common knowledge.'' Thus, rather than rejecting the common prior assumption, the author of \cite{wilson1985game} raised the removal of this assumption as an important challenge for future studies.}
In an effort to remove this assumption,
\cite{azar2012crowdsourced} first considered {\em crowdsourced Bayesian mechanisms},
which only require that each player privately know the prior distribution (or a refinement of it).
The seller can be ignorant and the prior may not be
common knowledge among the players.
Also without relying on a common prior,
{\em dominant-strategy truthful} (DST) mechanisms
\cite{myerson1981optimal, ronen2001approximating, chawla2010multi, kleinberg2012matroid, yao2015n, cai2016duality, cai2017simple}
only require that the seller know the prior distribution.
However, as each player's values reflect his private
evaluation about the items based on the information he has,
it is still a demanding requirement that some agent,
whether the seller or a player,
individually possesses good knowledge about {\em all} players' value distributions.
As our main conceptual contribution, in this paper we consider a framework
for auctions where knowledge about the players' value distributions is
{\em arbitrarily scattered} among the players and the seller.
The seller can no longer base his mechanism
on a single agent's knowledge, and must really {\em aggregate} pieces of
information from all players in order to gain a good understanding about the distributions.
More precisely, we focus on unit-demand auctions and additive auctions ---two auction classes widely studied
in the literature \cite{chawla2007algorithmic, hart2012approximate}.
A player's valuation function is specified by $m$ values, one for each item.
For each player~$i$ and item $j$, the value $v_{ij}$ is independently drawn from a distribution~$\mathcal{D}_{ij}$.
Each player privately knows his own values and some (or none) of the distributions of some other players for some items,
like long-time competitors in the same market.
There is no constraint about who knows which distributions.
It is possible that nobody knows how the whole valuation profile is distributed,
and some value distributions are not known by anybody.
The seller may also know some of the distributions, but he does not know which player knows what.
We introduce directed {\em knowledge graphs} to succinctly describe
the players' knowledge. Each player knows the distributions of his neighbors for specific items.
Different items' knowledge graphs may be totally different, and the structures of the graphs
are not known by anybody.
Interestingly, the intuition behind the information setting that we formalize
has long been considered by philosophers.
In \cite{james1979some},
the author discussed a world where ``everything in the world might be known by somebody, yet not everything by the same knower.''
Under such an unstructured information setting, we
are able to design mechanisms that aggregate all players' (and the seller's)
knowledge and generate good revenue compared with the optimal Bayesian revenue when there is a common prior.
Because our mechanisms literally ``crowdsource'' information from every player to
fill in the jigsaw puzzle of the value distributions,
we continue referring to them as {\em crowdsourced Bayesian mechanisms.}
However, compared with the original notion put forward in \cite{azar2012crowdsourced},
this notion now has a much broader and richer meaning.
We formalize our model in Section~\ref{sec:model}.
Below let us briefly introduce our main results.
\subsection{Main Results}\label{subsec:results}
\paragraph{Crowdsourcing under arbitrary knowledge graphs.}
Our goal is to design {\em 2-step dominant strategy truthful} (2-DST) \cite{azar2012crowdsourced}
crowdsourced mechanisms whose expected revenue approximates
that of the optimal Bayesian incentive compatible (BIC) mechanism, denoted by~$OPT$.%
\footnote{A Bayesian mechanism is BIC if it is a Bayesian Nash equilibrium for all players to report their true values.}
In order for the seller to aggregate the players' knowledge about the distributions,
it is natural for the mechanism to ask each player to report his knowledge to the seller, together with his own values.
A 2-DST mechanism is such that,
(1) no matter what knowledge the players may report about each other, it is {\em dominant} for each player to report his true values; and (2) given that all players report their true values,
it is {\em dominant} for each player to report his true knowledge about others.
This is an extension of dominant-strategy truthfulness and a
natural solution concept
based on mutual knowledge of rationality.
When the players' knowledge is completely unconstrained,
some distributions may not be known by anybody.
In this case, it is easy to see that no crowdsourced Bayesian mechanism
can be a bounded approximation to $OPT$.
Accordingly, we introduce
a new benchmark:
the
optimal BIC mechanism
applied to the players and items whose distributions are indeed known by somebody, denoted by $OPT_K$.
Note that this is a very natural benchmark when considering players with limited knowledge, and if every distribution is known by somebody then it
is exactly $OPT$.
We have the following theorems, formalized in Section~\ref{sec:k=0}.
\vspace{5pt}
\noindent
{\bf Theorems \ref{thm:unit} and \ref{thm:newbvcg}.} (sketched) {\em
For any knowledge graph,
there is a 2-DST crowdsourced Bayesian mechanism for unit-demand auctions
with revenue
$\geq \frac{OPT_K}{96}$,
and such a mechanism for
additive auctions
with revenue
$\geq \frac{OPT_K}{70}$.
}
\smallskip
To prove Theorem~\ref{thm:unit},
we actually prove a general theorem that converts a large class of Bayesian mechanisms into crowdsourced Bayesian mechanisms.
\vspace{5pt}
\noindent
{\bf Theorem \ref{col:copies}.} (informally stated) {\em Any Bayesian mechanism for unit-demand auctions that is a good approximation in the COPIES setting can be crowdsourced.}
\smallskip
To prove Theorem~\ref{thm:newbvcg},
we have developed a novel approach for using the {\em adjusted revenue}~\cite{yao2015n}
and proved the following technical lemma.
\vspace{5pt}
\noindent
{\bf Lemma \ref{lem:csa:C}.} (informally stated) {\em
The optimal adjusted revenue can be crowdsourced.}
\smallskip
Indeed, although the adjusted revenue has been very helpful in Bayesian auctions,
it was not obvious a priori that it would apply here;
we found an interesting and highly non-trivial way of using it to
analyze crowdsourced Bayesian mechanisms.
\vspace{-5pt}
\paragraph{Crowdsourcing when everything is known by somebody.}
When the amount of knowledge in the system increases, the seller may hope to generate more revenue by aggregating the players' knowledge.
Indeed, if every distribution is known by somebody, then the benchmark $OPT_K$ is exactly $OPT$.
Interestingly, we show that the revenue that can be generated by crowdsourced mechanisms increases gracefully
together with the amount of knowledge.
More precisely, we have the following theorems, formalized in
Section~\ref{sec:partial}.
\smallskip
\noindent
{\bf Theorems \ref{thm:unit-k} and \ref{thm:additive}.} (sketched) {\em
$\forall k\in [n-1]$, when each distribution is known by at least $k$ players,
there is a 2-DST crowdsourced Bayesian mechanism for unit-demand auctions
with revenue $\geq \frac{\tau_k}{24}\cdot OPT$,
and such a mechanism for additive auctions
with revenue $\geq \max\{\frac{1}{11}, \frac{\tau_k}{6+2\tau_k}\} OPT$, where $\tau_k = \frac{k}{(k+1)^{\frac{k+1}{k}}}$.
}
\smallskip
Note that $\tau_1 = \frac{1}{4}$ and $\tau_k \rightarrow 1$
when $k$ gets larger. Also, $k$ can be much smaller than $n$ for $\tau_k$ to be close to 1.
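To see how quickly the factor $\tau_k$ approaches 1, here is a short numerical illustration (not part of any mechanism):

```python
def tau(k: int) -> float:
    # tau_k = k / (k+1)^((k+1)/k), the approximation factor above
    return k / (k + 1) ** ((k + 1) / k)

# tau_1 = 1/4, and tau_k approaches 1 as k grows
print([round(tau(k), 3) for k in (1, 2, 10, 100)])
# -> [0.25, 0.385, 0.715, 0.945]
```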
Finally, by exploring the combinatorial structure of the knowledge graph,
we have the following theorem for single-good auctions.
\smallskip
\noindent
{\bf Theorem \ref{thm:myerson}.} (sketched) {\em When the knowledge graph is 2-connected,%
\footnote{A directed graph is 2-connected if for any node $i$, the graph with $i$ and all adjacent edges removed is still strongly connected. For knowledge graphs, this means no player is an ``information hub'', without whom the players will split into two parts such that one part has no information about the other.}
there is a 2-DST crowdsourced Bayesian mechanism for single-good auctions with revenue $\geq (1-\frac{1}{n})OPT$.
}
\subsection{Discussions}
\paragraph{The use of scoring rules.}
Since crowdsourced mechanisms elicit the players' private knowledge about each other's value distributions,
it is not surprising that we will use {\em proper scoring rules} \cite{brier1950verification, gneiting2007strictly} to reward the players for their reported knowledge, as
in information elicitation \cite{miller2005eliciting, radanovic2013robust}.
However, the role of scoring rules here is less central than in information elicitation.
Indeed, the difficulties
in designing crowdsourced mechanisms are to guarantee that,
even without rewarding the players for their knowledge,
(1) it is dominant for each player to report his true values,
(2) reporting his true knowledge {\em never hurts him}, and
(3) the resulting revenue approximates the desired benchmark.
Following the widely adopted convention that a player tells the truth
as long as lying does not strictly benefit him, we are done when all three properties hold.
Accordingly, in Sections \ref{sec:k=0} and \ref{sec:partial}
we focus on designing crowdsourced mechanisms without rewarding the players for their knowledge.
Scoring rules are used later solely to break the utility-ties and make it {\em strictly better} for a player to
report his true knowledge, and they are used in a
straightforward manner.
In Appendix \ref{sec:buyknowledge}, we demonstrate how to add
scoring rules to our mechanisms.
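For concreteness, any strictly proper scoring rule suffices for this tie-breaking role; the quadratic (Brier) rule is a standard choice. The following sketch is a generic illustration (the data layout and names are ours, not the exact reward from the appendix):

```python
def brier_reward(reported, outcome):
    """Quadratic (Brier) scoring rule for a reported discrete
    distribution {value: probability}. It is strictly proper: the
    expected reward is uniquely maximized by reporting the true
    distribution of the realized outcome."""
    return 2.0 * reported.get(outcome, 0.0) - sum(p * p for p in reported.values())

# Expected reward when the true distribution is {0: 0.5, 1: 0.5}:
true_dist = {0: 0.5, 1: 0.5}

def expected(report):
    return sum(p * brier_reward(report, x) for x, p in true_dist.items())

truthful = expected({0: 0.5, 1: 0.5})
lie = expected({0: 0.9, 1: 0.1})
assert truthful > lie  # truth-telling strictly maximizes expected reward
```

Since the other players' reported values are distributed according to the true distributions when they bid truthfully, adding such a reward makes reporting true knowledge strictly better.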
\paragraph{Extensions of our results.}
In our main results, the seller asks the players to report
the distributions in their entirety, without being concerned with the communication complexity for doing so.
This allows us to focus on the main difficulties in aggregating
the players' knowledge.
In Appendix~\ref{app:efficient}, we show how to modify our mechanisms
so that the players only report a small amount of information about the distributions.
Furthermore, in the main body of this paper we consider auction settings where a player
$i$'s knowledge about another player $i'$ for an item $j$ is exactly the prior distribution
$\mathcal{D}_{i'j}$. This simplifies the description of the knowledge graphs.
In Appendix~\ref{app:refine}, we consider settings where a player may observe private signals about
other players and can further refine the prior.
\paragraph{Future directions.}
In this paper we have strictly weakened the knowledge assumption about the seller and the players
in Bayesian auctions,
and provided important insights about
the relationship between the amount of knowledge in the system and
the achievable revenue.
Since the common prior assumption implies that every player has correct and exact knowledge about all distributions,
in our main results
we do not consider scenarios where players have ``insider'' knowledge.
If the insider knowledge is correct (i.e., is a refinement of the prior),
then our mechanisms' revenue
increases; see Appendix~\ref{app:refine}.
If the insider knowledge may be wrong, the problem is closely related to {\em robust mechanism design}~\cite{bergemann2012robust}, which is a very important topic in game theory but not the focus of this paper.
Still, how to aggregate even the incorrect information that the players may have about each other's distributions
is a very interesting question for future studies.
Designing (approximately) optimal Bayesian mechanisms has been of great interest to computer scientists.
A common prior is clean to work with, but it is well understood that this is a strong assumption.
In some sense,
our results show that this assumption is without much loss of generality
if one is willing to give up some portion of the revenue, and
this portion {\em shrinks smoothly} as the amount of knowledge increases.
An important future direction is to ``crowdsource''
not only DST but also BIC mechanisms.
For example, the BIC mechanisms in \cite{cremer1988full, cai2012algorithmic, cai2012optimal}
are optimal in their own settings, and
it is unclear how to convert them to crowdsourced Bayesian mechanisms.
As pointed out in \cite{cremer1988full},
the common prior assumption seems to be crucial for their mechanism.
\vspace{-5pt}
\subsection{Related Work}\label{sec:related}
\paragraph {Bayesian auction design.}
In his seminal work \cite{myerson1981optimal}, Myerson introduced the first optimal Bayesian mechanism for single-good auctions with independent values,
which also applies to many single-parameter settings \cite{archer2001truthful}.
Since then, there has been a huge literature on designing (approximately) optimal Bayesian mechanisms that are either DST or BIC, such as
\cite{cremer1988full, riley1981optimal, ronen2001approximating, chawla2007algorithmic, hartline2009simple, dobzinski2011optimal, chawla2014approximate}; see \cite{hartline2007profit} for an introduction to this literature.
Mechanisms for multi-parameter settings have been constructed recently.
In \cite{cai2012algorithmic, cai2012optimal}, the authors characterize optimal BIC mechanisms for combinatorial auctions.
For unit-demand auctions, \cite{chawla2010power, chawla2010multi, kleinberg2012matroid, cai2016duality}
construct DST Bayesian mechanisms that are constant approximations.
For additive auctions,
\cite{hart2012approximate, li2013revenue, babaioff2014simple, yao2015n, cai2016duality} provide logarithmic or constant approximations under different conditions.
Moreover, \cite{rubinstein2015simple} and \cite{cai2017simple} construct Bayesian mechanisms for sub-additive valuations.
\vspace{-10pt}
\paragraph{Removing the common prior assumption.}
Following \cite{wilson1985game},
a lot of effort has been made in the literature trying to remove the common prior assumption in game theory \cite{baliga2003market, segal2003optimal, goldberg2006competitive}.
Again, in DST Bayesian mechanisms it suffices to assume that the seller knows the prior distribution.
Many works in this direction try to further
weaken the assumption and
consider settings where {\em the seller may be totally ignorant.}
In {\em prior-free mechanisms} \cite{hartline2008optimal, devanur2009limited}, the distribution is unknown and the seller learns about it from the values of randomly selected players.
In \cite{cole2014sample, dhangwatnotai2015revenue, huang2015making, zhiyi2016side}, the seller observes independent samples from the distribution before the auction begins.
In \cite{chen2013mechanism, chen2015tight}, the players have arbitrary possibilistic belief hierarchies
about each other.
In \cite{chen2017query}, the seller can only access the distributions via specific oracle queries.
In {\em robust mechanism design} \cite{bergemann2012robust, artemov2013robust, bergemann2015informational},
the players have arbitrary probabilistic belief hierarchies.
Finally, as mentioned, an earlier work of the first author of this paper considers a preliminary model for crowdsourced Bayesian auctions~\cite{azar2012crowdsourced}:
rather than allowing the distributions to be arbitrarily scattered among the players and the seller, each player privately knows {\em all} the distributions (or their refinements).
It is a very special case of our model.
Indeed, both the insights and the techniques in this paper are very different.
\vspace{-10pt}
\paragraph{Information elicitation.}
Using proper scoring rules \cite{brier1950verification, cooke1991experts, gneiting2007strictly} as an important tool, a lot of effort has been devoted to {\em information elicitation} \cite{prelec2004bayesian, miller2005eliciting, radanovic2013robust, zhang2014elicitability}.
Here each player reports his private information and his knowledge about the distribution of the other players' private information, and is rewarded based on everybody's report.
Different from auctions, there are no allocations or prices for items, and a player's utility is equal to his reward. Bayesian Nash equilibrium is a widely used solution concept in this literature.
Many studies here rely on a common prior, but mechanisms without this assumption have also been investigated \cite{witkowski2012peer}.
In some sense, our work can be considered as information elicitation in auctions without a common prior.
Note that in information elicitation the players do not have any cost for revealing their knowledge. It would be interesting to include such costs in (their and) our model, to see how the mechanisms will change accordingly.
\vspace{-5pt}
\section{Our Model for Crowdsourced Bayesian Auctions}\label{sec:model}
In this work, we focus on multi-item auctions with $n$ players and $m$ items. A player $i$'s value for an item~$j$, $v_{ij}$, is independently drawn from a distribution $\mathcal{D}_{ij}$.
Let $v_i = (v_{i1},\dots, v_{im})$, $\mathcal{D}_i = \mathcal{D}_{i1}\times\cdots\times\mathcal{D}_{im}$ and $\mathcal{D} = \mathcal{D}_1\times\cdots\times\mathcal{D}_n$.
Player $i$'s value for a subset $S$ of items is $\max_{j\in S} v_{ij}$ in {\em unit-demand} auctions, and is $\sum_{j\in S} v_{ij}$ in {\em additive} auctions.
Our settings are downward closed \cite{hartline2009simple},
the players' utilities (denoted by $u_i$) are quasi-linear, and the players are risk-neutral.
\vspace{-10pt}
\paragraph*{Knowledge graphs.}
It is illustrative to model the players' knowledge graphically.%
\footnote{We could have defined the players' knowledge
using the standard notion in epistemic game theory
\cite{harsanyi1967games, aumann1976agreeing, fhmvbook}:
roughly speaking, the state space consists of all possible distributions of the valuation profile, and
player~$i$ knows $\mathcal{D}_{i'j}$ if he is in an information set
where all distributions have the $(i',j)$-th component equal to $\mathcal{D}_{i'j}$.
However, the knowledge graph is a more succinct representation and is enough for the purpose of this work.}
More precisely, there is a vector of {\em knowledge graphs}, $G = (G_1,\dots, G_m)$, one for each item. Each $G_j$ is a directed graph with
$n$ nodes, one for each player,
and
edge $(i, i')$ is in $G_j$ if and only if player $i$ knows~$\mathcal{D}_{i'j}$.
A knowledge graph does not have self-loops: a player $i$'s knowledge about his own
value distributions is not considered.%
\footnote{Player $i$ may know his own distributions,
but this knowledge is neither used by our mechanisms nor affecting $i$'s strategies.}
There is no constraint about the structure of the knowledge graphs: the same player's distributions for different items may be known by
different players, and different players' distributions for the same item may also be known by different players.
Each player knows his own out-going edges, and neither the players nor the seller knows the whole graph.
We measure the amount of knowledge in the system by the number of players knowing each distribution. More precisely,
for any $k\in \{0,1,\dots,n-1\}$, a knowledge graph is {\em $k$-bounded} if each node has in-degree at least $k$: a player's distribution is known by at least $k$ other players.
The vector $G$ is $k$-bounded if all the knowledge graphs are $k$-bounded.
Note that any knowledge graph is at least $0$-bounded,
and ``everything is known by somebody'' when $k\geq 1$.
The common prior assumption implies that all knowledge graphs
are complete directed graphs, or $(n-1)$-bounded, which is the strongest assumption in our model.
The seller's knowledge can be naturally incorporated into the knowledge graphs by
considering him as a special ``player 0''.
All our mechanisms can easily utilize the seller's knowledge, and we will not further discuss this issue.
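The $k$-boundedness condition can be sketched as follows (a minimal illustration; the edge-set representation is our own choice):

```python
def is_k_bounded(graphs, n, k):
    """A knowledge-graph vector G = (G_1, ..., G_m) is k-bounded if, in
    every item's graph, each node has in-degree at least k, i.e. each
    distribution is known by at least k other players. Each G_j is a
    set of directed edges (i, i2), meaning player i knows D_{i2, j}."""
    for G_j in graphs:
        for node in range(n):
            if sum(1 for (_, head) in G_j if head == node) < k:
                return False
    return True

# Two players, one item: only player 0 knows player 1's distribution.
G = [{(0, 1)}]
print(is_k_bounded(G, 2, 0))  # True: every graph is at least 0-bounded
print(is_k_bounded(G, 2, 1))  # False: nobody knows player 0's distribution
```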
\vspace{-10pt}
\paragraph*{Crowdsourced Bayesian mechanisms.}
Given the set $N = \{1,\dots, n\}$ of players, the set $M = \{1,\dots, m\}$ of items, and the distribution~$\mathcal{D}$, let $\hat{\mathcal{I}} = (N, M, \mathcal{D})$ be the Bayesian auction instance and
$\mathcal{I} = (N, M, \mathcal{D}, G)$ a corresponding crowdsourced Bayesian instance, where $G$ is a knowledge graph vector.
Different from Bayesian mechanisms, which are given the distribution $\mathcal{D}$ as input,
a crowdsourced Bayesian mechanism has neither $\mathcal{D}$ nor $G$ as input.
Instead, it asks each player $i$ to report a valuation $b_i = (b_{i1},\dots, b_{im})$ and
a {\em knowledge} $K_i = \times_{i'\neq i, j\in [m]} \mathcal{D}^i_{i'j}$ ---a distribution for the valuation subprofile $v_{-i}$. $K_i$ may contain a special symbol ``$\bot$'' at some places,
indicating that $i$ does not know the corresponding distributions.
$K_i$ is $i$'s {\em true knowledge} if $\mathcal{D}^i_{i'j} = \mathcal{D}_{i'j}$ whenever $(i, i')\in G_j$, and $\mathcal{D}^i_{i'j} = \bot$ otherwise.
A crowdsourced mechanism maps a strategy profile $(b_i, K_i)_{i\in [n]}$ to an allocation and a price profile, which may be randomized.
To distinguish whether a mechanism $\mathcal{M}$ is a Bayesian or crowdsourced Bayesian mechanism, we explicitly write $\mathcal{M}(\hat{\mathcal{I}})$ or $\mathcal{M}(\mathcal{I})$.
Again note that the latter does not mean $\mathcal{M}$ has $\mathcal{D}$ or $G$ as input.
The (expected) revenue of a mechanism $\mathcal{M}$ is denoted by $Rev(\mathcal{M})$, and sometimes by $\mathbb{E}_{\mathcal{D}} Rev(\mathcal{M})$ to emphasize the distribution.
A crowdsourced Bayesian mechanism is {\em 2-step dominant strategy truthful} (2-DST) if
\begin{itemize}
\item[(1)] For any player $i$, true valuation $v_i$,
valuation $b_i$, knowledge $K_i$, and strategy subprofile $s_{-i} = (b_j, K_j)_{j\neq i}$ of the other players,
$u_i((v_i, K_i), s_{-i}) \geq u_i((b_i, K_i), s_{-i})$.
\item[(2)]
For any player $i$, true valuation $v_i$, true knowledge $K_i$,
knowledge $K'_i$, and knowledge subprofile $K'_{-i}(v_{-i}) = (K'_j(v_j))_{j\neq i}$ of the other players,
where each $K'_j(v_j)$ is a function of player $j$'s true valuation~$v_j$,
$\mathbb{E}_{v_{-i}\sim \mathcal{D}_{-i}} u_i((v_i, K_i), (v_{-i}, K'_{-i}(v_{-i}))) \geq \mathbb{E}_{v_{-i}\sim \mathcal{D}_{-i}} u_i((v_i, K'_i), (v_{-i}, K'_{-i}(v_{-i})))$.%
\footnote{In Appendix \ref{sec:buyknowledge}, we introduce scoring rules to our mechanisms
so that the inequality is strict whenever $K'_i\neq K_i$.}
\end{itemize}
\vspace{-5pt}
\section{Crowdsourcing Under Arbitrary Knowledge Graphs}
\label{sec:k=0}
\subsection{Our Knowledge-Based Revenue Benchmark}
When the knowledge graphs can be totally arbitrary and may not even be 1-bounded,
some distributions may not be known by anybody.
It is not hard to see that in this case, no crowdsourced Bayesian mechanism can be a bounded approximation
to $OPT$.
Indeed, if all but one of the players' value distributions are constantly 0, and
the only non-zero distribution, denoted by $\mathcal{D}_{ij}$, is not known by anybody,
then a Bayesian mechanism can find the optimal reserve price based on $\mathcal{D}_{ij}$,
while a crowdsourced Bayesian mechanism can only set the price for player $i$ based on the reported values
of the other players, which are all 0.
Accordingly, for arbitrary knowledge graphs,
we define a natural revenue benchmark: the optimal Bayesian revenue {\em on players and items for which the
distributions are known in the crowdsourced setting.}
More precisely,
let $\hat{\mathcal{I}} = (N, M, \mathcal{D})$ be a Bayesian instance and $\mathcal{I} = (N, M, \mathcal{D}, G)$
a corresponding crowdsourced Bayesian instance.
Let $\mathcal{D}'=\times_{i\in N, j\in M} \mathcal{D}'_{ij}$ be such that $\mathcal{D}'_{ij} = \mathcal{D}_{ij}$ if
there exists a player $i'$ with $(i', i)\in G_j$, and $\mathcal{D}'_{ij}$ is
constantly 0 otherwise.
We refer to $\mathcal{D}'$ as {\em $\mathcal{D}$ projected on $G$}.
Letting $\mathcal{I}' = (N, M, \mathcal{D}')$ be the resulting Bayesian instance, the {\em knowledge-based} revenue benchmark is $OPT_K(\mathcal{I}) \triangleq OPT(\mathcal{I}')$.
This is a demanding benchmark in crowdsourced settings:
it takes into consideration the knowledge of {\em all} players, no matter who knows what.
When everything is known by somebody, even if $G$ is only 1-bounded, we have $\mathcal{I}' = \hat{\mathcal{I}}$ and $OPT_K(\mathcal{I}) = OPT(\hat{\mathcal{I}})$.
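The projection of $\mathcal{D}$ on $G$ can be sketched as follows (distributions are represented abstractly as dictionaries; the layout is ours, for illustration only):

```python
def project_on_G(D, graphs, n, m):
    """D projected on G: keep D[i][j] iff some edge (i2, i) is in
    G_j (i.e., some other player knows the distribution); otherwise
    replace it with the point mass at 0."""
    ZERO = {0.0: 1.0}
    return [[D[i][j] if any(head == i for (_, head) in graphs[j]) else ZERO
             for j in range(m)] for i in range(n)]

# Two players, one item; only player 1's distribution is known (by player 0).
D = [[{1.0: 1.0}], [{5.0: 0.5, 0.0: 0.5}]]
G = [{(0, 1)}]
print(project_on_G(D, G, 2, 1))
# Player 0's unknown distribution is zeroed out; player 1's is kept.
```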
Below we show how to approximate $OPT_K$ in unit-demand auctions and additive auctions, starting with the former, where
our mechanisms are easier to describe and analyze.
\subsection{Unit-Demand Auctions}\label{subsec:unit}
For unit-demand auctions, sequential post-price truthful Bayesian mechanisms have been constructed by \cite{chawla2010multi, kleinberg2012matroid}.
If the seller directly asks the players to report both their values and knowledge in these mechanisms, and uses the reported distributions
to set the prices,
then a player may want to withhold his knowledge about the other players.
By doing so, a player may prevent
the seller from selling the items to the others, so that the items are still available when it is his turn to buy.
An immediate idea is to partition the players into two groups: a set of {\em reporters}
who will not receive any item and are only asked to report their knowledge;
and a set of {\em potential buyers} whose knowledge is never used.
Because of the structures of the knowledge graphs and the partition of the players,
the reported knowledge may not cover a potential buyer $i$'s value distributions on all items,
and the seller will not sell to $i$ the items for which his distributions are not reported.
Thus, the technical part is to prove that
the seller generates a good revenue even though the players' knowledge is only partially recovered.
Our mechanism $\mathcal{M}_{CSUD}$ is simple and intuitive, as defined in Mechanism~\ref{alg:generalud},
where mechanism $\mathcal{M}_{UD}$ is the Bayesian mechanism in \cite{kleinberg2012matroid}.
This is not a black-box reduction from arbitrary Bayesian mechanisms:
instead, we prove a {\em projection lemma}
that allows such a reduction from an important class of Bayesian mechanisms.
We have the following theorem,
proved in Appendix~\ref{app:unknown:unit}.
\vspace{-5pt}
\begin{algorithm}[htbp]
\floatname{algorithm}{Mechanism}
\caption{\hspace{-4pt} $\mathcal{M}_{CSUD}$}
\label{alg:generalud}
\begin{algorithmic}[1]
\STATE\label{step1} Each player $i$ reports to the seller a valuation $b_i = (b_{ij})_{j\in M}$ and a knowledge $K_i = (\mathcal{D}^i_{i'j})_{i'\neq i, j\in M}$.
\STATE Randomly partition the players into two sets, $N_1$ and $N_2$,
where each player is independently put in each set with probability $\frac{1}{2}$.
\STATE Set $N_3 = \emptyset$.
\FOR {players $i\in N_1$ lexicographically}
\STATE\label{step5} For each player $i'\in N_2$ and item $j\in M$,
if $\mathcal{D}'_{i'j}$ has not been defined yet and $\mathcal{D}^i_{i'j} \neq \bot$,
then set $\mathcal{D}'_{i'j} = \mathcal{D}^i_{i'j}$
and add player $i'$ to $N_3$.
\ENDFOR
\STATE \label{step12} For each $i\in N_3$ and $j\in M$
such that $\mathcal{D}'_{ij}$ is not defined,
set $\mathcal{D}'_{ij}\equiv 0$ (i.e., 0 w.p. 1) and $b_{ij}=0$.
\STATE Run mechanism $\mathcal{M}_{UD}$ on the unit-demand Bayesian auction $(N_3, M, (\mathcal{D}'_{ij})_{i\in N_3, j\in M})$,
with the players' values being $(b_{ij})_{i\in N_3, j\in M}$.
Let $x' = (x'_{ij})_{i\in N_3,\, j\in M}$ be the resulting allocation where $x'_{ij}\in \{0, 1\}$, and let $p' = (p'_i)_{i\in N_3}$ be the prices.
Without loss of generality,
$x'_{ij} = 0$ if $\mathcal{D}'_{ij}\equiv 0$.\label{step13_1}
\STATE For each player $i\not\in N_3$, $i$ gets no item and his price is $p_i =0$.
\STATE For each player $i\in N_3$, $i$ gets item $j$ if $x'_{ij}=1$, and his price is $p_i = p'_i$.
\end{algorithmic}
\end{algorithm}
\begin{theorem}\label{thm:unit}
Mechanism $\mathcal{M}_{CSUD}$ for unit-demand auctions is 2-DST and,
for any instances $\hat{\mathcal{I}} = (N, M, \mathcal{D})$ and
$\mathcal{I} = (N, M, \mathcal{D}, G)$,
$\mathbb{E}_{v\sim \mathcal{D}} Rev(\mathcal{M}_{CSUD}(\mathcal{I})) \geq \frac{OPT_K(\mathcal{I})}{96}$.
\end{theorem}
\vspace{-5pt}
\vspace{-5pt}
\begin{lemma}\label{ud:truthful}
Mechanism ${\cal M}_{CSUD}$ is 2-DST.
\end{lemma}
\vspace{-10pt}
\begin{proof}[Proof sketch]
The key here is that
the use of the players' values and the use of their knowledge are disentangled:
for players in $N_1$, the mechanism only uses their knowledge but not their values; and the opposite holds for players in $N_2$.
If a player $i$ ends up in $N_2$, then whether he is in $N_3$ or not does not depend on his own strategy.
As mechanism $\mathcal{M}_{UD}$ is DST and player $i$ is assigned to $N_2$ with positive probability,
it is dominant for him to report his true values in ${\cal M}_{CSUD}$, no matter what the reported knowledge profile $(K_1, \dots, K_n)$ is.
Moreover, if a player $i$ ends up in $N_1$, then he is guaranteed to get no item and pay 0, thus reporting his true knowledge never hurts him.
\end{proof}
\paragraph{Remark.} Given how we have disentangled the usage of a player's value and the usage of his knowledge,
and because we are not rewarding a player for reporting his knowledge,
a player's knowledge does not affect his own utility in the current mechanism.
So reporting his true knowledge neither hurts nor benefits a player.
In the appendix, we use scoring rules to reward a player $i$'s knowledge.
Given that the other players report their true values, their reported values
are distributed exactly according to the true distributions.
Thus player $i$'s utility will be strictly larger
when he reports his true knowledge than when he lies.
\medskip
To analyze the revenue of $\mathcal{M}_{CSUD}$, note that it runs the
Bayesian mechanism on a smaller Bayesian instance: $\hat{\mathcal{I}}$ projected to
the set of player-item pairs $(i, j)$ such that $i\in N_3$ and $\mathcal{D}_{ij}$ has been reported.
To understand how much revenue is lost by the projection,
we consider
the COPIES instance~\cite{chawla2010multi}, $\hat{\mathcal{I}}^{CP} = (N^{CP}, M^{CP}, \mathcal{D}^{CP})$, which was used to analyze
Bayesian mechanisms.
$\hat{\mathcal{I}}^{CP}$ is obtained from $\hat{\mathcal{I}}$ by replacing each player with $m$ copies and each item with $n$ copies,
where a player $i$'s copy $j$ only wants item $j$'s copy $i$, with the value distributed according to $\mathcal{D}_{ij}$.
Thus $\hat{\mathcal{I}}^{CP}$ is a single-parameter auction,
with $N^{CP} = N\times M$, $M^{CP}=M\times N$, and $\mathcal{D}^{CP} = \times_{(i, j)\in N^{CP}} \mathcal{D}_{ij}$.
We now lower-bound the optimal Bayesian revenue
of the {\em projected COPIES} instance.
More specifically, for any subset $NM\subseteq N\times M$,
let $\hat{\mathcal{I}}^{CP}_{NM}$ be $\hat{\mathcal{I}}^{CP}$ projected to $NM$,
and let $OPT(\hat{\mathcal{I}}^{CP})_{NM}$ be the revenue of the optimal Bayesian mechanism for $\hat{\mathcal{I}}^{CP}$ obtained from players in $NM$.
\begin{lemma} [{\em The projection lemma}]\label{lem:proj}
For any $\hat{\mathcal{I}}$ and $NM\subseteq N\times M$,
$OPT(\hat{\mathcal{I}}^{CP}_{NM}) \geq OPT(\hat{\mathcal{I}}^{CP})_{NM}$.
\end{lemma}
We elaborate the related definitions and prove Lemma \ref{lem:proj} in Appendix \ref{app:unknown:unit}.
For mechanism $\mathcal{M}_{CSUD}$, the subset $NM$ needed in the projection lemma is exactly the set of player-item pairs $(i,j)$ such that $i\in N_3$ and $\mathcal{D}_{ij}$
is reported.
Theorem~\ref{thm:unit} holds by combining
the projection lemma,
the random partitioning in the mechanism, and
existing results on the COPIES setting in Bayesian auctions
\cite{kleinberg2012matroid, cai2016duality}.
Note that Lemma \ref{lem:proj}
is only concerned with COPIES instances. Using this lemma, and similarly to our proof of Theorem \ref{thm:unit},
any Bayesian mechanism
$\mathcal{M}$ whose revenue can be properly lower-bounded by the COPIES instance
can be converted to a crowdsourced
Bayesian mechanism in a black-box way.
It is interesting that the COPIES setting serves as a bridge between
Bayesian and crowdsourced Bayesian mechanisms.
More precisely, we have the following theorem, with the proof omitted.
\begin{theorem}\label{col:copies}
Let $\mathcal{M}$ be any DST Bayesian mechanism such that
$Rev({\cal M}(\hat{\cal I}))
\geq \alpha OPT(\hat{\cal I}^{CP})$
for some $\alpha>0$.
There exists a 2-DST crowdsourced Bayesian mechanism
that uses $\mathcal{M}$ as a black-box and is a $\frac{\alpha}{16}$-approximation to $OPT_K$.
\end{theorem}
By Theorem \ref{col:copies},
the mechanisms in \cite{chawla2007algorithmic} and \cite{chawla2010multi} automatically imply corresponding crowdsourced Bayesian mechanisms.
Finally, for single-good auctions, by replacing mechanism $\mathcal{M}_{UD}$ with Myerson's mechanism, the crowdsourced Bayesian mechanism is a 4-approximation to $OPT_K$.
\subsection{Additive Auctions}
\label{sec:partial:additive}
Crowdsourced Bayesian mechanisms for additive auctions are harder to construct and analyze than for unit-demand auctions.
For example, randomly partitioning the players as before
may cause a significant revenue loss,
because the revenue of additive auctions may come from selling a subset of items as a bundle
to a player $i$.
Even if player $i$'s value distribution for each item is reported with constant probability,
the probability that his value distributions for all items in the bundle are reported can be very low,
thus the mechanism can rarely sell the bundle to $i$ at the optimal price.
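To see how quickly bundle pricing breaks down under random partitioning, consider a rough back-of-the-envelope calculation. The sketch below treats the per-item reports as independent for illustration (in the actual analysis they are correlated), and the function name is ours:

```python
# Illustrative numbers only: suppose each of player i's per-item value
# distributions is reported with probability tau (tau_1 = 1/4 under the
# random-partition approach), independently across items, and the optimal
# revenue comes from selling a bundle of m items to player i.
def bundle_report_probability(tau, m):
    """Probability that ALL m distributions needed to price the bundle are reported."""
    return tau ** m

p = bundle_report_probability(0.25, 10)
print(p)  # 0.25**10 ~ 9.5e-07: the bundle can almost never be priced optimally
```

Even for a modest bundle of 10 items, the mechanism would almost never have enough information to charge the optimal bundle price.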
Also, the seller can no longer ``throw away'' player-item pairs whose distributions are not reported and work on the projected instance $\mathcal{I}'$ ---recall that $\mathcal{I}' = (N, M, \mathcal{D}')$ where $\mathcal{D}'$ is $\mathcal{D}$ projected on~$G$.
Indeed, when the players are not partitioned into reporters and potential buyers, doing so
would cause a player to
withhold his knowledge about others, so that they are thrown away and no longer compete with him.
To simultaneously achieve truthfulness and a good revenue guarantee,
our mechanism is very stingy and never throws away any information.
If a player $i$'s value distribution for an item $j$ is reported by others,
then $j$ may be sold to $i$ via
the {\em $\beta$-Bundling} mechanism in \cite{yao2015n}, denoted by $Bund$;
while if $i$'s distribution for $j$ is not reported, then $j$ may still be sold to $i$ via the second-price mechanism.
Indeed, our mechanism treats the players neither solely based on the original Bayesian instance $\hat{\mathcal{I}}$
nor solely based on the projected instance $\mathcal{I}'$;
rather, it works on a {\em hybrid} of the two.
Our mechanism $\mathcal{M}_{CSA}$ is still simple, as defined in Mechanism \ref{alg:newbvcg}.
However, significant effort is needed in order to analyze its revenue.
Although the Bayesian mechanism $Bund$ is a constant approximation to the optimal Bayesian revenue,
some items that are sold by $Bund$ under $\mathcal{I}'$ may end up being sold by $\mathcal{M}_{CSA}$ using second-price,
and
the revenue of $\mathcal{M}_{CSA}$ cannot be lower-bounded by that of $Bund$ under $\mathcal{I}'$.
To overcome this difficulty, we develop a novel way to use the {\em adjusted revenue}~\cite{yao2015n}
in our analysis; see Lemmas \ref{lem:csa:A} and \ref{lem:csa:C} in Appendix~\ref{app:unknown:add}, where we also recall the related definitions.
As we show, the adjusted revenue in properly
chosen information settings, combined with the revenue of the second-price sale, eventually provides a lower bound on the revenue of $\mathcal{M}_{CSA}$.
More precisely, we have the following theorem, proved in Appendix~\ref{app:unknown:add}.
\begin{algorithm}
\floatname{algorithm}{Mechanism}
\caption{\hspace{-4pt} $\mathcal{M}_{CSA}$}
\label{alg:newbvcg}
\begin{algorithmic}[1]
\STATE Each player $i$ reports a valuation $b_i = (b_{ij})_{j\in M}$ and a knowledge $K_i = (\mathcal{D}^i_{i'j})_{i'\neq i, j\in M}$.
\STATE For each item $j$, set $i^*(j)= \argmax_i b_{ij}$ (ties broken lexicographically) and $p_j = \max_{i\neq i^*(j)} b_{ij}$.
\FOR{each player $i$}
\STATE Let $M_i = \{j \ | \ i^*(j) = i\}$ be player $i$'s {\em winning set.}
\STATE Partition $M$ into $M^1_{i}$ and $M^2_{i}$ as follows:
$\forall j \in M^1_{i}$,
some $i'$ has reported $\mathcal{D}^{i'}_{ij}\neq \bot$
(if there are more than one reporters, take the lexicographically first);
and $\forall j \in M^2_{i}$, $\mathcal{D}^{i'}_{ij}=\bot$ for all $i'$.
\STATE $\forall j\in M^1_{i}$, set $\mathcal{D}'_{ij} = \mathcal{D}^{i'}_{ij}$;
and $\forall j\in M^2_{i}$, set $\mathcal{D}'_{ij}\equiv 0$.
\STATE Compute the optimal entry fee $e_i$ and reserve prices
$(p'_{j})_{j\in M^1_{i}}$ according to mechanism $Bund$ with respect to $(\mathcal{D}'_{i}, \beta_{i})$,
where $\beta_{ij} = \max_{i'\neq i} b_{i'j}$ $\forall j\in M$.
By the definition of $Bund$, we always have $p'_j \geq \beta_{ij}$ for each $j$.
If $e_i=0$ then it is possible that $p'_j >\beta_{ij}$ for some $j$;
while if $e_i>0$ then $p'_{j} = \beta_{ij}$ for every $j$. \label{step:M'csbvcg8}
\STATE Sell $M_i^1\cap M_i$ to player $i$ according to $Bund$.
That is, if $e_i>0$ then do the following:
if $\sum_{j\in M^1_{i}\cap M_i} b_{ij}\geq e_i
+ \sum_{j\in M^1_{i}\cap M_i} p'_j$,
player $i$ gets $M^1_{i}\cap M_i$ with price
$e_i + \sum_{j\in M^1_{i}\cap M_i} p'_j$;
otherwise the items in $M^1_{i}\cap M_i$ are not sold.
If $e_i=0$ then do the following: for each item $j\in M^1_{i}\cap M_i$,
if $b_{ij}\geq p'_j$, player $i$ gets item $j$ with price $p'_j$;
otherwise item $j$ is not sold.\label{step:M'csbvcg9}
\STATE In addition, sell each item $j$ in $M^2_{i}\cap M_i$ to player $i$
with price $p_j (=\beta_{ij})$. \label{step:M'csbvcg10}
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{theorem}\label{thm:newbvcg}
\sloppy
Mechanism $\mathcal{M}_{CSA}$ for additive auctions is 2-DST and,
for any instances $\hat{\mathcal{I}} = (N, M, \mathcal{D})$ and
$\mathcal{I} = (N, M, \mathcal{D}, G)$,
$\mathbb{E}_{v\sim \mathcal{D}} Rev(\mathcal{M}_{CSA}(\mathcal{I})) \geq \frac{OPT_K(\mathcal{I})}{70}$.
\end{theorem}
Note that in mechanism $\mathcal{M}_{CSA}$, when computing the entry fee and the reserve prices for player~$i$ according to mechanism $Bund$,
the value distributions $\mathcal{D}'_i$ are from the projected Bayesian instance~$\mathcal{I}'$,
while the threshold prices $\beta_i$ are defined by the players' values from the original Bayesian instance $\hat{\mathcal{I}}$.
Moreover, player $i$'s winning set $M_i$ is defined by his values from $\hat{\mathcal{I}}$,
while only $i$'s values from the reported distributions are used in the bundling sale.
Indeed, the mechanism has carefully mixed up the original Bayesian instance and the projected instance, in order to
achieve truthfulness and good revenue at the same time.
Finally, we believe that by following the framework of \cite{cai2016duality}, one may be able to do a better analysis
for our mechanism $\mathcal{M}_{CSA}$ and prove a better approximation ratio for it.
However, the analysis is far from being a black-box application of existing results
and requires the ``hybridization'' of the counterpart of the adjusted revenue there.
\vspace{-5pt}
\section{Crowdsourcing When Everything Is Known by Somebody}
\label{sec:partial}
When the knowledge graph vector $G$ is $k$-bounded with $k\geq 1$, ``everything is known by somebody'' and $OPT_K = OPT$.
Both mechanisms in Section \ref{sec:k=0} of course apply here, but we can do better when $k$ gets larger: that is, when the amount of knowledge in the system increases.
More specifically, for any $k\in [n-1]$, let $\tau_k=\frac{k}{(k+1)^{\frac{k+1}{k}}}$.
Note that $\tau_k$ is increasing in $k$, with $\tau_1 = \frac{1}{4}$ and $\tau_k\rightarrow 1$ as $k\rightarrow\infty$.
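These properties of $\tau_k$ are easy to check numerically; a short sketch (the function name is ours):

```python
def tau(k):
    # tau_k = k / (k+1)^((k+1)/k), as defined in the text
    return k / (k + 1) ** ((k + 1) / k)

print(tau(1))  # 0.25, i.e. tau_1 = 1/4
print([round(tau(k), 4) for k in (2, 5, 10, 100)])
# the sequence is increasing and approaches 1 as k grows
```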
\subsection{Unit-Demand Auctions}
\label{sec:partial:ud}
The case of unit-demand auctions is easy:
our mechanism $\mathcal{M}'_{CSUD}$ is almost the same as mechanism $\mathcal{M}_{CSUD}$,
except that it randomly partitions the players into $N_1$ and $N_2$
in a different way.
The probability that each player is assigned to $N_1$
is now $q = 1-(k+1)^{-\frac{1}{k}}$, and the probability to $N_{2}$ is $1-q$.
When $k=1$, we have $q=\frac{1}{2}$ and mechanism $\mathcal{M}'_{CSUD}$ is exactly $\mathcal{M}_{CSUD}$.
The probability~$q$ is chosen to maximize the probability that each distribution is reported, and this maximum
is exactly $\tau_k$.
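The choice of $q$ can be verified with a short computation. The sketch below uses a simplified model of the analysis (our own simplification, not the formal proof): player $i$ lands among the potential buyers with probability $1-q$, each of the (at least) $k$ players who know $\mathcal{D}_{ij}$ lands among the reporters independently with probability $q$, and $\mathcal{D}_{ij}$ is usable iff both events happen.

```python
def tau(k):
    return k / (k + 1) ** ((k + 1) / k)

def report_probability(q, k):
    # P(player i in N_2) * P(at least one of the k knowers in N_1)
    return (1 - q) * (1 - (1 - q) ** k)

for k in (1, 2, 5, 10):
    q = 1 - (k + 1) ** (-1 / k)
    # at the chosen q, the reporting probability equals tau_k ...
    assert abs(report_probability(q, k) - tau(k)) < 1e-12
    # ... and a grid search over q in [0, 1] finds nothing better
    best = max(report_probability(x / 1000, k) for x in range(1001))
    assert best <= report_probability(q, k) + 1e-9
print("q for k=1:", 1 - 2 ** (-1))  # 0.5, matching mechanism M_CSUD
```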
We have omitted the detailed description of the new mechanism and only state the theorem below,
which is proved in Appendix~\ref{app:known:unit}.
\begin{theorem}
\label{thm:unit-k}
$\forall k\in [n-1]$, any unit-demand auction instances $\hat{\mathcal{I}} = (N, M, \mathcal{D})$ and $\mathcal{I} = (N, M, \mathcal{D}, G)$ where $G$ is $k$-bounded,
mechanism $\mathcal{M}'_{CSUD}$ is 2-DST and
$\mathbb{E}_{v\sim \mathcal{D}} Rev(\mathcal{M}'_{CSUD}(\mathcal{I})) \geq \frac{\tau_k}{24} \cdot OPT(\hat{\mathcal{I}}).$
\end{theorem}
\paragraph*{Remark.}
As $k$ gets larger (although it can still be much smaller than $n$),
the approximation ratio of $\mathcal{M}'_{CSUD}$ approaches $24$,
the best known approximation to $OPT$ by DST Bayesian mechanisms~\cite{cai2016duality}.
Moreover, by Lemma 5 of \cite{chawla2010multi}, $\mathcal{M}'_{CSUD}$ is a $\frac{\tau_{k}}{6}$-approximation to the optimal deterministic DST Bayesian mechanism.
\subsection{Additive Auctions}
\label{sec:known:additive}
Additive auctions here
are again more difficult than unit-demand auctions,
but still easier than the case when the knowledge graphs can be totally arbitrary.
When $k\geq 1$, all distributions will be reported in our mechanism $\mathcal{M}_{CSA}$.
Thus no item is sold according to the second-price mechanism,
and $\mathcal{M}_{CSA}$'s outcome is the same as the $\beta$-Bundling mechanism of \cite{yao2015n} applied to the original Bayesian instance $\hat{\mathcal{I}}$.
To improve the approximation ratio when $k\geq 1$, following \cite{cai2016duality} we can divide
the $\beta$-Bundling mechanism into the ``bundling part'' and the ``individual sale part''.
The former is referred to as the {\em Bundle VCG} mechanism, denoted by $BVCG$;
and the latter is the {\em Individual 1-Lookahead} mechanism, denoted by $\mathcal{M}_{1LA}$, which sells each item separately using the 1-Lookahead mechanism of \cite{ronen2001approximating}.
Mechanism $\mathcal{M}_{1LA}$ can also be replaced by
the {\it Individual Myerson} mechanism, denoted by $IM$,
which sells each item separately using Myerson's mechanism.
By choosing the mechanism that generates a higher expected revenue between $IM$ and $BVCG$,
\cite{cai2016duality} provides a Bayesian mechanism that is an 8-approximation to $OPT$.
For crowdsourced Bayesian auctions,
we can easily ``crowdsource'' mechanism $BVCG$ following mechanism $\mathcal{M}_{CSA}$.
The resulting mechanism is denoted by $\mathcal{M}_{CSBVCG}$ and
defined in Appendix~\ref{app:known:additive} (Mechanism~\ref{alg:bvcg}).
Moreover, we can easily ``crowdsource'' mechanism $IM$, similar to mechanism $\mathcal{M}'_{CSUD}$.
The resulting mechanism is denoted by $\mathcal{M}_{CSIM}$ and also defined in the appendix.
Because the seller does not know the prior $\mathcal{D}$,
he cannot compute the expected revenue of the two
crowdsourced Bayesian mechanisms and choose the better one.
Instead, we let him choose between the two mechanisms randomly, according to a probability distribution depending on $k$.
However, we can do even better.
Indeed, although in Bayesian auctions the mechanism $IM$ is optimal for individual item-sale and outperforms mechanism $\mathcal{M}_{1LA}$,
in crowdsourced Bayesian auctions there is a tradeoff between the two.
In order for the players to report their knowledge truthfully for mechanism $IM$,
we need to randomly partition them into reporters and potential buyers,
thus each distribution is only recovered with probability $\tau_k$.
In contrast, no partition is needed for aggregating the players' knowledge in mechanism $\mathcal{M}_{1LA}$,
and we can recover all distributions simultaneously with probability 1.
The resulting crowdsourced mechanism, $\mathcal{M}_{CS1LA}$, is defined in the appendix.
As mechanism $\mathcal{M}_{1LA}$ is a 2-approximation to mechanism $IM$,
sometimes it is actually more advantageous to use $\mathcal{M}_{CS1LA}$ rather than $\mathcal{M}_{CSIM}$,
depending on the value of $k$.
Properly combining the above gadgets together,
our mechanism $\mathcal{M}'_{CSA}$ is defined as follows:
when $k\leq 7$, it runs $\mathcal{M}_{CSBVCG}$ with probability $\frac{2}{11}$ and
$\mathcal{M}_{CS1LA}$ with probability $\frac{9}{11}$;
when $k> 7$, it runs $\mathcal{M}_{CSBVCG}$ with probability $\frac{\tau_k}{3+\tau_k}$ and
$\mathcal{M}_{CSIM}$ with probability $\frac{3}{3+\tau_k}$.
The choice of the two cases is to achieve the best approximation ratio for each $k$.
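The crossover at $k=7$ can be checked numerically: the $\tau_k/(6+2\tau_k)$ branch overtakes the $\frac{1}{11}$ branch exactly between $k=7$ and $k=8$.

```python
def tau(k):
    return k / (k + 1) ** ((k + 1) / k)

# Approximation ratio of M'_CSA: max{1/11, tau_k/(6+2*tau_k)}.
print(tau(7) / (6 + 2 * tau(7)))   # ~0.0890  <  1/11 ~ 0.0909
print(tau(8) / (6 + 2 * tau(8)))   # ~0.0919  >  1/11
```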
We have the following theorem, proved in Appendix~\ref{app:known:additive}.
\begin{theorem}
\label{thm:additive}
\sloppy
$\forall k \in [n-1]$, any additive auction instances $\hat{\mathcal{I}} = (N, M, \mathcal{D})$ and $\mathcal{I} = (N, M, \mathcal{D}, G)$ where $G$ is $k$-bounded, $\mathcal{M}'_{CSA}$ is 2-DST and
$\mathop{\mathbb{E}}_{v\sim \mathcal{D}}
Rev(\mathcal{M}'_{CSA}(\mathcal{I})) \geq \max\{\frac{1}{11}, \frac{\tau_k}{6+2\tau_k}\} OPT(\hat{\mathcal{I}})$.
\end{theorem}
\subsection{Single-Good Auctions}
\label{sec:warm:myerson}
As we have seen, the amount of revenue our mechanisms generate increases with $k$, the amount of knowledge in the system.
If the knowledge graph is only $k$-bounded for some small $k$, but reflects certain combinatorial structures,
good revenue may also be generated by leveraging such structures.
In this subsection
we consider single-good auctions, so a player's value $v_i$ is a single number rather than a vector.
Following Lemma \ref{lem:add:IM} in Appendix \ref{app:known:additive},
for any $k\geq 1$, when there is a single item and the knowledge graph is $k$-bounded, mechanism $\mathcal{M}_{CSIM}$ is a $\tau_k$-approximation to
the optimal Bayesian mechanism of Myerson \cite{myerson1981optimal}.
Below
we construct a crowdsourced Bayesian mechanism that is {\em nearly optimal}
under a natural structure of the knowledge graph.
More precisely, recall that a directed graph is {\em strongly connected} if there is a directed path from any node~$i$ to any other node~$i'$.
Intuitively, in a knowledge graph this means that for any two players Alice and Bob, Alice knows a guy who knows a guy ... who knows Bob.
Also recall that a directed graph is {\em 2-connected} if it remains strongly connected after removing any single node and the adjacent edges.
In a knowledge graph, this means there does not exist a crucial player as an ``information hub'', without whom the players will split into two parts, with one part having no information about the other.
It is easy to see that strong connectedness and 2-connectedness respectively imply 1-boundedness and 2-boundedness, but not vice versa. In fact, a graph of $n$ nodes can be $(\lfloor\frac{n}{2}\rfloor-1)$-bounded without being connected.
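A hypothetical instance of the last claim (for even $n$; the construction and function names are ours): split the $n$ nodes into two halves and make each half a complete bidirectional digraph. Every node then has $\frac{n}{2}-1$ in-neighbors, so the graph is $(\frac{n}{2}-1)$-bounded, yet it is disconnected.

```python
from collections import deque

def two_cliques(n):
    # two disjoint complete bidirectional digraphs on n/2 nodes each
    half = n // 2
    edges = set()
    for part in (range(half), range(half, 2 * half)):
        for i in part:
            for j in part:
                if i != j:
                    edges.add((i, j))
    return 2 * half, edges

def in_degree(edges, v):
    return sum(1 for (i, j) in edges if j == v)

def is_connected(n, edges):
    # undirected reachability check via BFS from node 0
    adj = {v: set() for v in range(n)}
    for i, j in edges:
        adj[i].add(j); adj[j].add(i)
    seen, queue = {0}, deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v] - seen:
            seen.add(w); queue.append(w)
    return len(seen) == n

n, edges = two_cliques(10)
print(min(in_degree(edges, v) for v in range(n)))  # 4 = n/2 - 1
print(is_connected(n, edges))                      # False
```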
When the knowledge graph is 2-connected, we construct the {\em Crowdsourced Myerson} mechanism $\mathcal{M}_{CSM}$ in Mechanism~\ref{alg:myerson}.
Recall that Myerson's mechanism maps
each player $i$'s reported value~$b_i$ to the {\em (ironed) virtual value}, $\phi_i(b_i; \mathcal{D}_i)$.
It runs the second-price mechanism with reserve price~0 on virtual values and maps the resulting ``virtual price''
back to the winner's value space, as his price.
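As a minimal numerical illustration (not part of the mechanism's formal description): for a regular distribution the virtual value is $\phi(v) = v - \frac{1-F(v)}{f(v)}$, so for $v\sim U[0,1]$ we get $\phi(v) = 2v-1$ and $\phi^{-1}(y) = \frac{y+1}{2}$.

```python
# Virtual value and its inverse for the Uniform[0, 1] distribution
def phi_uniform(v):
    return 2 * v - 1          # phi(v) = v - (1 - F(v)) / f(v) = 2v - 1

def phi_uniform_inv(y):
    return (y + 1) / 2

# Two bidders with values 0.9 and 0.7: bidder 1 wins and pays
# phi^{-1}(max(phi(0.7), 0)) = 0.7, since both virtual values exceed 0.
print(phi_uniform_inv(max(phi_uniform(0.7), 0)))  # 0.7
# A lone bidder with value 0.4 would not win: phi(0.4) < 0.
print(phi_uniform(0.4) < 0)  # True
```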
\begin{algorithm}[htbp]
\floatname{algorithm}{Mechanism}
\caption{\hspace{-4pt} $\mathcal{M}_{CSM}$}
\label{alg:myerson}
\begin{algorithmic}[1]
\STATE Each player $i$ reports a value $b_i$ and a knowledge $K_i = (\mathcal{D}^i_j)_{j\in N\setminus\{i\}}$.
\STATE Randomly choose a player $a$, let $S = \{j \ | \ \mathcal{D}^a_j\neq \bot\}$, $N' = N\setminus(\{a\}\cup S)$, and
$\mathcal{D}'_j = \mathcal{D}^a_j \ \forall j\in S$. \label{csm:2}
\STATE If $S = \emptyset$, the item is unsold, the mechanism sets price $p_i=0$ for each $ i\in N$ and stop here. \label{step3}
\STATE Set $i^* = \argmax_{j\in S} \phi_j(b_j; \mathcal{D}'_j)$, with ties broken lexicographically.
\WHILE{$N'\neq \emptyset$}
\STATE Set $S' = \{j \ | \ j\in N', \ \exists i'\in S\setminus\{i^*\} \mbox{ s.t. } \mathcal{D}^{i'}_j\neq \bot\}$. \label{step6}
\STATE If $S'=\emptyset$ then go to Step \ref{step11}. \label{step7}
\STATE For each $j\in S'$, set $\mathcal{D}'_j = \mathcal{D}^{i'}_j$,
where $i'$ is the first player in $S\setminus\{i^*\}$ with $\mathcal{D}^{i'}_j\neq \bot$.
\STATE Set $S = \{i^*\}\cup S'$ and $N' = N'\setminus S'$.
\STATE Set $i^* = \argmax_{j\in S} \phi_j(b_j; \mathcal{D}'_j)$,
with ties broken lexicographically. \label{step10}
\ENDWHILE
\STATE Set $\phi_{second} = \max_{j\in N \setminus (\{a, i^*\}\cup N')} \phi_j(b_j; \mathcal{D}'_j)$
and the price $p_i = 0$ for each player $i$. \label{step11}
\STATE If $\phi_{i^*}(b_{i^*}; \mathcal{D}'_{i^*}) < 0$ then the item is unsold;
otherwise, the item is sold to player $i^*$ and
$p_{i^*} = \phi^{-1}_{i^*}(\max\{\phi_{second}, 0\}; \mathcal{D}'_{i^*})$.\label{step13a}
\end{algorithmic}
\end{algorithm}
To help understand our mechanism, we illustrate in Figure~\ref{fig:CSM} of Appendix \ref{app:proofwarm} the sets of players involved in the first round.
We have the following theorem, proved in
the appendix.
\begin{theorem}
\label{thm:myerson}
For any single-good auction instances $\hat{\mathcal{I}} = (N, M, \mathcal{D})$ and $\mathcal{I} = (N, M, \mathcal{D}, G)$ where $G$ is 2-connected,
$\mathcal{M}_{CSM}$ is 2-DST and
$\mathbb{E}_{v\sim \mathcal{D}} Rev(\mathcal{M}_{CSM}(\mathcal{I})) \geq (1-\frac{1}{n})OPT(\hat{{\mathcal{I}}})$.
\end{theorem}
\vspace{-10pt}
\begin{proof}[Proof ideas]
The mechanism again disentangles the use of the players' values and
the use of their knowledge.
Indeed,
when computing a player's virtual value in Step \ref{step10},
his knowledge has not been used yet.
If he is player $i^*$ then his knowledge will not be used in the next round either.
Only when a player is removed from $S$ ---that is, when it is guaranteed that he will not get the item---
will his knowledge be used. This is why it never hurts a player to report his true knowledge.
Now consider the revenue when the players report their true values and true knowledge.
Note that $|S|\geq 2$ in Step \ref{csm:2} due to 2-connectedness, so the mechanism does not stop in Step~\ref{step3}.
In the iterative steps, because player $i^*$ is excluded from the set of reporters,
we need that there is still a reporter who knows a distribution for players in~$N'$:
that is, there is an edge from $(N\setminus N')\setminus\{i^*\}$ to~$N'$, and player $i^*$ is not an ``information hub'' between $N\setminus N'$ and $N'$. This is again guaranteed by 2-connectedness (note that strong connectedness alone is not enough).
Accordingly, $\mathcal{M}_{CSM}$ does not stop until $N'=\emptyset$ and all players' distributions have been reported (excluding, perhaps, that of player $a$).
Therefore $\mathcal{M}_{CSM}$ manages to run Myerson's mechanism after randomly excluding a player $a$, and the revenue guarantee follows.
\end{proof}
\vspace{-15pt}
\paragraph*{Remark.}
If the seller knows at least two distributions, the mechanism can use him as the starting point and the revenue will be exactly $OPT$.
Since no crowdsourced mechanism can be a $(\frac{1}{2}+\delta)$-approximation for any constant $\delta>0$
when $n=2$ \cite{azar2012crowdsourced}, our result is tight.
Interestingly,
after obtaining our result, we found that 2-connected graphs
have been explored several times in the game theory literature \cite{bach2014pairwise, renault1998repeated}, for totally different problems.
\medskip
For additive auctions, when the knowledge graphs are 2-connected, instead of using mechanism
$\mathcal{M}_{CS1LA}$ or $\mathcal{M}_{CSIM}$, one can use $\mathcal{M}_{CSM}$
for each item $j$.
We thus have the following corollary, where the mechanism $\mathcal{M}''_{CSA}$ runs $\mathcal{M}_{CSM}$ with probability $\frac{3}{4}$ and $\mathcal{M}_{CSBVCG}$ with probability~$\frac{1}{4}$.
\begin{corollary}\label{col:additive}
For any additive auction instances $\hat{\mathcal{I}} = (N, M, \mathcal{D})$ and $\mathcal{I} = (N, M, \mathcal{D}, G)$ where each $G_j$ is 2-connected,
mechanism $\mathcal{M}''_{CSA}$ is 2-DST and
$\mathbb{E}_{v\sim \mathcal{D}} Rev(\mathcal{M}''_{CSA}({\mathcal{I}}))
\geq \frac{1}{8}(1-\frac{1}{n})
OPT(\hat{{\mathcal{I}}})$.
\end{corollary}
It would be very interesting to see if other combinatorial structures of knowledge graphs
can be leveraged in crowdsourced Bayesian mechanisms and facilitate the aggregation of the players' knowledge.
\section*{Acknowledgements}
The first author thanks Matt Weinberg for reading a draft of this paper and for helpful discussions. The authors thank Constantinos Daskalakis, J\'{a}nos Flesch, Hu Fu, Pinyan Lu, Silvio Micali, Rafael Pass, Andr\'{e}s Perea, Elias Tsakas, several anonymous reviewers, and the participants of seminars at Stony Brook University, Shanghai Jiaotong University, Shanghai University of Finance and Economics, Maastricht University, MIT, and IBM Thomas J. Watson Research Center for helpful comments.
This work is partially supported by NSF CAREER Award No. 1553385.
\newpage
\end{document}
\section{Optimum Beam Synthesis}
In the following we develop a strategy to synthesize arbitrary beams based on the formulation of an optimization problem.
Furthermore, we show how different constraints can be used to model the restrictions of different systems.
\subsection{Objective function}
The array factor $A(\boldsymbol{u}, \boldsymbol{a})$ of an antenna array is defined as
\begin{equation}
A(\boldsymbol{u}, \boldsymbol{a}) = \boldsymbol{a}^T \boldsymbol{p}(\boldsymbol{u})~,~\left[\boldsymbol{p}(\boldsymbol{u})\right]_n = e^{j\frac{2\pi}{\lambda}x_n(\boldsymbol{u})},
\end{equation}
where $\boldsymbol{a}$ is the beamforming vector, $\boldsymbol{u}$ is the spatial direction combining the azimuth and elevation angle.
The scalar $x_n(\boldsymbol{u})$
is the distance from the location of antenna element $n$ to the plane defined by the normal vector $\boldsymbol{u}$ and a reference point. A common choice for the reference point is the position of the first
antenna, in this case $x_1(\boldsymbol{u}) = 0$.
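For a uniform linear array with element spacing $d$ and $u = \sin\theta$, we have $x_n(\boldsymbol{u}) = n\,d\,u$, and the array factor can be evaluated directly. A minimal stdlib sketch (array geometry and function names are our own choices):

```python
import cmath, math

def array_factor(a, positions, u, lam=1.0):
    # A(u, a) = a^T p(u), with [p(u)]_n = exp(j * 2*pi/lam * x_n(u));
    # here x_n(u) = position_n * u, with the first antenna as reference (x_1 = 0).
    return sum(an * cmath.exp(1j * 2 * math.pi / lam * pos * u)
               for an, pos in zip(a, positions))

M, d = 8, 0.5                        # 8 elements, half-wavelength spacing
positions = [n * d for n in range(M)]
a = [1.0] * M                        # uniform excitation
print(abs(array_factor(a, positions, u=0.0)))  # broadside: |A| = M = 8.0
```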
The objective of synthesizing an arbitrary beam pattern can be formulated as minimizing a weighted $L^p$ norm
of the difference between the desired pattern $D(\boldsymbol{u})$ and the absolute value of the actual array factor $\vert A(\boldsymbol{u}, \boldsymbol{a})\vert$
\begin{equation}
f(\boldsymbol{a}) = \left( \int W^p(\boldsymbol{u}) \left\vert\left\vert A(\boldsymbol{u}, \boldsymbol{a})\right\vert - D(\boldsymbol{u})\right\vert^p d\boldsymbol{u}\right)^{\frac{1}{p}},
\end{equation}
where $W(\boldsymbol{u})$ is the weighting.
This objective function itself is convex over its domain, but the constraints on $\boldsymbol{a}$ shown in the following subsections lead to a non-convex optimization problem.
This problem formulation ignores the phase of the array factor, since we require only the magnitude to be of a specific shape.
By optimizing only over the array factor we do not take the pattern of the antenna elements into account.
As described in \cite{Scholnik2016}, to account for an antenna pattern it is only necessary
to divide $D(\boldsymbol{u})$ and $W(\boldsymbol{u})$
by the pattern of the antenna elements.
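In practice the integral in $f(\boldsymbol{a})$ is evaluated on a discrete grid of directions. A sketch of the discretized objective (a plain Riemann sum; our own simplification):

```python
def objective(A_abs, D, W, p, du):
    # Discretized weighted L^p objective: the integral over u is approximated
    # by a sum over grid points with spacing du.
    total = sum((w ** p) * abs(a - d) ** p for w, a, d in zip(W, A_abs, D))
    return (total * du) ** (1 / p)

# If the synthesized magnitude matches the desired pattern exactly,
# the objective is zero regardless of the weighting.
D = [1.0, 1.0, 0.0, 0.0]
print(objective(D, D, W=[1.0] * 4, p=2, du=0.25))  # 0.0
```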
\subsection{Constraints}
We consider two different hybrid beamforming designs. These are the systems currently considered in literature \cite{Palacios2016, HBFMAG}.
In the first case, all $M$ antennas are divided into groups of size $M_C$. Each subgroup consists of one \ac*{RF} chain, an $M_C$ signal
splitter followed by a phase shifter and a \ac*{PA} at each antenna (see Figure \ref{fig:SystemModel} (a)). In total there are $M_{\text{RFE}}$ \ac*{RF} chains. This restricts the beamforming vector $\boldsymbol{a}$ to have the form
\begin{equation}
\boldsymbol{a} = \boldsymbol{W}^s \boldsymbol{\alpha}^s =
\begin{bmatrix}
\boldsymbol{w}^s_1 & \boldsymbol{0} & \cdots & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{w}^s_2 & \ddots & \boldsymbol{0} \\
\vdots & \vdots & \ddots &\vdots \\
\boldsymbol{0} & \cdots & \boldsymbol{0} & \boldsymbol{w}^s_{M_{\text{RFE}}} \\
\end{bmatrix}
\begin{bmatrix}
\alpha^s_1 \\ \alpha^s_2 \\ \vdots \\ \alpha^s_{M_{\text{RFE}}}
\end{bmatrix},
\end{equation}
where $\boldsymbol{\alpha}^s \in \mathbb{R}^{M_{\text{RFE}} \times 1}$ and the vector $\boldsymbol{w}^s_i$ models the analog phase shifting of group $i$ and therefore has the form
\begin{equation}
\boldsymbol{w}^s_i =
\begin{bmatrix}
e^{j\theta^s_{1,i}} & e^{j\theta^s_{2,i}} & \cdots & e^{j\theta^s_{{M_C},i}}
\end{bmatrix}^T.
\end{equation}
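The block-diagonal structure of $\boldsymbol{W}^s$ can be built up explicitly; a small sketch (pure Python, function name ours):

```python
import cmath

def subarray_matrix(thetas):
    # thetas[i] holds the M_C phase shifts of sub-array i; W^s is block-diagonal
    # with w^s_i = [e^{j theta}, ...] as the i-th diagonal block.
    n_rfe = len(thetas)
    m_c = len(thetas[0])
    W = [[0j] * n_rfe for _ in range(n_rfe * m_c)]
    for i, block in enumerate(thetas):
        for r, th in enumerate(block):
            W[i * m_c + r][i] = cmath.exp(1j * th)
    return W

W = subarray_matrix([[0.0, 0.1], [0.2, 0.3]])   # M_RFE = 2 sub-arrays of M_C = 2
# off-block entries stay zero, so each RF chain only drives its own sub-array
print(W[0][1], W[2][0])  # 0j 0j
```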
In the second case, each of the \ac*{RF} chains is connected to an $M$ signal splitter followed by a phase shifter for each antenna
(see Figure \ref{fig:SystemModel} (b)). At each antenna, the phase shifted signal from each \ac*{RF} chain is combined and
then amplified by a \ac*{PA} followed by the antenna transmission.
With this system architecture the beamforming vector $\boldsymbol{a}$ can be decomposed into
\begin{equation}
\begin{gathered}
\boldsymbol{a} = \boldsymbol{W}^f \boldsymbol{\alpha}^f =
\begin{bmatrix}
\boldsymbol{w}^f_1 & \boldsymbol{w}^f_2 & \cdots & \boldsymbol{w}^f_{M_{\text{RFE}}}
\end{bmatrix}
\boldsymbol{\alpha}^f
\\ =
\begin{bmatrix}
e^{j\theta^f_{1, 1}} & e^{j\theta^f_{1, 2}} & \cdots & e^{j\theta^f_{1, M_{\text{RFE}}}} \\
e^{j\theta^f_{2, 1}} & e^{j\theta^f_{2, 2}} & \cdots & e^{j\theta^f_{2, M_{\text{RFE}}}} \\
\vdots & \vdots & \ddots &\vdots \\
e^{j\theta^f_{M, 1}} & e^{j\theta^f_{M, 2}} & \cdots & e^{j\theta^f_{M, M_{\text{RFE}}}} \\
\end{bmatrix}
\begin{bmatrix}
\alpha^f_1 \\ \alpha^f_2 \\ \vdots \\ \alpha^f_{M_{\text{RFE}}}
\end{bmatrix}
\end{gathered},
\end{equation}
with $\boldsymbol{\alpha}^f \in \mathbb{R}^{M_{\text{RFE}} \times 1}$.
To limit the maximum output power of the \ac*{PA}s, we need to include the following constraints
\begin{equation}
\left\vert[\boldsymbol{a}]_m\right\vert \leq 1 ~ \forall m \in \{1, 2, \cdots, M\}.
\end{equation}
It is important to keep in mind that this restriction applies after the hybrid beamforming; therefore, it is a
nonlinear constraint restricting the output power of each \ac*{PA}.
Another way to bound the output power is a sum power constraint of the form
\begin{equation}
\vert \vert \boldsymbol{a} \vert\vert^2 \leq 1.
\end{equation}
It is also possible that the resolution of the phase shifters is limited. This means that the values of $\theta^s_{i,j}$ are from a
finite set of possibilities
\begin{equation}
\theta^s_{i,j} = -\pi + k_{i,j}\frac{2\pi}{K}~~\forall i,j~\text{and}~k_{i,j} \in \{0, 1, \cdots, K-1\},
\end{equation}
where $K$ is the number of possible phases.
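Quantizing a continuous phase onto this grid amounts to rounding to the nearest of the $K$ allowed values; a small sketch (function name ours):

```python
import math

def quantize_phase(theta, K):
    # Snap theta to the nearest grid point -pi + k*2*pi/K, k in {0, ..., K-1}.
    k = round((theta + math.pi) * K / (2 * math.pi)) % K
    return -math.pi + k * 2 * math.pi / K

# with K = 4 the available phases are {-pi, -pi/2, 0, pi/2}
print(quantize_phase(0.1, 4))   # 0.0
print(quantize_phase(1.5, 4))   # pi/2 ~ 1.5708
```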
A possible phase shift in the digital domain needs to be taken into account. In the case
without quantization, this phase shift is redundant with the analog phase shift.
Therefore, in addition to the scaling $\boldsymbol{\alpha}^f$ or $\boldsymbol{\alpha}^s$, we need to
take a phase shift $\boldsymbol{\xi}^f$ or $\boldsymbol{\xi}^s$ into account.
For the case of sub-array hybrid beamforming with limited resolution \ac*{RF} phase shifters the beamforming vector $\boldsymbol{a}$ takes the form
\begin{equation}
\boldsymbol{a} = \boldsymbol{W}^s\left(\boldsymbol{\alpha}^s \circ \boldsymbol{\xi}^s\right),
\end{equation}
where $\boldsymbol{\xi}^s$ are the digital phase shifts defined as
\begin{equation}
\boldsymbol{\xi}^s = [e^{j\xi_1^s}, e^{j\xi_2^s}, \cdots, e^{j\xi^s_{M_{\text{RFE}}}}]^T.
\end{equation}
The formulation for the fully-connected case also contains additional phase shifts in the digital baseband signals.
\subsection{Problem Formulation}
\begin{figure}
\centering
\includegraphics{./Beamforming/pics/tradeoff.eps}
\caption{Illustration of the trade-off associated with the beam pattern synthesis.}
\label{fig:tradeoff}
\end{figure}
Combining the objective function with the constraints associated with the hardware capabilities leads to the following optimization problem
\begin{equation}
\begin{array}{l}
\min f(\boldsymbol{a}) \\
\text{s.t.}~\boldsymbol{g}(\boldsymbol{a}) \leq \boldsymbol{0} ~,~\boldsymbol{h}(\boldsymbol{a}) = \boldsymbol{0},
\end{array}
\end{equation}
where $\boldsymbol{g}(\boldsymbol{a})$ and $\boldsymbol{h}(\boldsymbol{a})$ are the constraints modelling the desired hardware capabilities.
It is important to mention that beam synthesis is a procedure similar to digital filter design; we therefore use the terminology of digital filter design.
The weighting $W(\boldsymbol{u})$, the desired pattern $D(\boldsymbol{u})$ and the choice of $p$ in $f(\boldsymbol{a})$ determine
which point in the trade-off between gain, passband ripple and transition width is targeted, as shown in Fig. \ref{fig:tradeoff}.
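To illustrate how such a problem can be attacked numerically, the sketch below runs a simple coordinate descent over quantized analog phases for a phase-only array, minimizing the discretized weighted $L^2$ objective. This is our own toy solver, not the method used in the paper: a real design would also optimize the digital scalings $\boldsymbol{\alpha}$ and typically use a general-purpose nonlinear solver.

```python
import cmath, math

M, K, lam, d = 4, 8, 1.0, 0.5
grid = [u / 50 for u in range(-50, 51)]                  # u = sin(theta) grid
D = [1.0 if abs(u) < 0.3 else 0.0 for u in grid]         # desired sector beam
W = [1.0] * len(grid)

def factor(phases, u):
    # phase-only array factor of a uniform linear array
    return sum(cmath.exp(1j * (ph + 2 * math.pi / lam * n * d * u))
               for n, ph in enumerate(phases))

def objective(phases):
    return sum(w * (abs(factor(phases, u)) / M - dv) ** 2
               for w, u, dv in zip(W, grid, D))

phases = [0.0] * M                                       # 0 lies on the K=8 grid
start = objective(phases)
for _ in range(5):                                       # a few sweeps
    for n in range(M):
        candidates = [-math.pi + k * 2 * math.pi / K for k in range(K)]
        phases[n] = min(candidates,
                        key=lambda c: objective(phases[:n] + [c] + phases[n + 1:]))
# each update picks the best of K candidates (including the current value),
# so the objective never increases
print(objective(phases) <= start)  # True
```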
\section{Conclusion}
The developed approach can synthesize any beam pattern for hybrid-beamforming systems. The numerical examples showed that
a sufficient solution to the underlying optimization problem can be found with reasonable computational complexity. The numerical examples also demonstrated that
it is possible to adapt the approach to any type of constraint arising in the context of hybrid beamforming and wireless communication.
If we compare the beams synthesized with the method introduced in this work to the ones in \cite{Palacios2016}, we achieve a significantly smaller overlap of
7.66 \%, 6.54 \% and 6.63 \% compared to 34.4 \%, 8.20 \% and 14.4 \%. These beams are designed for a hierarchical beam search, thus the maximum side-lobe level is an especially
important criterion. Here our results of -10.3 dB, -10.1 dB and -12.7 dB are also significantly better than -2.16 dB, -4.04 dB and -8.79 dB.
\section*{Acknowledgment}
The research leading to these results received funding from the European Commission H2020 programme under grant agreement no 671650 (5G PPP mmMAGIC project).
\section{Introduction}
To satisfy the ever increasing data rate demand, the use of the available bandwidth in the
\ac*{mmWave} frequency range is considered to be an essential part of the next generation mobile broadband standard \cite{FIVEDISRUPTIVE}.
To attain a similar link budget, the effective antenna aperture of a \ac*{mmWave} system must be comparable to current
systems operating at a lower carrier frequency.
Since the antenna gain, and thus the directivity,
increases with the aperture, an antenna array is the only solution to achieve a high effective aperture while maintaining a $360^\circ$ coverage.
The antenna array combined with the large bandwidth is a big challenge for the hardware implementation as
the power consumption limits the design space. Analog or hybrid beamforming are considered to be
possible solutions to reduce the power consumption. These solutions are based on the concept of phased array antennas.
In this type of system the signals of multiple antennas are phase shifted, combined and afterwards converted into the analog baseband, followed by an A/D conversion.
If the signals are combined into only one digital signal we speak of analog beamforming; otherwise hybrid beamforming is used.
For transmission, the digital signal is converted to an analog baseband signal, followed by an up-conversion. Afterwards,
the signal is split into multiple signals, separately phase shifted, amplified and then transmitted at the antennas.
To utilize the full potential of the system, it is essential that the beams of Tx and Rx are aligned.
Therefore, a trial and error procedure is used to align the beams of Tx and Rx \cite{WIGIGSTDORIGINAL, 80211ayBF}.
This beam search procedure either utilizes beams of different widths with additional feedback or many beams of the same width with
only one feedback stage \cite{Palacios2016}. In both cases beams with a specific width, maximum gain and flatness need to be designed.
Based on requirements on the beam shape, this work formulates an optimization problem similar to \cite{Scholnik2016, Morabito2012},
which is then solved numerically.
The formulation includes the specific constraints of hybrid beamforming and low-resolution phase shifters.
In \cite{Palacios2016}, the authors approximate a digital beamforming vector by a hybrid one; we instead generate our beam by approximating a desired beam pattern.
The superscripts $s$ and $f$ are used to distinguish between sub-array and fully-connected hybrid beamforming.
Bold lowercase letters $\boldsymbol{a}$ and uppercase letters $\boldsymbol{A}$ represent vectors and matrices, respectively. The notation
$[\boldsymbol{a}]_n$ denotes the $n$th element of the vector $\boldsymbol{a}$.
The superscripts $T$ and $H$ represent the transpose and Hermitian transpose.
The symbol $\circ$ denotes the Hadamard product.
\begin{figure*}[!t]
\centering
\normalsize
\input{./Introduction/pics/OverviewModel_windows.tex}
\vspace*{-0.4cm}
\caption{System model of hybrid beamforming transmitter with $M$ antennas and $M_{\text{RFE}}$ RF-chains for the sub-array (a) and the fully-connected (b) case.}
\label{fig:SystemModel}
\hrulefill
\vspace*{-0.5cm}
\end{figure*}
\section{Numerical results}
To compare the designed beams, we first define metrics that quantify the differences between them.
Some of these metrics are similar to the ones defined in \cite{Donno2016}.
The first is the \textit{average gain} in the desired directions. Directly connected to the average gain is the \textit{maximum ripple} of the array factor in the desired directions.
For more reliable results, the transition region is excluded from the search for the maximum ripple.
A very important criterion for evaluating the performance of a beam for initial access is the \textit{overlap of adjacent beams} of the same width. Here we evaluate the
fraction of the angular region in which the gain difference between two beams is less than 5 dB, relative to the total area of one beam.
The last metric is the \textit{maximum side-lobe} level relative to the average gain in the desired directions.
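The four metrics can be evaluated directly on a sampled beam pattern. The following sketch is our illustration, not the authors' code; the helper name, the boolean masks, and the handling of the 5 dB overlap threshold are assumptions:

```python
import numpy as np

def beam_metrics(pattern_db, desired, transition, other_db=None):
    """Evaluate the beam metrics on a pattern sampled in dB.

    pattern_db : array-factor gain in dB on a uniform psi grid
    desired    : boolean mask of the desired (pass) directions
    transition : boolean mask of the transition region (excluded)
    other_db   : pattern of an adjacent beam, for the overlap metric
    """
    core = desired & ~transition          # pass region without transition
    avg_gain = pattern_db[core].mean()
    max_ripple = pattern_db[core].max() - pattern_db[core].min()
    stop = ~desired & ~transition         # stop (side-lobe) region
    max_sidelobe = pattern_db[stop].max() - avg_gain
    overlap = None
    if other_db is not None:
        # directions where the two beams differ by less than 5 dB,
        # relative to the total area of one beam
        close = np.abs(pattern_db - other_db) < 5.0
        overlap = 100.0 * np.count_nonzero(close & desired) / np.count_nonzero(desired)
    return avg_gain, max_ripple, max_sidelobe, overlap
```

On an idealized rectangular pattern this reproduces the expected values (zero ripple, side-lobe level equal to the stop-band gain minus the average gain).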
In the following, beams synthesized by the described method are shown.
For all systems, the transmitter is equipped with $M_{\text{RFE}} = 4$ \ac*{RF}-chains connected to 64 antenna elements, forming a \ac*{ULA} with half-wavelength inter-element spacing.
Since the antenna array is one-dimensional, it is sufficient to consider only one spatial direction. All plots refer to the angle $\psi = \frac{\lambda}{2}\sin(\phi)$, where
$\phi$ is the geometric angle between the line connecting all antennas and the direction of a planar wavefront.
For each system, three beams of width $b = \pi, \pi/2, \pi/4$ are synthesized. Note, however, that the beams in Fig. \ref{FIG:BEAMPATTERNENQ} and
\ref{FIG:BEAMPATTERNFNQ} are not designed to be used simultaneously, whereas the beams in Fig. \ref{FIG:BEAMPATTERNFQ} and \ref{FIG:BEAMPATTERNFQIMDEA} can be used simultaneously.
For a \ac*{ULA}, the spatial direction $\boldsymbol{u}$ is fully represented by $\psi$; therefore $W(\boldsymbol{u})$, $D(\boldsymbol{u})$ and $A(\boldsymbol{u}, \boldsymbol{a})$ depend only on $\psi$.
Since the magnitude of each element of $\boldsymbol{a}$
is less than or equal to one, a perfectly flat beam without sidelobes, if it could be constructed, would have the array factor $D_{\text{max}} = \sqrt{N2\pi/b}$.
As described in \cite{Scholnik2016}, such a beam cannot be realized; therefore, $D(\psi)$ is set to $\beta D_{\text{max}}$ in the desired directions and to zero elsewhere.
The parameter $\beta$ ensures the feasibility of a solution.
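The bound $D_{\text{max}}$ can be motivated by a short Parseval argument (our sketch, assuming the array factor $A(\psi,\boldsymbol{a}) = \sum_{n} [\boldsymbol{a}]_n e^{-jn\psi}$ and the unit-magnitude bound on the weights):

```latex
\begin{align}
\frac{1}{2\pi}\int_{-\pi}^{\pi} \left|A(\psi,\boldsymbol{a})\right|^2 \mathrm{d}\psi
  = \sum_{n=1}^{N} \left|[\boldsymbol{a}]_n\right|^2 \le N,
\end{align}
```

so an ideal flat beam of width $b$, with $|A| = D_{\text{max}}$ in the pass region and zero elsewhere, satisfies $D_{\text{max}}^2\, b/(2\pi) \le N$, i.e., $D_{\text{max}} = \sqrt{N 2\pi/b}$, with equality for unit-magnitude weights.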
The weighting of different parts of the beam pattern $W(\psi)$ is uniformly set to 1, except for a small transition region enclosing the desired directions.
For all systems, we set $p = 4$ in the objective function to ensure equal gain and side-lobe ripples. The integral over all spatial directions in the
objective function is approximated by a finite sum. To ensure a sufficient approximation, the interval is split into 512 elements. As described in \cite{Scholnik2016},
the computational complexity can be significantly reduced by reformulating the problem to use FFTs/IFFTs to calculate $A(\psi, \boldsymbol{a})$ and the derivatives of the objective function.
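The FFT-based evaluation can be sketched as follows (our illustration, not the authors' implementation): zero-padding the weight vector to the grid size evaluates the array factor on all 512 grid points in a single $\mathcal{O}(K \log K)$ transform.

```python
import numpy as np

M, K = 64, 512                                # antennas, psi-grid points
rng = np.random.default_rng(1)
a = np.exp(1j * 2 * np.pi * rng.random(M))    # some unit-modulus weights

# Zero-padded FFT evaluates A(psi_k) = sum_n a_n * exp(-1j * n * psi_k)
# on the uniform grid psi_k = 2*pi*k/K in one step.
A_fft = np.fft.fft(a, n=K)

# Direct O(M*K) evaluation for comparison
psi = 2 * np.pi * np.arange(K) / K
A_direct = np.exp(-1j * np.outer(psi, np.arange(M))) @ a
```

Both evaluations agree to machine precision; the same trick applies to the gradient of the objective, since it is built from the same exponential sums.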
For each system, the optimization process was started from several initializations. Since the employed \ac*{NLP} and \ac*{MINLP} solvers only guarantee
a local minimum for a non-convex problem, the results were compared and the solution yielding the minimum objective function was selected.
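This multi-start strategy can be illustrated on a toy non-convex objective (a minimal sketch; the objective function, the solver choice, and the number of starts are our assumptions, not the paper's setup):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def cost(x):
    # toy non-convex objective standing in for the beam-synthesis cost
    return np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2

# run a local solver from several initializations and keep the best result
results = [minimize(cost, x0=rng.uniform(-4.0, 4.0, size=1)) for _ in range(20)]
best = min(results, key=lambda r: r.fun)
```

Each run converges only to the nearest local minimum; comparing the final objective values and keeping the smallest one mimics the selection procedure described above.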
The metrics comparing the performance of the different beams are shown in Table \ref{TAB:measuretab}, alongside a reference to the respective figure.
In Fig. \ref{FIG:BEAMPATTERNENQ} and \ref{FIG:BEAMPATTERNFNQ}, the synthesized beams for sub-array and
fully-connected hybrid beamforming are shown. For (a), (b) and (c), the gain penalty $\beta$ was selected to be 3 dB, 2 dB and 2 dB, respectively.
Compared to the fully-connected case, sub-array hybrid
beamforming is characterized by a larger gain ripple and higher side-lobe energy, while having the same transition width.
In Fig. \ref{FIG:BEAMPATTERNFQ} and \ref{FIG:BEAMPATTERNFQIMDEA}, fully-connected hybrid beamforming with quantized phase shifters was applied.
The beams in Fig. \ref{FIG:BEAMPATTERNFQIMDEA} are designed with the method described in \cite{Palacios2016}.
In both figures, the beams at each stage (a), (b) and (c) are optimized for simultaneous transmission. The power constraint for this case is also different:
only the sum power is constrained to be less than or equal to 1. For our evaluation, we used the same constraints.
In Fig. \ref{FIG:BEAMPATTERNFQIMDEA}, and especially in (a), there are multiple directions where both beams almost overlap.
In these directions, an estimate of the link quality achieved with either beam will be very similar.
This can possibly lead to a wrong decision and, in turn, to large errors in a multi-stage beam
training procedure. In contrast, the solution evaluated in Fig. \ref{FIG:BEAMPATTERNFQ} offers a sharper transition. The
attenuation in the stop directions is also close to uniform, enabling a predictable performance. The only disadvantage is the larger ripple inside the center main beam.
The shortcomings observed in Fig. \ref{FIG:BEAMPATTERNFQIMDEA} are introduced during the generation of $\boldsymbol{a}$.
As described in \cite{Palacios2016}, this method approximates a version of $\boldsymbol{a}_d$ generated under the assumption of fully digital beamforming.
Since, for a low number of \ac*{RF}-chains, this vector cannot be well approximated, the resulting beam pattern does not correspond well to the
desired one. It is also important to mention that there is no one-to-one mapping between the error in approximating $\boldsymbol{a}_d$ and
the error of the corresponding beam.
As shown in \cite{Palacios2016}, the method works well if $\boldsymbol{a}_d$ can be well approximated by a larger number of \ac*{RF}-chains.
\begin{figure}
\vspace*{-0.5cm}
\centering
\subfloat[][]{
\begin{tikzpicture}
\begin{polaraxis}[
scale only axis=true,
width = 0.25*\columnwidth,
height = 0.25*\columnwidth,
rotate=-90,
grid=both,
xticklabel=$\pgfmathprintnumber{\tick}^{\circ}$,
xtick={0, 45, 135, 180, 225, 315},
minor xtick={90, 270},
x dir=reverse,
xticklabel style={anchor=-\tick-90, font=\tiny},
xtick style={font=\tiny},
ytick={-40,-30,-20,-10,0},
ymin=-40, ymax=5,
y coord trafo/.code=\pgfmathparse{#1+40},
y coord inv trafo/.code=\pgfmathparse{#1-40},
ylabel style={font=\tiny, yshift=-0.11*\columnwidth, xshift=-0.20*\columnwidth},
ylabel={gain [dBr]},
yticklabel style={anchor=east, xshift=-0.17*\columnwidth, font=\tiny},
y axis line style={yshift=-0.17*\columnwidth,},
ytick style={yshift=-0.17*\columnwidth,font=\tiny},
]
\addplot [no markers, thick, red] table [x=y, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_1_Beam_1_hbf.txt};
\addplot [no markers, thick, blue] table [x=y, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_1_Beam_2_hbf.txt};
\end{polaraxis}
\end{tikzpicture}
}
\subfloat[][]{
\begin{tikzpicture}
\begin{polaraxis}[
scale only axis=true,
width = 0.25*\columnwidth,
height = 0.25*\columnwidth,
grid=both,
rotate=-90,
xticklabel=$\pgfmathprintnumber{\tick}^{\circ}$,
xtick={0, 45, 135, 180, 225, 315},
minor xtick={90, 270},
x dir=reverse,
xticklabel style={anchor=-\tick-90, font=\tiny},
xtick style={font=\tiny},
ytick={-40,-30,-20,-10,0},
ymin=-40, ymax=5,
y coord trafo/.code=\pgfmathparse{#1+40},
y coord inv trafo/.code=\pgfmathparse{#1-40},
yticklabels={,,},
ymajorticks=true,
yminorticks=true,
]
\addplot [no markers, thick, red] table [x=y, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_2_Beam_1_hbf.txt};
\addplot [no markers, thick, blue] table [x=y, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_2_Beam_2_hbf.txt};
\end{polaraxis}
\end{tikzpicture}
}
\subfloat[][]{
\begin{tikzpicture}
\begin{polaraxis}[
scale only axis=true,
width = 0.25*\columnwidth,
height = 0.25*\columnwidth,
grid=both,
rotate=-90,
xticklabel=$\pgfmathprintnumber{\tick}^{\circ}$,
xtick={0, 45, 135, 180, 225, 315},
minor xtick={90, 270},
x dir=reverse,
xticklabel style={anchor=-\tick-90, font=\tiny},
xtick style={font=\tiny},
ytick={-40,-30,-20,-10,0},
ymin=-40, ymax=5,
y coord trafo/.code=\pgfmathparse{#1+40},
y coord inv trafo/.code=\pgfmathparse{#1-40},
yticklabels={,,},
ymajorticks=true,
yminorticks=true,
]
\addplot [no markers, thick, red] table [x=y, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_3_Beam_1_hbf.txt};
\addplot [no markers, thick, blue] table [x=y, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_3_Beam_2_hbf.txt};
\end{polaraxis}
\end{tikzpicture}
}
\caption{Beams of different width of a sub-array hybrid beamforming array.}
\label{FIG:BEAMPATTERNENQ}
\vspace*{-0.25cm}
\end{figure}
\begin{figure}
\vspace*{-0.5cm}
\centering
\subfloat[][]{
\begin{tikzpicture}
\begin{polaraxis}[
scale only axis=true,
width = 0.25*\columnwidth,
height = 0.25*\columnwidth,
rotate=-90,
grid=both,
xticklabel=$\pgfmathprintnumber{\tick}^{\circ}$,
xtick={0, 45, 135, 180, 225, 315},
minor xtick={90, 270},
x dir=reverse,
xticklabel style={anchor=-\tick-90, font=\tiny},
xtick style={font=\tiny},
ytick={-40,-30,-20,-10,0},
ymin=-40, ymax=5,
y coord trafo/.code=\pgfmathparse{#1+40},
y coord inv trafo/.code=\pgfmathparse{#1-40},
ylabel style={font=\tiny, yshift=-0.11*\columnwidth, xshift=-0.20*\columnwidth},
ylabel={gain [dBr]},
yticklabel style={anchor=east, xshift=-0.17*\columnwidth, font=\tiny},
y axis line style={yshift=-0.17*\columnwidth,},
ytick style={yshift=-0.17*\columnwidth,font=\tiny},
]
\addplot [no markers, thick, red] table [x=y, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_1_Beam_1_hbf_full.txt};
\addplot [no markers, thick, blue] table [x=y, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_1_Beam_2_hbf_full.txt};
\end{polaraxis}
\end{tikzpicture}
}
\subfloat[][]{
\begin{tikzpicture}
\begin{polaraxis}[
scale only axis=true,
width = 0.25*\columnwidth,
height = 0.25*\columnwidth,
grid=both,
rotate=-90,
xticklabel=$\pgfmathprintnumber{\tick}^{\circ}$,
xtick={0, 45, 135, 180, 225, 315},
minor xtick={90, 270},
x dir=reverse,
xticklabel style={anchor=-\tick-90, font=\tiny},
xtick style={font=\tiny},
ytick={-40,-30,-20,-10,0},
ymin=-40, ymax=5,
y coord trafo/.code=\pgfmathparse{#1+40},
y coord inv trafo/.code=\pgfmathparse{#1-40},
yticklabels={,,},
ymajorticks=true,
yminorticks=true,
]
\addplot [no markers, thick, red] table [x=y, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_2_Beam_1_hbf_full.txt};
\addplot [no markers, thick, blue] table [x=y, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_2_Beam_2_hbf_full.txt};
\end{polaraxis}
\end{tikzpicture}
}
\subfloat[][]{
\begin{tikzpicture}
\begin{polaraxis}[
scale only axis=true,
width = 0.25*\columnwidth,
height = 0.25*\columnwidth,
grid=both,
rotate=-90,
xticklabel=$\pgfmathprintnumber{\tick}^{\circ}$,
xtick={0, 45, 135, 180, 225, 315},
minor xtick={90, 270},
x dir=reverse,
xticklabel style={anchor=-\tick-90, font=\tiny},
xtick style={font=\tiny},
ytick={-40,-30,-20,-10,0},
ymin=-40, ymax=5,
y coord trafo/.code=\pgfmathparse{#1+40},
y coord inv trafo/.code=\pgfmathparse{#1-40},
yticklabels={,,},
ymajorticks=true,
yminorticks=true,
]
\addplot [no markers, thick, red] table [x=y, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_3_Beam_1_hbf_full.txt};
\addplot [no markers, thick, blue] table [x=y, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_3_Beam_2_hbf_full.txt};
\end{polaraxis}
\end{tikzpicture}
}
\caption{Beams of different width of a fully-connected hybrid beamforming array.}
\label{FIG:BEAMPATTERNFNQ}
\vspace*{-0.25cm}
\end{figure}
\begin{figure}
\vspace*{-0.5cm}
\centering
\subfloat[][]{
\begin{tikzpicture}
\begin{polaraxis}[
scale only axis=true,
width = 0.25*\columnwidth,
height = 0.25*\columnwidth,
rotate=-90,
grid=both,
xticklabel=$\pgfmathprintnumber{\tick}^{\circ}$,
xtick={0, 45, 135, 180, 225, 315},
minor xtick={90, 270},
x dir=reverse,
xticklabel style={anchor=-\tick-90, font=\tiny},
xtick style={font=\tiny},
ytick={-30,-20,-10,0,10},
ymin=-30, ymax=10,
y coord trafo/.code=\pgfmathparse{#1+30},
y coord inv trafo/.code=\pgfmathparse{#1-30},
ylabel style={font=\tiny, yshift=-0.11*\columnwidth, xshift=-0.20*\columnwidth},
ylabel={gain [dB]},
yticklabel style={anchor=east, xshift=-0.17*\columnwidth, font=\tiny},
y axis line style={yshift=-0.17*\columnwidth,},
ytick style={yshift=-0.17*\columnwidth,font=\tiny},
]
\addplot [no markers, thick, red] table [x expr=\thisrow{y}+90, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_1_Beam_1_hbf_full_quant_2bit.txt};
\addplot [no markers, thick, blue] table [x expr=\thisrow{y}+90, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_1_Beam_2_hbf_full_quant_2bit.txt};
\end{polaraxis}
\end{tikzpicture}
}
\subfloat[][]{
\begin{tikzpicture}
\begin{polaraxis}[
scale only axis=true,
width = 0.25*\columnwidth,
height = 0.25*\columnwidth,
grid=both,
rotate=-90,
xticklabel=$\pgfmathprintnumber{\tick}^{\circ}$,
xtick={0, 45, 135, 180, 225, 315},
minor xtick={90, 270},
x dir=reverse,
xticklabel style={anchor=-\tick-90, font=\tiny},
xtick style={font=\tiny},
ytick={-30,-20,-10,0,10},
ymin=-30, ymax=10,
y coord trafo/.code=\pgfmathparse{#1+30},
y coord inv trafo/.code=\pgfmathparse{#1-30},
yticklabels={,,},
ymajorticks=true,
yminorticks=true,
]
\addplot [no markers, thick, red] table [x expr=\thisrow{y}+90, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_2_Beam_2_hbf_full_quant_2bit.txt};
\addplot [no markers, thick, blue] table [x expr=\thisrow{y}+90, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_2_Beam_1_hbf_full_quant_2bit.txt};
\end{polaraxis}
\end{tikzpicture}
}
\subfloat[][]{
\begin{tikzpicture}
\begin{polaraxis}[
scale only axis=true,
width = 0.25*\columnwidth,
height = 0.25*\columnwidth,
grid=both,
rotate=-90,
xticklabel=$\pgfmathprintnumber{\tick}^{\circ}$,
xtick={0, 45, 135, 180, 225, 315},
minor xtick={90, 270},
x dir=reverse,
xticklabel style={anchor=-\tick-90, font=\tiny},
xtick style={font=\tiny},
ytick={-30,-20,-10,0,10},
ymin=-30, ymax=10,
y coord trafo/.code=\pgfmathparse{#1+30},
y coord inv trafo/.code=\pgfmathparse{#1-30},
yticklabels={,,},
ymajorticks=true,
yminorticks=true,
]
\addplot [no markers, thick, red] table [x expr=\thisrow{y}+180, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_3_Beam_2_hbf_full_quant_2bit.txt};
\addplot [no markers, thick, blue] table [x expr=\thisrow{y}+180, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_3_Beam_1_hbf_full_quant_2bit.txt};
\end{polaraxis}
\end{tikzpicture}
}
\caption{Beams of different width optimized for side-lobe attenuation and with 2 bit quantization of the phase shifters of a fully-connected hybrid beamforming array.}
\label{FIG:BEAMPATTERNFQ}
\vspace*{-0.25cm}
\end{figure}
\begin{figure}
\vspace*{-0.5cm}
\centering
\subfloat[][]{
\begin{tikzpicture}
\begin{polaraxis}[
scale only axis=true,
width = 0.25*\columnwidth,
height = 0.25*\columnwidth,
rotate=-90,
grid=both,
xticklabel=$\pgfmathprintnumber{\tick}^{\circ}$,
xtick={0, 45, 135, 180, 225, 315},
minor xtick={90, 270},
x dir=reverse,
xticklabel style={anchor=-\tick-90, font=\tiny},
xtick style={font=\tiny},
ytick={-30,-20,-10,0,10},
ymin=-30, ymax=10,
y coord trafo/.code=\pgfmathparse{#1+30},
y coord inv trafo/.code=\pgfmathparse{#1-30},
ylabel style={font=\tiny, yshift=-0.11*\columnwidth, xshift=-0.20*\columnwidth},
ylabel={gain [dB]},
yticklabel style={anchor=east, xshift=-0.17*\columnwidth, font=\tiny},
y axis line style={yshift=-0.17*\columnwidth,},
ytick style={yshift=-0.17*\columnwidth,font=\tiny},
]
\addplot [no markers, thick, red] table [x expr=\thisrow{y}+90, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_1_Beam_1_hbf_full_imdea.txt};
\addplot [no markers, thick, blue] table [x expr=\thisrow{y}+90, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_1_Beam_2_hbf_full_imdea.txt};
\end{polaraxis}
\end{tikzpicture}
}
\subfloat[][]{
\begin{tikzpicture}
\begin{polaraxis}[
scale only axis=true,
width = 0.25*\columnwidth,
height = 0.25*\columnwidth,
grid=both,
rotate=-90,
xticklabel=$\pgfmathprintnumber{\tick}^{\circ}$,
xtick={0, 45, 135, 180, 225, 315},
minor xtick={90, 270},
x dir=reverse,
xticklabel style={anchor=-\tick-90, font=\tiny},
xtick style={font=\tiny},
ytick={-30,-20,-10,0,10},
ymin=-30, ymax=10,
y coord trafo/.code=\pgfmathparse{#1+30},
y coord inv trafo/.code=\pgfmathparse{#1-30},
yticklabels={,,},
ymajorticks=true,
yminorticks=true,
]
\addplot [no markers, thick, red] table [x expr=\thisrow{y}+90, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_2_Beam_1_hbf_full_imdea.txt};
\addplot [no markers, thick, blue] table [x expr=\thisrow{y}+90, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_2_Beam_2_hbf_full_imdea.txt};
\end{polaraxis}
\end{tikzpicture}
}
\subfloat[][]{
\begin{tikzpicture}
\begin{polaraxis}[
scale only axis=true,
width = 0.25*\columnwidth,
height = 0.25*\columnwidth,
grid=both,
rotate=-90,
xticklabel=$\pgfmathprintnumber{\tick}^{\circ}$,
xtick={0, 45, 135, 180, 225, 315},
minor xtick={90, 270},
x dir=reverse,
xticklabel style={anchor=-\tick-90, font=\tiny},
xtick style={font=\tiny},
ytick={-30,-20,-10, 0, 10},
ymin=-30, ymax=10,
y coord trafo/.code=\pgfmathparse{#1+30},
y coord inv trafo/.code=\pgfmathparse{#1-30},
yticklabels={,,},
ymajorticks=true,
yminorticks=true,
]
\addplot [no markers, thick, red] table [x expr=\thisrow{y}+90, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_3_Beam_1_hbf_full_imdea.txt};
\addplot [no markers, thick, blue] table [x expr=\thisrow{y}+90, y=x] {./SimulationResults/PlotData/MR_64_N_16_Stage_3_Beam_2_hbf_full_imdea.txt};
\end{polaraxis}
\end{tikzpicture}
}
\caption{Beams of different width of a fully-connected hybrid beamforming array with phase quantization according to \cite{Palacios2016}.}
\label{FIG:BEAMPATTERNFQIMDEA}
\vspace*{-0.25cm}
\end{figure}
\begin{table}
\renewcommand{\arraystretch}{1.3}
\caption{Comparison of the designed beams.}
\label{TAB:measuretab}
\centering
\begin{tabular}{|p{2.565cm}|p{1cm}|p{1cm}|p{1cm}|p{1.1cm}|}
\hline
Beam & avg. gain [dB] & max ripple [dB] & overlap [\%] & max side-lobe [dB]\\ \hline \hline
Fig. \ref{FIG:BEAMPATTERNENQ} (a) & 18.2 & 4.00 & 2.44 & -17.4 \\ \hline
Fig. \ref{FIG:BEAMPATTERNENQ} (b) & 21.7 & 2.89 & 3.22 & -16.2 \\ \hline
Fig. \ref{FIG:BEAMPATTERNENQ} (c) & 26.3 & 2.76 & 7.21 & -16.3 \\ \hline
Fig. \ref{FIG:BEAMPATTERNFNQ} (a) & 18.2 & 2.04 & 2.63 & -22.6 \\ \hline
Fig. \ref{FIG:BEAMPATTERNFNQ} (b) & 22.0 & 2.10 & 2.63 & -22.8 \\ \hline
Fig. \ref{FIG:BEAMPATTERNFNQ} (c) & 24.8 & 2.35 & 5.26 & -23.3 \\ \hline
Fig. \ref{FIG:BEAMPATTERNFQ} (a) & 2.52 & 3.90 & 7.66 & -10.3 \\ \hline
Fig. \ref{FIG:BEAMPATTERNFQ} (b) & 5.50 & 3.01 & 6.54 & -10.1 \\ \hline
Fig. \ref{FIG:BEAMPATTERNFQ} (c) & 8.23 & 1.47 & 6.63 & -12.7 \\ \hline
Fig. \ref{FIG:BEAMPATTERNFQIMDEA} (a) & 2.22 & 8.82 & 34.4 & -2.16 \\ \hline
Fig. \ref{FIG:BEAMPATTERNFQIMDEA} (b) & 5.04 & 7.25 & 8.20 & -4.04 \\ \hline
Fig. \ref{FIG:BEAMPATTERNFQIMDEA} (c) & 8.02 & 1.49 & 14.4 & -8.97 \\ \hline
\end{tabular}
\end{table}
\section{Introduction}
\label{sec:introduction}
There are many well-established theories to describe open quantum many-particle systems consisting of, e.g.,~quasi-free charge carriers in semiconductor heterostructures~\cite{hohenester_density-matrix_1997,hoyer_influence_2003}, cavity photons~\cite{kira_cluster-expansion_2008,mootz_sequential_2012,chow_emission_2014}, phonons~\cite{lorke_influence_2006,kabuss_threshold_2013}, ultracold Bose-gases~\cite{witthaut_beyond_2011,trimborn_decay_2011}, polaritons~\cite{tignon_unified_2000}, and spins~\cite{kapetanakis_spin_2008} in an approximate way. The theories in the preceding references are related to mean-field theories and their successive improvements, like the cluster expansion~\cite{fricke_transport_1996,fricke_improved_1997,hoyer_cluster_2004} (CE), in which equations of motion (EoM) for the mean single-particle occupations and their correlations are derived, whereas higher-order correlations are neglected.
Besides the approximations that are necessary to describe the interacting system itself, many of the referenced models also require additional approximations to include dissipative processes resulting from the system's coupling to an external bath. The von Neumann Lindblad equation is a common procedure to take the influence of the external bath on the system into account \cite{breuer_theory_2002,carmichael_dissipation_1999,may_exciton_2003}, provided that the Born-Markov approximation is justified~\cite{nakatani_quantum_2010}.
The experimental progress in the field of cavity quantum electrodynamics in semiconductors \cite{reithmaier_strong_2004,wiersig_direct_2009,nomura_laser_2010,reitzenstein_semiconductor_2012,khitrova_vacuum_2006} shows that there are many interesting systems in which the basic assumptions of mean-field theories, a large Hilbert space and weak interaction, are not valid. Many of these systems are sufficiently small to be described by their exact wave function or density matrix, formulated in the Hilbert space of all possible many-particle configurations \cite{carmele_antibunching_2010,ritter_emission_2010}, without the need for an approximate theory. Despite the efforts that are made to improve the theories from both sides (exact description of relatively small systems \cite{gies_3_2012,carmele_antibunching_2010,richter_numerically_2015} and approximate description of relatively large systems \cite{leymann_expectation_2014,florian_equation--motion_2013,richter_few-photon_2009,mascarenhas_matrix-product-operator_2015,weimer_variational_2015}), there is still a gap between those systems that are small enough to be described exactly and those that are large enough to fulfill the requirements of approximate theories like the CE.
In this article, we describe pitfalls in the choice of basis states that may occur when applying approximate theories to small systems within or close to the mentioned gap. We use three examples to contrast approaches that are based on a formulation in single-particle states with approaches that use many-particle configuration states as a basis.
Although the formulations are equivalent, the choice of basis states can determine the further steps. In our three examples, we show that the choice of basis states can suggest misleading approximations or determine the modeling of dissipative processes, which leads to deviations between the results of the two formulations that go beyond simple approximation errors.
The remainder of this paper is organized in the following way: In Sec.~\ref{sec:issue}, we discuss, based on an extended Jaynes-Cummings model \cite{jaynes_comparison_1963,shore_jaynes-cummings_1993} (JCM), the effects of a fallacious mean-field factorization scheme, implied by a description in single-particle states.
In Sec.~\ref{sec:lt}, we consider an open system treated in the von Neumann Lindblad (vNL) formalism. We demonstrate that the basis states in which the collapse operators, and with them the dissipator of the vNL equation, are constructed can actually influence the modeling of the system. In the first example concerning the vNL equation (Sec.~\ref{ssec:hole}), the choice of basis states limits the possibilities to adjust the model to the experimental situation. In the second example concerning the vNL formalism (Sec.~\ref{ssec:ssd}), the basis states determine whether two parts of the system are affected by the environment independently or intertwined, such that one system part is dephased by the dissipative decay of the other. Finally, we recapitulate how the dissipator in the vNL equation can be constructed from a system-plus-reservoir approach~\cite{carmichael_dissipation_1999} and resolve the misconception that has led to results depending on the choice of basis states (Sec.~\ref{ssec.sytempres}).
The last section, \ref{sec:last}, summarizes and concludes the paper. In the appendix, we give details of the EoM and the parameter space of the semiconductor JCM (App.~\ref{app:sjcm}). Furthermore, we outline the derivation of the analytic solution for the open system (App.~\ref{app:deph}) and present the effects of an additional external pump on the open system (App.~\ref{app:pump}).
\section{Fallacious factorization}
\label{sec:issue}
To illustrate a conceptual problem that can arise from a Hartree-Fock-like factorization of expectation values, we consider a model with Jaynes-Cummings interaction, introduced in \cite{richter_few-photon_2009}, with the Hamiltonian
\begin{align}
H =& \omega b^\dagger b + \varepsilon_e e^\dagger e + \varepsilon_h h^\dagger h - (g h e b^\dagger + \mathrm{h.c.}),
\end{align}
where $b^{(\dagger)}$ annihilates (creates) a cavity mode photon with frequency $\omega$ and $e^{(\dagger)}/h^{(\dagger)}$ annihilates (creates) an electron/hole, with energy $\varepsilon_e/\varepsilon_h$, respectively. The dipole matrix element $g$ can be chosen real, and all parameters are specified in units of $\hbar$.
This Hamiltonian describes a two-level quantum dot (QD) embedded in a semiconductor environment, coupled to a single cavity mode. It would be identical to the JCM if one restricted the electronic states to fully correlated electrons and holes, i.e.,~to a single exciton (electron-hole pair). However, in order to describe the semiconductor properties of a QD, an independent occupation of the electron and hole states is allowed in this model, which we term the semiconductor JCM. Both systems perform a coherent exchange between the cavity photons and the exciton, called Rabi oscillations \cite{shore_jaynes-cummings_1993}. The calculation of the time evolution using EoM for the expectation values produces a hierarchy of coupled equations. We will show that a factorization of many-particle expectation values into single-particle expectation values, often used to truncate hierarchies of EoM, is not only unnecessary in this exemplary model, but also leads to conceptually wrong conclusions.
To derive the EoM, we follow the approach of \cite{richter_few-photon_2009}, in which the photons are not described by creation and annihilation operators, but by the photon probability distribution. This allows for a variant of the CE in which the photonic part is treated exactly, and is termed the photon probability CE by the authors of \cite{richter_few-photon_2009}. The expectation values of interest are the hole $f_h=\mean{h^\dagger h}$ and the electron $f_e=\mean{e^\dagger e}$ occupation, the occupation of the Fock-states with $n$ photons $p_n = \mean{\ketbra{n}{n}}$, and the imaginary part of the photon-assisted polarization \mbox{$\psi_n = \operatorname{Im}\mean{\ketbra{n+1}{n}h e}$}. From the Heisenberg equation for the generalized occupations $f^{e}_n = \mean{\ketbra{n}{n}{e^\dagger e}}$ and $f^{h}_n = \mean{\ketbra{n}{n}{h^\dagger h}}$, with $f^{e/h} = \sum_n f^{e/h}_n$, we obtain the time derivatives
\begin{align}
\mathrm{d}_t f^{e/h}_n =& 2g\sqrt{n+1}~\psi_n\label{eq:time_fn},\\
\mathrm{d}_t p_n = &2g\sqrt{n+1}~\psi_n - 2g \sqrt{n}~\psi_{n-1},\label{eq:time_pn}\\
\mathrm{d}_t\psi_n = &-g\sqrt{n+1}~(p_{n+1}-f^h_{n+1} -f^e_{n+1})\nonumber\\
-& g\sqrt{n+1}~\left(C_{n+1}^X - C_n^X \right), \label{eq:time_psin}
\end{align}
where the diagonal terms are zero since the cavity is chosen to be in resonance with the QD. The EoM for the photon-assisted polarization couples to the higher-order term
\begin{align}
C_n^X = \big\langle \ketbra{n}{n} e^\dagger e h^\dagger h \big\rangle
\end{align}
that describes the electron-hole correlation. In the JCM, in which electrons and holes are perfectly correlated, the many-particle term $C_n^X$ can be expressed exactly by the already known single-electron expectation values $f_n^e$, thus closing the hierarchy. In a semiconductor environment, the assumption of perfectly correlated electrons and holes is not valid~\cite{berstermann_correlation_2007}. Therefore, the term $C_n^X$ can no longer be expressed by single-particle terms alone.
\paragraph*{Guided by the single-particle basis,} one can proceed with an approximate treatment of $C^X_n$. The first order of the photon probability CE results in the factorization
\begin{align}
\begin{split}
C_n^X = \mean{\ketbra{n}{n}e^\dagger e h^\dagger h } &\approx \frac{\mean{\ketbra{n}{n} e^\dagger e}\mean{\ketbra{n}{n}h^\dagger h}}{\mean{\ketbra{n}{n}}} \\
&= \frac{f^e_nf_n^h}{p_n},
\end{split}
\end{align}
which is related to a neglect of the electron-hole correlation
\begin{align}
\delta=\mean{e^\dagger e h^\dagger h} - \mean{e^\dagger e}\mean{h^\dagger h}\label{eq:purecorr},
\end{align}
and corresponds to the Hartree-Fock approximation. With the applied factorization, one obtains a closed set of EoM and is able to calculate the dynamics of the electronic occupations, of the mean photon number $N = \mean{b^\dagger b} = \sum np_n$, and of the photon autocorrelation function~\cite{glauber_quantum_1963} at zero delay time $\tau= 0$
\begin{align}
g^{(2)}(0) = \frac{\mean{b^\dagger b^\dagger b b}}{\mean{b^\dagger b}^2} = \frac{\sum (n^2 - n)p_n}{(\sum n p_n)^2}.
\end{align}
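Given a photon-number distribution $p_n$, both $N$ and $g^{(2)}(0)$ follow from simple moment sums. A minimal numerical sketch (our illustration; the function name is an assumption):

```python
import numpy as np

def g2_zero(p):
    """g2(0) from a photon-number distribution p_n (truncated at len(p)-1)."""
    n = np.arange(len(p))
    N = np.sum(n * p)                     # mean photon number <b'b>
    return np.sum((n ** 2 - n) * p) / N ** 2

# Sanity checks: a Fock state |n> gives g2(0) = 1 - 1/n (antibunched),
# while Poisson statistics (coherent state) give g2(0) = 1.
```

For the initially prepared single-photon Fock state used below, $g^{(2)}(0) = 0$, and any $g^{(2)}(0) > 0$ during the evolution signals a redistribution of the photon statistics.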
\begin{figure}[]
\centering
\includegraphics[width=0.48\textwidth]{Fig1_PPCEcontraEV_gamma0.pdf}
\caption{Time evolution of the semiconductor JCM with exact and factorized EoM (Hartree-Fock), for the mean photon number $N$ (a) and the photon autocorrelation $g^{(2)}(0)$ (b); Initial conditions: $p_1=1$, $f^e, f^h = 0.3, 0.1$.}
\label{fig:eom}
\end{figure}
Figure~\ref{fig:eom} shows the dynamics of the semiconductor JCM for a cavity with an initially prepared single-photon Fock state and $f^e,f^h = 0.3,0.1$. The dimensionless charge $C = f^h - f^e$ of the QD is preserved by the Hamiltonian. It is proposed in \cite{richter_few-photon_2009} that, with a fixed number of total initial excitations (i.e.,~$p_n=const$ and $f^e+f^h=const$), the charge $C$ determines the maximum amplitude of the photon autocorrelation function $g^{(2)}_\mathrm{max}(0)$. Figure \ref{fig:correlation}(a) shows the dependence of $g^{(2)}_\mathrm{max}(0)$ on $C$, when the system is initially prepared in a single-photon Fock state $p_1=1$ and $f^e+f^h=1$, for the exact (see next paragraph) and the factorized system. The curves have their maximum at $C=0$, which suits the notion that the probability that an electron can recombine with a matching hole, i.e.,~the ability of the system to oscillate, is directly connected to $C$. The exact and the approximate curve deviate, since the electron-hole correlation $\delta$ is forced to be zero for all times in the factorized version of the EoM, whereas $\delta=0$ is only an initial condition in the exact EoM (see next paragraph).
\begin{figure}[]
\centering
\includegraphics[width=0.48\textwidth]{Fig2_dep_proj.pdf}
\caption{The dependence of the maximum amplitude of $g^{(2)}(0)$ on the charge $C$ and the oscillation ability $O$. (a): $g^{(2)}_\mathrm{max}(0)$ in dependence of $C$ for the exact and factorized (Hartree-Fock) EoM. (b): $g^{(2)}_\mathrm{max}(0)$ (green area) in dependence of $C$ and $O$, and the electron-hole correlation $\delta$ (blue/red contour plot in the $C$-$O$-plane) which increases with $O$ from $\delta=-\nicefrac{1}{4}$ to $\delta=\nicefrac{1}{4}$. The special case of $\delta=0$ is marked by the black curves. The initial conditions are $\delta=0$, $f^e+f^h=1$ and $p_1=1$. To fix the additional free parameter we have chosen the initial probabilities for $\ket{G}$ and $\ket{X_s}$ to be equal, which is equivalent to the restriction to a fixed number of total excitations.}
\label{fig:correlation}
\end{figure}
\paragraph*{When the system is described in many-particle configurations,} it becomes apparent that the previous conclusion is only an artifact of the focus on single-particle properties, which manifests itself in the factorization of the term $C_n^X$\footnote{Explicitly calculating the time derivative of $C_n^X$ reveals that it couples only to the known quantities $\psi_n$ (App.~\ref{app:sjcm}).}. Factorizing the term $C_n^X$ forces the electron-hole correlation $\delta$ to be zero and introduces a constraint to the system that eventually results in the artificial connection between $g^{(2)}_\mathrm{max}(0)$ and $C$.
\begin{figure}[]
\centering
\includegraphics[width=0.4\textwidth]{Fig3_SJCM_states.pdf}
\caption{Illustration of the electronic configurations $\ket{i}$ of the semiconductor JCM. The original JCM consists of the states within the dashed box, which are the only ones appearing in the interaction part of the Hamiltonian in Eq.~(\ref{eq:sjcm_config}).}
\label{fig:configs}
\end{figure}
A reformulation in terms of many-particle configurations, depicted in Fig.~\ref{fig:configs}, reveals which electronic states of the QD take part in the Rabi oscillations. The state of the system is determined by four coefficients $c_i=\mean{\ketbra{i}{i}}$ ~\footnote{In the incoherent regime all considered observables depend only on the absolute value of the coefficients, so w.l.o.g.\ they can be chosen real. The free choice of the coefficients is reduced by the condition $\operatorname{Tr}\rho=1$, resulting in three independent coefficients. In the formulation using creation and annihilation operators the three degrees of freedom are $f^e$, $f^h$ and $C^X$.}, corresponding to the many-particle configuration states $\ket{i}$. Examining the Hamiltonian formulated in this basis
\begin{align}
H &= \omega b^\dagger b + \sum \varepsilon_i \ketbra{i}{i} -\left(g\ketbra{G}{X_s}b^\dagger + \mathrm{h.c.}\right)\label{eq:sjcm_config}
\end{align}
reveals that only $\ket{G}$ and $\ket{X_s}$ take part in the Rabi oscillations. This finding suggests the definition of a new quantity, the oscillation ability $O = \mean{\ketbra{G}{G}} + \mean{\ketbra{X_s}{X_s}}$, which actually determines the amplitude of the Rabi oscillations, rather than the charge $C = \mean{\ketbra{+_s}{+_s}} - \mean{\ketbra{-_s}{-_s}}$. Even in the case of $C=0$ and fixed excitations, one could have no Rabi oscillations at all if the initial electronic state of the system is equally distributed between the configurations $\ket{+_s}$ and $\ket{-_s}$. Figure~\ref{fig:correlation}(b) shows $g^{(2)}_\mathrm{max}(0)$ as a function of $O$ and $C$ for a constant amount of total excitations (see App.~\ref{app:parameter}). The amplitude of $g^{(2)}(0)$ increases with $O$, \emph{independent} of $C$\,~\footnote{The domain of $O$ is determined by $C$ (Eq.~(\ref{eq:domain}), Fig.~\ref{fig:correlation}~(b)) and therefore the maximum amplitude of $g^{(2)}(0)$ is indirectly affected by $C$.}. The correlation between electrons and holes $\delta$ is depicted as a contour plot at the bottom of Fig.~\ref{fig:correlation}~(b), varying from anticorrelated to fully correlated electrons and holes with increasing $O$. The special case of the Hartree-Fock factorization $\delta=0$ is marked by the three black curves. Following this path in parameter space, one regains the artificial dependence of the maximum amplitude of $g^{(2)}(0)$ on the charge $C$. This dependence, projected onto the $C$-$g^{(2)}(0)$-plane, is identical to the black curve in Fig.~\ref{fig:correlation}~(a).
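The decoupling of the charged configurations can be checked numerically. The following minimal sketch (the photon cutoff $N$ and the values for $\omega$, $g$, and the configuration energies are illustrative assumptions, not values from this work) builds the Hamiltonian of Eq.~(\ref{eq:sjcm_config}) and verifies that the projector onto $\ket{+_s}$ and $\ket{-_s}$ commutes with it:

```python
import numpy as np

# Configuration basis |G>, |X_s>, |+_s>, |-_s>; photon cutoff N is an assumption.
N = 4
b = np.diag(np.sqrt(np.arange(1, N)), 1)   # photon annihilation operator
G, Xs, Ps, Ms = np.eye(4)                  # electronic configurations as unit vectors
omega, g = 1.0, 0.1                        # illustrative parameters
eps = [0.0, 1.0, 0.5, 0.5]                 # illustrative configuration energies

I_el, I_ph = np.eye(4), np.eye(N)
H = omega * np.kron(I_el, b.T @ b)
H += sum(e * np.kron(np.outer(v, v), I_ph) for e, v in zip(eps, (G, Xs, Ps, Ms)))
V = g * np.kron(np.outer(G, Xs), b.T)      # g |G><X_s| b^+
H -= V + V.T                               # Eq. (sjcm_config)

# Projector onto the charged configurations |+_s>, |-_s>:
P_charged = np.kron(np.outer(Ps, Ps) + np.outer(Ms, Ms), I_ph)
```

Since the coupling term only connects $\ket{G}$ and $\ket{X_s}$, the commutator of $H$ with the projector onto the charged configurations vanishes identically, i.e.,~$\ket{+_s}$ and $\ket{-_s}$ never take part in the Rabi oscillations.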
\paragraph*{In conclusion,} we have demonstrated that in the semiconductor JCM it is not the charge $C$ of the QD but the oscillation ability $O$ that determines the maximum of $g^{(2)}$. The constraint in configuration space introduced by the factorization scheme can lead to a misconception about the system's dynamics; in our case, it is the connection between the charge of the QD and its ability to perform Rabi oscillations. The effect of this constraint in configuration space is especially drastic in our case, since a connection between observables of the system is derived, $g^{(2)}_\mathrm{max}(0)=f(O,C)$, in contrast to a case where the dependence of an observable on an external parameter is derived, e.g.\ the input-output characteristics $\mean{b^{\dagger}b}=f(Pump_{\textnormal{ext}})$ of a laser.
One can avoid problems and misconceptions like this by describing the finite states of the carriers localized in the QD in the basis of its many-particle configurations, as we have demonstrated here. When a formulation in single-particle states is desirable, one should include all correlations between the localized single-particle states [App.~\ref{app:sjcm}], since correlations between single-particle states are strong in finite systems \cite{leymann_expectation_2014}. Many approaches to QD-(cavity) systems described in the literature either find a formulation that includes all possible many-particle configurations of the system \cite{ritter_emission_2010,richter_numerically_2015} or, if this is not possible, use hybrid factorization schemes related to the cluster expansion. These factorization schemes are hybrid approaches in the sense that the correlations between the carriers localized in a QD are fully included, while correlations between other system parts are treated approximately by factorization, e.g.\ correlations between different QDs \cite{leymann_sub-_2015,jahnke_giant_2016,foerster_computer-aided_2017}, between QD and delocalized wetting-layer states \cite{kuhn_hybrid_2015}, and between QD states and continuum states of the light field in free space \cite{florian_equation--motion_2013}.
\section{Open systems and the construction of the dissipator}
\label{sec:lt}
In the previous section, we have discussed misleading results that arise from an approximation scheme that truncates the hierarchy of EoM.
In this section we demonstrate that when open quantum systems are described in the Born-Markov approximation using the
vNL equation
\begin{align}
\mathrm{d}_t \rho = &i[\rho,H] + \sum_i \gamma_i \left(L_i \rho L_i^\dagger -\frac{1}{2} L_i^\dagger L_i \rho -\frac{1}{2} \rho L_i^\dagger L_i\right) \nonumber\\
=& i[\rho,H] + \mathcal{D} (\rho)\label{eq:lindblad}
\end{align}
it can make a significant difference whether the dissipator $\mathcal{D}$ is constructed in a single-particle or in a many-particle configuration basis.
We demonstrate that a misleading assumption can already be incorporated in the construction of the EoM, thus producing questionable results even if the basic EoM for $\rho$ is then solved without further approximations.
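For reference, the right-hand side of Eq.~(\ref{eq:lindblad}) can be written as a short generic routine (the function and variable names are ours, not part of any library); it is useful for checking that any dissipator constructed below preserves the trace and the Hermiticity of $\rho$:

```python
import numpy as np

def lindblad_rhs(rho, H, channels):
    """Right-hand side of the vNL equation, Eq. (lindblad):
    d_t rho = i[rho, H] + sum_i gamma_i (L_i rho L_i^+ - 1/2 {L_i^+ L_i, rho})."""
    drho = 1j * (rho @ H - H @ rho)
    for gamma, L in channels:
        LdL = L.conj().T @ L
        drho += gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return drho
```

For a Hermitian $H$ and any set of collapse operators $L_i$, the resulting $\mathrm{d}_t\rho$ is traceless and Hermitian, as required for a physical evolution of the density operator.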
\subsection{Hole capture}
\label{ssec:hole}
As a first introductory example, we consider the hole capture of a semiconductor QD. To model the hole capture from delocalized wetting layer states, the model illustrated in Fig.~\ref{fig:configs} is augmented by further localized states. For cylindrical QDs these states are the p-shell states, which are energetically higher than the s-shell states. Restricting the model to one spin direction and one state in the p-shell results in four single-particle states that can be occupied by up to four carriers. This model, consisting of 16 possible many-particle configurations (see Ref.~\cite{gies_3_2012} for details), is the basis for many models used to describe semiconductor QDs \cite{ritter_emission_2010,leymann_sub-_2015,jahnke_giant_2016}.
The excitation of the QDs is facilitated by electron and hole capture from the quasi-continuous wetting layer states into the p-shell.
To describe the hole capture in the single-particle basis, one uses a single collapse operator, $L = h_p^\dagger$ in Eq.~\eqref{eq:lindblad}, that creates a hole in the p-shell. Assigned to this process is a hole capture rate $\Gamma_h$. This formulation treats the hole capture in the p-shell independently of the occupation of the other states.
However, the carriers are captured due to phonon and Coulomb scattering of the delocalized wetting-layer carriers into the localized QD states, and the single-particle energies of the QD states are renormalized by the Coulomb interaction. Since the scattering rates depend on the energies of the final state, the hole capture rate of a positively charged QD is lower than that of a negatively charged QD \cite{steinhoff_treatment_2012}, as illustrated in Fig.~\ref{fig:capture}.
To model the hole capture in a way that takes different capture rates into account, one needs to construct a collapse operator for each transition between two many-particle configurations in which a hole is created, with rates depending on the configurations. Two exemplary transitions are illustrated in Fig.~\ref{fig:capture}, which create a hole in the p-shell, and correspond to the operators $L_1 = \ketbra{++}{+_s}$ and $L_2 = \ketbra{X_p}{-_p}$, with the rates $\Gamma_h^+ < \Gamma_h^-$, respectively.
This example illustrates two possible ways to construct a transition of a carrier, triggered by the environment, within the dissipator $\mathcal{D}$: (i) using single-particle creation and annihilation operators, resulting in a single collapse operator (e.g.,~$L = h_p^\dagger$ for the hole capture). (ii) using a set of different collapse operators formulated as transition operators between many-particle configurations (e.g.,~$L_1$ and $L_2$). This formulation allows for a direct distinction between different many-particle configurations.
Note that the dissipator in (i) can also be obtained using configuration operators, $L=h_p^\dagger = \sum_{ij}\ketbra{i}{j}$ (with $i,j$ chosen so that $L$ creates a p-shell hole). Accordingly, a combination of creation and annihilation operators can regain a distinction between the configurations as in (ii). However, these alternative ways would result in a rather clumsy notation.
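A minimal sketch of formulation (ii) for the hole capture, with illustrative rates obeying $\Gamma_h^+ < \Gamma_h^-$ (the numerical values are assumptions), shows how the configuration-dependent transition operators $L_1$ and $L_2$ enter the dissipator:

```python
import numpy as np

# Configuration-basis collapse operators for hole capture into the p-shell.
# State labels and rates are illustrative assumptions.
labels = ["+_s", "++", "-_p", "X_p"]
idx = {l: i for i, l in enumerate(labels)}

def proj(ket, bra):
    """Transition operator |ket><bra| in the configuration basis."""
    op = np.zeros((len(labels), len(labels)))
    op[idx[ket], idx[bra]] = 1.0
    return op

L1 = proj("++", "+_s")    # hole capture in a positively charged QD
L2 = proj("X_p", "-_p")   # hole capture in a negatively charged QD
Gamma_plus, Gamma_minus = 0.2, 0.5   # assumed rates, Gamma_h^+ < Gamma_h^-

def dissipator(rho, channels):
    """Lindblad dissipator D(rho) for a list of (rate, collapse operator) pairs."""
    D = np.zeros_like(rho, dtype=complex)
    for gamma, L in channels:
        LdL = L.T @ L
        D += gamma * (L @ rho @ L.T - 0.5 * (LdL @ rho + rho @ LdL))
    return D
```

For a diagonal $\rho$, the populations of $\ket{\!+\!+}$ and $\ket{X_p}$ grow with the charge-dependent rates $\Gamma_h^+$ and $\Gamma_h^-$, while the total trace is conserved.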
\paragraph*{The conclusion} of this introductory example is that the dissipator $\mathcal{D}$ can be constructed in two different ways and that it appears necessary to formulate the dissipator in the basis of many-particle configurations.
\begin{figure}[]
\centering
\includegraphics[width=0.47\textwidth]{Fig4_e_h_capture.pdf}
\caption{Illustration of the transitions from the many-particle configuration $\ket{+_s}$ to \mbox{$\ket{\!+\!+}$} and from $\ket{-_p}$ to $\ket{X_p}$, corresponding to the collapse operators $L_1$ and $L_2$, respectively. Both transitions result in the capture of a hole into the p-shell of the QD. The hole capture rate depends on the QD's charge ($\Gamma_h^+ < \Gamma_h^-$).}
\label{fig:capture}
\end{figure}
\subsection{Non-local dephasing}
\label{ssec:ssd}
In this example, we show that the two ways to construct the dissipator $\mathcal{D}$, described in Sec.~\ref{ssec:hole}, lead to different results even when the rates for the different collapse operators formulated in the many-particle basis are equal. We emphasize that the same set of operators is used in both constructions of $\mathcal{D}$ and that the only difference is how the operators enter the dissipator.
Such a situation arises for the vacuum Rabi oscillation of an electron-hole pair in the s-shell in resonance with a high-quality cavity mode, in the presence of the spontaneous decay of an electron-hole pair in the p-shell, as illustrated in Fig.~\ref{fig:decay}~(a). The basis states of the Hilbert space for this system are $\ket{n,i}$, where $n$ is the number of cavity photons and $i$ denotes the electronic configuration of the QD. With an initially empty cavity and a QD prepared in the biexciton state, four electronic configurations are coupled by the vNL equation: the ground-state configuration $\ket{G}$, the s-exciton $\ket{X_s}$, the p-exciton $\ket{X_p}$, and the biexciton $\ket{X\!X}$ configuration, as illustrated in Fig.~\ref{fig:all_states}.
\begin{figure}[]
\centering
\includegraphics[width=0.485\textwidth]{Fig5_dephasing_model_illustration.pdf}
\caption{(a): Illustration of the system dynamics in the single-particle basis. (b): Illustration of the transitions between the many-particle configurations revealing that there are two Rabi cycles, connected by the decay of the p-exciton.}
\label{fig:decay}
\end{figure}
The Jaynes-Cummings interaction Hamiltonian reads
\begin{align}
H_{JC} = &-\left(gh_s e_sb^\dagger + \mathrm{h.c.}\right) \nonumber\\
= &-\left(g\left(\ketbra{G}{X_s} + \ketbra{X_p}{X\!X}\right)b^\dagger + \mathrm{h.c.}\right)
\end{align}
in the single-particle and the configuration basis, respectively.
The dissipator generates the spontaneous loss of excitons in the p- and s-shell, with the rates $\Gamma$ and $\beta$, respectively. In contrast to the hole capture in Sec.~\ref{ssec:hole}, the decay rates in the p-shell are independent of the oscillatory state of the s-exciton. Unlike the Hamiltonian, which is independent of the formulation, the effect of the dissipator $\mathcal{D}$ depends on its formulation, since the collapse operators enter nonlinearly. In the single-particle basis, the loss of the p-shell exciton is generated by $L_{\textrm{sp}} = h_p e_p$ (formulation (i)). The same operator can be constructed by a sum of configuration operators
\begin{align}
L_{\textrm{sp}} = \ketbra{G}{X_p} + \ketbra{X_s}{X\!X},
\label{eq:Lsp}
\end{align}
which is still formulation (i). In the many-particle formulation, the spontaneous loss of p-shell excitons is generated by two collapse operators
\begin{align}
L_{G} = \ketbra{G}{X_p}\quad \textnormal{and}\quad L_{X} = \ketbra{X_s}{X\!X},
\label{eq:LGLX}
\end{align}
with equal rates $\gamma_{G} = \gamma_{X} = \Gamma$ (formulation (ii)). The same holds for the spontaneous exciton loss in the s-shell, with the loss rate $\beta$ and the collapse operators chosen accordingly.
\begin{figure}[]
\centering
\includegraphics[width=0.47\textwidth]{Fig6_all_states.pdf}
\caption{Illustration of the many-particle configurations $\ket{G}$, \mbox{$\ket{X_s}$}, $\ket{X_p}$, and $\ket{X\!X}$, which are the basis states for the QD-model exhibiting non-local dephasing.}
\label{fig:all_states}
\end{figure}
\paragraph*{In the single-particle basis,} the dynamics of the s- and the p-shell are decoupled, which can be seen in the EoM for the single-particle operator expectation values
\begin{align}
\begin{split}
\mathrm{d}_t \mean{e^\dagger_p e_p} &= -\Gamma \mean{e^\dagger_p e_p},\\
\mathrm{d}_t \mean{e^\dagger_s e_s} &= -\beta \mean{e^\dagger_s e_s} +\underbrace{2g\psi}_{\mathrm{Rabi}^+},\\
\mathrm{d}_t \psi &= -\beta\psi +\underbrace{g \left(\mean{b^\dagger b } - \mean{e_s^\dagger e_s}\right)}_{\mathrm{Rabi}^-},\\
\mathrm{d}_t \mean{b^\dagger b }&=\hspace{12ex}-\underbrace{2g\psi}_{\mathrm{Rabi}^+},
\end{split}\label{eq:rabi}
\end{align}
with $\psi$ being the imaginary part of the photon-assisted polarization ($\psi = \operatorname{Im}{\mean{h_se_sb^\dagger}}$) and $\mathrm{Rabi}^{\pm}$ marking the terms responsible for the Rabi oscillations. The p-shell occupation decays exponentially with rate $\Gamma$, the s-shell occupation oscillates with the vacuum Rabi frequency $2g$ and decays with rate $\beta$, and the polarization is subject to the dephasing introduced by the spontaneous losses $\beta$ in the s-shell. Fig.~\ref{fig:decay}~(a) illustrates the dynamics of the single-particle occupations of the system.
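The closed linear system of Eqs.~(\ref{eq:rabi}) can be integrated directly. The following sketch (a plain fourth-order Runge-Kutta stepper; parameters follow Fig.~\ref{fig:dephasing} in units of $2g=1$) confirms that in this formulation the s-shell dynamics is completely independent of the p-shell decay rate $\Gamma$:

```python
import numpy as np

# Sketch: integrate the single-particle EoM, Eq. (rabi).
# Rates in units of the Rabi frequency 2g = 1 (beta = 0.25, Gamma = 0.3).
g, beta = 0.5, 0.25

def eom_matrix(Gamma):
    # x = (<e_p^+ e_p>, <e_s^+ e_s>, psi, <b^+ b>)
    return np.array([[-Gamma,  0.0,   0.0,  0.0],
                     [ 0.0,   -beta,  2*g,  0.0],
                     [ 0.0,   -g,    -beta, g  ],
                     [ 0.0,    0.0,  -2*g,  0.0]])

def rk4(A, x, dt, steps):
    # classical fourth-order Runge-Kutta for the linear system d_t x = A x
    for _ in range(steps):
        k1 = A @ x
        k2 = A @ (x + 0.5 * dt * k1)
        k3 = A @ (x + 0.5 * dt * k2)
        k4 = A @ (x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = np.array([1.0, 1.0, 0.0, 0.0])   # biexciton prepared, empty cavity
```

The p-shell occupation reproduces the exponential decay $e^{-\Gamma t}$, while $\mean{e_s^\dagger e_s}$, $\psi$, and $\mean{b^\dagger b}$ do not change at all when $\Gamma$ is varied.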
\paragraph*{In the configuration basis,} the quantities required to formulate the EoM are the occupations of the basis states ($X\!X^n$, $X^n_p$, $X^n_s$, $G^n$) with photon number $n$, e.g.,~$G^n=\mean{\ketbra{n,G}{n,G}}$, and the photon-assisted polarizations between bi- and p-exciton $\psi_X^n = \operatorname{Im}(\mean{\ketbra{n,X\!X}{n+1,X_p}})$ and between s-exciton and ground state $\psi_s^n = \operatorname{Im}(\mean{\ketbra{n,X_s}{n+1,G}})$. Since we start with an empty cavity, the EoM are restricted to the first photon block $(n=0,1)$ and read
\begin{align}
\begin{split}
\mathrm{d}_t X\!X^0&= -(\Gamma +\beta)X\!X^0 +2g\psi^0_X ,\\
\mathrm{d}_t \psi^0_X&= -g X\!X^0 -(\Gamma+\nicefrac{\beta}{2})\psi^0_X + g X^1_p,\\
\mathrm{d}_t X^1_p&= -2g \psi^0_X -\Gamma X^1_p ,\\
\mathrm{d}_t X^0_s&= \Gamma X\!X^0 -\beta X^0_s +2g \psi^0_s,\\
\mathrm{d}_t \psi^0_s&= \{\Gamma,0\}\psi^0_X -g X^0_s - \nicefrac{\beta}{2}\psi^0_s + g G^1,\\
\mathrm{d}_t G^1&= \Gamma X^1_p -2g\psi^0_s,\\
\mathrm{d}_t X^0_p&= \beta X\!X^0 -\Gamma X^0_p,\\
\mathrm{d}_t G^0&= \beta X^0_s +\Gamma X^0_p.\\
\end{split}\label{eq:diffconfig}
\end{align}
The curly brackets $\{\Gamma,0\}$ in the fifth line of Eqs.~\eqref{eq:diffconfig} mark the difference between the single-particle \mbox{(i:~$\Gamma$)} and the configuration basis (ii:~$0$) in the EoM, which we will discuss in more detail below. For the further discussion it is convenient to formulate the EoM in matrix form $\mathrm{d}_t r = Mr$, where the column vector $r=\left( X\!X^0,\dots,G^0\right)^T$ contains the dynamical quantities as listed in Eqs.~\eqref{eq:diffconfig} (for the initial state $r_0=(1,0,\dots,0)^T$) and the parameter matrix $M$ reads
\begin{align}
M= \left(\begin{array}{ccc|ccc|cc}
-\Gamma -\beta & 2g & \sz & \\
-g & -\Gamma-\nicefrac{\beta}{2} & g & & & \multicolumn{1}{c}{\bigzero} \\
\sz & -2g & -\Gamma & \\\hline
\Gamma & \sz & \sz & -\beta & 2g & \sz & \sz & \sz \\
\sz & \{\Gamma,0\} & \sz & -g & - \nicefrac{\beta}{2} & g & \sz & \sz \\
\sz & \sz & \Gamma & \sz & -2g & \sz & \sz & \sz \\\hline
\beta & \sz & \sz & \sz & \sz & \sz & -\Gamma & \sz \\
\sz & \sz & \sz & \beta & \sz & \sz & \Gamma & \sz \\
\end{array}\right). \label{eq:diffmatrix}
\end{align}
The matrix $M$ can be separated into eight blocks, indicated by the lines in Eq.~(\ref{eq:diffmatrix}). We refer to these blocks row by row. Block I describes the Rabi oscillations with frequency $2g$ on its off-diagonal elements and the decay of excitation and the dephasing of the polarization on its diagonal elements. The same holds for block IV, and the off-diagonal elements of these blocks correspond to the terms $\mathrm{Rabi^\pm}$ in Eq.~(\ref{eq:rabi}). Since there is no pumping in this system, block II, which transports occupation from lower to higher electronic states, is zero. Block III together with the diagonal elements of block I describes the transfer of population from the part of the system with a p-shell exciton to the one without. The occupation that is lost due to the negative sign of the $\Gamma$s in block I is transferred to occupations without a p-shell exciton by the positive $\Gamma$s of block III. The entry in curly brackets $M_{\psi_s,\psi_X}=\{\Gamma, 0\}$ reflects the difference between the construction of the dissipator in the single-particle basis (i) ($M_{\psi_s,\psi_X}=\Gamma$) and in the configuration basis (ii) ($M_{\psi_s,\psi_X}=0$). In the single-particle basis, the photon-assisted polarization is transferred from the $X\!X$ - $X_p$ oscillation to the $X_s$ - $G$ oscillation. Therefore, only the s-exciton decay with rate $\beta$ and not the p-exciton decay with rate $\Gamma$ contributes to the dephasing of $\psi$ in Eq.~(\ref{eq:rabi}). The loss of polarization $\psi^0_X$ with rate $\Gamma$ is transferred to $\psi^0_s$ with exactly the same rate. By contrast, in the many-particle basis the polarization $\psi^0_X$, which is lost by the decay of the p-shell exciton, is not picked up by the polarization $\psi^0_s$; thus the element $M_{\psi_s,\psi_X}$ is zero. The remaining blocks can be interpreted analogously, by associating pairs of positive and negative entries in the same column with a transfer of occupation.
The transitions between the states are illustrated in Fig.~\ref{fig:decay} (b).
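The matrix form $\mathrm{d}_t r = Mr$ lends itself to a direct numerical comparison of the two constructions. The sketch below (same Runge-Kutta integration and parameters as in the single-particle sketch above, in units of $2g=1$) propagates $r_0$ for both choices of the entry $M_{\psi_s,\psi_X}$ and confirms that both variants conserve the trace of $\rho$ while producing different occupations:

```python
import numpy as np

# Sketch: integrate d_t r = M r, Eqs. (diffconfig)/(diffmatrix), for both
# constructions. r = (XX0, psiX0, Xp1, Xs0, psis0, G1, Xp0, G0).
g, Gamma, beta = 0.5, 0.3, 0.25

def M(c):
    # c = Gamma for construction (i), c = 0.0 for (ii): the entry M_{psi_s,psi_X}
    return np.array([
        [-(Gamma+beta), 2*g,             0.0,    0.0,   0.0,     0.0, 0.0,    0.0],
        [-g,           -(Gamma+beta/2),  g,      0.0,   0.0,     0.0, 0.0,    0.0],
        [0.0,          -2*g,            -Gamma,  0.0,   0.0,     0.0, 0.0,    0.0],
        [Gamma,         0.0,             0.0,   -beta,  2*g,     0.0, 0.0,    0.0],
        [0.0,           c,               0.0,   -g,    -beta/2,  g,   0.0,    0.0],
        [0.0,           0.0,             Gamma,  0.0,  -2*g,     0.0, 0.0,    0.0],
        [beta,          0.0,             0.0,    0.0,   0.0,     0.0, -Gamma, 0.0],
        [0.0,           0.0,             0.0,    beta,  0.0,     0.0, Gamma,  0.0]])

def rk4(A, r, dt, steps):
    for _ in range(steps):
        k1 = A @ r
        k2 = A @ (r + 0.5 * dt * k1)
        k3 = A @ (r + 0.5 * dt * k2)
        k4 = A @ (r + dt * k3)
        r = r + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return r

r0 = np.zeros(8)
r0[0] = 1.0                           # initial biexciton, empty cavity
occ = [0, 2, 3, 5, 6, 7]              # occupation entries (trace of rho)
r_sp = rk4(M(Gamma), r0, 0.005, 1000) # single-particle construction (i), t = 5
r_c  = rk4(M(0.0),   r0, 0.005, 1000) # configuration construction (ii), t = 5
```

Both generators are trace preserving, yet the occupations deviate visibly after a few Rabi cycles, in agreement with Fig.~\ref{fig:dephasing}.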
\paragraph*{The solutions of the numerical integration} of Eq.~(\ref{eq:diffconfig}) are shown in Fig.~\ref{fig:dephasing}. Panels (a) and (b) depict a generic case for the time evolution of the system, for the configuration probabilities in (a) and the single-particle occupations in (b). Here the deviations between the two approaches are visible, but one might overlook or dismiss them as irrelevant.
The results for the occupation of the s-exciton state $\ket{X_s}$ and the s-shell electron $\mean{e^\dagger_s e_s}$ depend on the construction of the dissipator. The results obtained in the single-particle basis are labeled by the subscript `sp'.
The initially prepared biexciton (panel (a), shaded area) oscillates with the Rabi frequency $2g$ and decays with the rate $\Gamma +\beta$. The s-exciton occupation increases with rate $\Gamma$, oscillates with the Rabi frequency, and decays with the rate $\beta$. This behavior holds both for the construction of the dissipator in the many-particle configurations and in the single-particle basis. The two curves differ in that, in the single-particle basis, the oscillations have a larger amplitude than in the many-particle basis, and the ground state is fully occupied within each Rabi cycle. An alternative representation of the dynamics is given in panel (b), in which the single-particle expectation values for the electrons $\langle e^\dagger_ie_i\rangle$ are shown. The p-electron decays with rate $\Gamma$ in both formulations of the dissipator, whereas the oscillation of the s-shell electron depends on the formulation of the dissipator.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Fig7_comparison_single_config.pdf}
\caption{Dynamics of the s-shell Rabi oscillations and spontaneous p- and s-shell decay obtained with the dissipator constructed in the single-particle basis (i) (`sp', red dashed line) and in the many-particle basis (ii) (black line). (a): Occupation probability of the s-exciton $X_s$; the shaded area marks the biexciton occupation $X\!X$. (b, c): s-shell electron occupation $\mean{e^\dagger_se_s}$; the shaded area marks the p-shell electron occupation $\mean{e^\dagger_pe_p}$. The decay rates are $\beta=0.25$ and $\Gamma = 0.3$, measured in units of the Rabi frequency $2g$, for panels (a) and (b); in panel (c) the rate $\beta=0$.}
\label{fig:dephasing}
\end{figure}
To emphasize the characteristic difference between the two constructions of the dissipator, we consider the limiting case of vanishing s-shell decay ($\beta = 0$), where the deviations are not blurred by an additional dephasing mechanism, with $\langle e^\dagger_ie_i\rangle$ shown in panel (c). In the single-particle basis, the s-shell performs Rabi oscillations with a constant amplitude of \nicefrac{1}{2}, while the p-shell exciton decays exponentially. In the many-particle basis, the p-shell electron decay dephases the Rabi oscillations in the s-shell, resulting in a diminished amplitude in the long-term behavior. Due to the construction of the dissipator in the non-local basis of the many-particle configurations, the dissipation in one system part induces non-local dephasing in an otherwise independent system part.
In many cases, e.g.~in cw lasers, the long-term behavior or the steady state of the system is of interest. The simple form of the EoM in the case of vanishing $\beta$ allows us to derive analytic expressions for the dependence of the amplitude of the Rabi oscillations on the rate $\Gamma$ (see App.~\ref{app:deph} for details). In the long-term limit the amplitude of the Rabi oscillations $A$ can be expressed as
\begin{align}
A|_{t\gg \frac{1}{\Gamma}} = \frac{1}{2}\frac{\sqrt{(\tilde{\Gamma}^2+2)^2 + \tilde{\Gamma}^2}}{(\tilde{\Gamma}^2+4)},\quad \tilde{\Gamma} = \frac{\Gamma}{2g}.
\end{align}
Note the peculiar result that the long-term effect of the non-local dephasing is strongest when its rate is the smallest, since in this case the Rabi oscillations are exposed to the dephasing for the longest time. As can be seen in Fig.~\ref{fig:amplitude}, the minimal amplitude is $\nicefrac{1}{4}$ for almost vanishing but nonzero decay rates $\tilde{\Gamma}$. In the opposite case of an immediate p-exciton decay, the amplitude remains at its maximum value of $\nicefrac{1}{2}$, since no polarization could build up to be dephased.
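The limiting values of the amplitude can be read off the analytic expression directly, as the following short sketch illustrates:

```python
import numpy as np

def amplitude(Gamma_tilde):
    """Long-term Rabi amplitude A for beta = 0, with Gamma_tilde = Gamma/(2g)."""
    Gt2 = Gamma_tilde ** 2
    return 0.5 * np.sqrt((Gt2 + 2.0) ** 2 + Gt2) / (Gt2 + 4.0)
```

The function reproduces the limits discussed above: $A\to\nicefrac{1}{4}$ for $\tilde{\Gamma}\to 0$ and $A\to\nicefrac{1}{2}$ for $\tilde{\Gamma}\to\infty$, with intermediate values in between.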
\begin{figure}[]
\includegraphics[width=0.48\textwidth]{Fig8_amplitude.pdf}
\caption{Asymptotic effect of the non-local dephasing on the amplitude of the oscillation of $\mean{X_s}$ as a function of the scaled decay rate $\tilde{\Gamma}$.}
\label{fig:amplitude}
\end{figure}
Going beyond this minimal example, one can further increase the non-local dephasing effect by exploiting the same mechanism discussed above. Adding an additional pump process to the dissipator $\mathcal{D}$, compensating the p-shell loss, ties the s-exciton permanently to the dephasing influence of the p-shell. In this case the Rabi oscillations in the s-shell would completely vanish when the dissipator is constructed in the configuration basis (ii), whereas when the dissipator is constructed in the single-particle basis (i), the Rabi oscillations in the s-shell would again not be affected at all by the p-shell (see App.~\ref{app:pump}).
The problematic conclusion of this section is that the outcome of the EoM depends crucially on the choice of basis states for constructing the dissipator. In the next section we will see how this problem can be resolved and that, in contrast to our first example in Sec.~\ref{sec:issue}, the non-local dephasing effect is not an artifact of an approximation error.
\subsection{System plus reservoir approach}
\label{ssec.sytempres}
The discrepancies between the results when the dissipator $\mathcal{D}$ is constructed in either the single-particle (i) or the configuration basis (ii) originate from deviating approximations and assumptions about the system-reservoir interaction, already built into the construction of the dissipator $\mathcal{D}$ itself. To see where the crucial assumptions deviate, we discuss in this section how the dissipator describing the decay of a p-shell exciton in Sec.~\ref{ssec:ssd} can be derived from a system-plus-reservoir approach.
Starting from the von Neumann equation $\mathrm{d}_t \chi=i[\chi,H]$ for the full density operator $\chi$ describing the QD-cavity-mode system and a reservoir of non-confined modes, we derive the EoM for the reduced density operator $\rho=\tr[\mathcal{R}]{\chi}$ in the Born-Markov approximation \cite{carmichael_dissipation_1999}. To this end we divide the Hilbert space $\mathcal{H}$ into a reservoir part $\mathcal{H}_{\mathcal{R}}$ consisting of the non-confined modes and a system part $\mathcal{H}_{S}=\mathcal{H}_{QD}\otimes\mathcal{H}_{C}$ consisting of the QD and the confined cavity mode. The QD Hilbert space itself consists of the s- and p-shell subspaces $\mathcal{H}_{QD}=\mathcal{H}_{s}\otimes\mathcal{H}_{p}$. After recapitulating how one can derive the general EoM for $\rho$, where we essentially follow the approach from Ref.~\cite{carmichael_dissipation_1999}, we compare the obtained EoM (formulated in the single-particle and in the configuration basis) to the EoM for $\rho$ used in the previous section.
Assuming a reservoir of harmonic modes with frequency $\omega_k$ that are annihilated(created) by $ r_k^{(\dagger)}$, we can formulate the reservoir Hamiltonian $H_{\mathcal{R}}$ and the system-reservoir interaction Hamiltonian $H_{\mathcal{S}\Leftrightarrow \mathcal{R}}$ as
\begin{align}
H_{\mathcal{R}} =&\sum_{k}\omega_k r_k^{\dagger}r_k,\\
H_{\mathcal{S}\Leftrightarrow \mathcal{R}}=& \sum_{j}\left(\sum_{k}\kappa^{j}_{k} r_k^{\dagger}L_{j}+\sum_{k}\kappa^{j*}_{k} r_k L^{\dagger}_{j}\right)\nonumber\\
=&\sum_j\left(R^{\dagger}_j L_j+R_j L^{\dagger}_j\right)
\label{eq:systemreshamilt}
\end{align}
respectively. In $H_{\mathcal{S}\Leftrightarrow \mathcal{R}}$ the sum over all reservoir modes is summarized in the reservoir operators $R_j$ coupling to the system operators $L_j$ in full rotating-wave approximation \cite{carmichael_dissipation_1999,nakatani_quantum_2010}. The operators $L_j$ will be chosen as $L_{\textrm{sp}}$ according to Eq.~\eqref{eq:Lsp} in the single-particle basis (i) and as $L_{G}$ and $L_{X}$ according to Eq.~\eqref{eq:LGLX} in the configuration-basis formulation (ii).
In the Born approximation the full density operator $\chi(t)$ factorizes as $\chi(t)=\rho(t)\otimes \rho^T_\mathcal{R}$, where $\rho^T_\mathcal{R}$ is the reservoir density operator in thermal equilibrium. We trace over the reservoir $\mathcal{R}$ and reformulate the von Neumann equation in the interaction picture for $\chi(t)=\rho(t)\otimes \rho^T_\mathcal{R}$ as an integro-differential equation
\begin{align*}
&\mathrm{d}_t\rho(t)=\int_0^t dt^\prime\tr[\mathcal{R}]{[H_{\mathcal{S}\Leftrightarrow \mathcal{R}}(t),[\rho(t^\prime) \rho^T_\mathcal{R},H_{\mathcal{S}\Leftrightarrow \mathcal{R}}(t^\prime)]]},
\end{align*}
describing the dissipative influence of the reservoir $\mathcal{R}$ on the reduced density operator $\rho$. Now we insert the general Hamiltonian from Eq.~\eqref{eq:systemreshamilt}, evaluate the commutators, and collect all reservoir operators in the reservoir correlations $\tr[\mathcal{R}]{\bullet\,\rho^T_\mathcal{R}}=\mean{\bullet}_\mathcal{R}$. When the reservoir occupations can be neglected, the only contributing reservoir correlations are $\langle R_j(t^\prime)R^{\dagger}_i(t)\rangle _\mathcal{R}$ and the EoM for the reduced density operator reads
\begin{align*}
&\mathrm{d}_t\rho(t)=\sum_{i,j}\int_0^tdt^\prime\mean{R_j(t^\prime)R^{\dagger}_i(t)}_\mathcal{R}\Big\lbrace L_i(t)\rho(t^\prime) L^{\dagger}_j(t^\prime)\\
&-L^{\dagger}_j(t^\prime)L_i(t)\rho(t^\prime)+L_i(t)\rho(t^\prime) L^{\dagger}_j(t^\prime) -\rho(t^\prime) L^{\dagger}_j(t^\prime)L_i(t)\Big\rbrace .
\end{align*}
When the time scales of the reservoir and the system can be separated we can apply the Markov approximation, which corresponds to
\begin{align}
\label{eq:rescorrelations}
\mean{R_j(t^\prime)R^{\dagger}_i(t)}_\mathcal{R}=&\sum_{kl}\delta_{kl}\kappa^{i\ast}_l\kappa^{j}_ke^{i\omega_k t^\prime}e^{-i\omega_l t}\\
=&\sum_k\kappa^{i\ast}_k\kappa^{j}_ke^{-i\omega_k (t-t^\prime)}\approx\gamma_{ji}\delta(t-t^\prime),\nonumber
\end{align}
and we obtain
\begin{align}
&\mathrm{d}_t\rho=\widetilde{\mathcal{D}}(\rho)=\sum_{i,j}\gamma_{ji}\Big\lbrace2L_i\rho L^{\dagger}_j-L^{\dagger}_jL_i\rho-\rho L^{\dagger}_jL_i\Big\rbrace.
\label{eq:dissipatorsystempbath}
\end{align}
Here the dissipator $\widetilde{\mathcal{D}}$ has a more general non-diagonal form \cite{breuer_theory_2002} in the collapse operators $L_i$ and rates $\gamma_{ij}$, in contrast to the dissipator $\mathcal{D}$ in Eq.~\eqref{eq:lindblad} used in Sec.~\ref{ssec:ssd}.
This non-diagonal dissipator appears in many systems, e.g.,~in open resonators the non-diagonal form of the dissipator induces correlations between different photon modes \cite{hackenbroich_field_2002,hackenbroich_quantum_2003,eremeev_quantum_2011,fanaei_effect_2016}.
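Independently of the rate matrix $\gamma_{ji}$, the non-diagonal dissipator of Eq.~(\ref{eq:dissipatorsystempbath}) is trace preserving, which can be verified with a short sketch (the operator dimensions and the random test data are arbitrary):

```python
import numpy as np

def nondiag_dissipator(rho, gamma, ops):
    """Non-diagonal dissipator of Eq. (dissipatorsystempbath):
    D(rho) = sum_ij gamma[j,i] (2 L_i rho L_j^+ - L_j^+ L_i rho - rho L_j^+ L_i)."""
    D = np.zeros_like(rho, dtype=complex)
    for i, Li in enumerate(ops):
        for j, Lj in enumerate(ops):
            LdL = Lj.conj().T @ Li
            D += gamma[j, i] * (2 * Li @ rho @ Lj.conj().T - LdL @ rho - rho @ LdL)
    return D
```

Each $(i,j)$ term is traceless on its own, since $\operatorname{Tr}(L_i\rho L_j^\dagger)=\operatorname{Tr}(L_j^\dagger L_i\rho)$, so the trace of $\rho$ is conserved for any choice of $\gamma_{ji}$.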
We now use the non-diagonal dissipator $\widetilde{\mathcal{D}}$ from Eq.~\eqref{eq:dissipatorsystempbath} and insert the system operators $L_j$ from the system-reservoir interaction Hamiltonian formulated in the single-particle basis and in the configuration basis.
\paragraph*{In the single-particle basis} the system-reservoir interaction Hamiltonian reads
\begin{align*}
H^{\mathrm{sp}}_{\mathcal{S}\Leftrightarrow \mathcal{R}}=&\sum_{k}\kappa_{k} r_k^{\dagger}h_p e_p+\sum_{k}\kappa_{k} r_k e^{\dagger}_ph^{\dagger}_p\\
=&R^{\dagger}_{sp} L_{sp}+R_{sp} L^{\dagger}_{sp}\\
=&\sum_{j=sp}\left(R^{\dagger}_j L_j+R_j L^{\dagger}_j\right)
\end{align*}
with $\kappa_{k}$ being the coupling strength of reservoir mode $k$ to the p-exciton. This Hamiltonian leads to the dissipator
\begin{align}
\widetilde{\mathcal{D}}_{\mathrm{sp}}(\rho)=&\Gamma\Big\lbrace2L_{sp}\rho L^{\dagger}_{sp}-L^{\dagger}_{sp}L_{sp}\rho-\rho L^{\dagger}_{sp}L_{sp}\Big\rbrace,
\label{eq:spsystempbath}
\end{align}
where we have identified the only appearing rate $\gamma_{\textrm{sp}\textrm{sp}}$ with the rate $\Gamma$ from the previous section. Equation~\eqref{eq:spsystempbath} is identical to the dissipative part of the EoM~\eqref{eq:lindblad} used in Sec.~\ref{ssec:ssd} in single-particle formulation (i) with $L_j=L_{\mathrm{sp}}$.
Using Eq.~\eqref{eq:Lsp} for $L_{sp}$ and $L^{(\dag)}_{G}L^{(\dag)}_{X}=0$ we can reformulate Eq.~\eqref{eq:spsystempbath} to
\begin{align}
\begin{split}
\widetilde{\mathcal{D}}_{\mathrm{sp}}(\rho)&=\Gamma\Big\lbrace2L_{G}\rho L^{\dagger}_{G}-L^{\dagger}_{G}L_{G}\rho-\rho L^{\dagger}_{G}L_{G}\Big\rbrace\\
&+\Gamma\Big\lbrace2L_{X}\rho L^{\dagger}_{X}-L^{\dagger}_{X}L_{X}\rho-\rho L^{\dagger}_{X}L_{X}\Big\rbrace\\
&+2\Gamma L_{G}\rho L^{\dagger}_{X}+2\Gamma L_{X}\rho L^{\dagger}_{G},
\end{split}\label{eq:spsystempbathinconf}
\end{align}
which corresponds to the $\Gamma$ dependent part of Eqs.~\eqref{eq:diffconfig} and \eqref{eq:diffmatrix} with $M_{\psi_s,\psi_X}=\Gamma$.
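This reformulation is easy to verify numerically. The sketch below builds toy $4\times 4$ matrices for $L_G$ and $L_X$ in an illustrative basis ordering $|G,p\rangle, |G,0\rangle, |X,p\rangle, |X,0\rangle$ of our own choosing (not taken from the model above), which satisfies $L^{(\dag)}_{G}L^{(\dag)}_{X}=0$ and $L_{sp}=L_G+L_X$, and checks that the dissipator of Eq.~\eqref{eq:spsystempbath} agrees with the expanded form of Eq.~\eqref{eq:spsystempbathinconf} on a random Hermitian test matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-level space ordered as |G,p>, |G,0>, |X,p>, |X,0> (illustrative choice).
# Hypothetical collapse operators: p-exciton loss conditioned on the s-shell
# state, chosen so that all products L_G^(dag) L_X^(dag) vanish.
L_G = np.zeros((4, 4)); L_G[1, 0] = 1.0   # |G,0><G,p|
L_X = np.zeros((4, 4)); L_X[3, 2] = 1.0   # |X,0><X,p|
L_sp = L_G + L_X
Gamma = 0.5

def lindblad(L1, L2, rho):
    """Non-diagonal Lindblad term 2 L1 rho L2^dag - L2^dag L1 rho - rho L2^dag L1."""
    return 2 * L1 @ rho @ L2.conj().T - L2.conj().T @ L1 @ rho - rho @ L2.conj().T @ L1

rho = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = rho + rho.conj().T  # Hermitian test state

# Left-hand side: single collapse operator L_sp with rate Gamma.
lhs = Gamma * lindblad(L_sp, L_sp, rho)
# Right-hand side: diagonal G and X terms plus the two cross "sandwich" terms.
rhs = (Gamma * lindblad(L_G, L_G, rho)
       + Gamma * lindblad(L_X, L_X, rho)
       + 2 * Gamma * L_G @ rho @ L_X.conj().T
       + 2 * Gamma * L_X @ rho @ L_G.conj().T)

assert np.allclose(lhs, rhs)
```

The cross anticommutator terms $L^{\dagger}_G L_X\rho+\rho L^{\dagger}_X L_G$ drop out precisely because the mixed operator products vanish, which the matrices above realize.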
\paragraph*{In the configuration basis} the system-reservoir interaction Hamiltonian reads
\begin{align*}
\begin{split}
H^{\mathrm{C}}_{\mathcal{S}\Leftrightarrow \mathcal{R}}=&\sum_{k}\kappa^{G}_{k} r_k^{\dagger}L_{G}+\sum_{k}\kappa^{X}_{k} r_k^{\dagger}L_{X}\\
&+\sum_{k}\kappa^{G*}_{k} r_k L_{G}^{\dagger}+\sum_{k}\kappa^{X*}_{k} r_k L_{X}^{\dagger}\\
=&R^{\dagger}_{G} L_{G}+R^{\dagger}_{X} L_{X}+R_{G} L^{\dagger}_{G}+R_{X} L^{\dagger}_{X}\\
=&\sum_{j=G,X}\left(R^{\dagger}_j L_j+R_j L^{\dagger}_j\right)
\end{split}
\end{align*}
where we have allowed the dipole-matrix elements $\kappa^{j}_{k}$ to depend on the s-exciton state $j=G,X$, which would not be possible in the single-particle basis. By inserting the system operators $L_j$ into Eq.~\eqref{eq:dissipatorsystempbath} we obtain
\begin{align*}
\begin{split}
\widetilde{\mathcal{D}}_{\mathrm{C}}(\rho)&=\gamma^{C}_{GG}\Big\lbrace2L_{G}\rho L^{\dagger}_{G}-L^{\dagger}_{G}L_{G}\rho-\rho L^{\dagger}_{G}L_{G}\Big\rbrace\\
&+\gamma^{C}_{XX}\Big\lbrace2L_{X}\rho L^{\dagger}_{X}-L^{\dagger}_{X}L_{X}\rho-\rho L^{\dagger}_{X}L_{X}\Big\rbrace\\
&+2\gamma^{C}_{XG}L_{G}\rho L^{\dagger}_{X}+2\gamma^{C}_{GX}L_{X}\rho L^{\dagger}_{G}.
\end{split}
\end{align*}
This dissipator $\widetilde{\mathcal{D}}_{\mathrm{C}}$ is in general not in agreement with the diagonal dissipator $\mathcal{D}$ from Eq.~\eqref{eq:lindblad}.
For rates $\gamma^{C}_{ij}=\Gamma$ the dissipator $\widetilde{\mathcal{D}}_{\mathrm{C}}$ agrees with the dissipator constructed in the single-particle basis $\widetilde{\mathcal{D}}_{\mathrm{sp}}$ in Eq.~\eqref{eq:spsystempbathinconf}.
If we assume the system-reservoir coupling strength to be independent of the s-shell exciton, $\kappa^{G}_{k}=\kappa^{X}_{k}=\kappa_{k}$, we obtain $\gamma^{C}_{ij}=\Gamma$ and thus $\widetilde{\mathcal{D}}_{\mathrm{C}}=\widetilde{\mathcal{D}}_{\mathrm{sp}}$. In fact, in this case the system-reservoir interaction Hamiltonians are identical, with $H^{\mathrm{C}}_{\mathcal{S}\Leftrightarrow \mathcal{R}}=\mathbb{1}_s\otimes H^{\mathrm{sp}}_{\mathcal{S}\Leftrightarrow \mathcal{R}}$, where $\mathbb{1}_s$ is the identity operator in $\mathcal{H}_s$. This resolves the problematic conclusion from Sec.~\ref{ssec:ssd}, and we see that starting from the system-reservoir interaction Hamiltonian leads to a dissipator that is in general non-diagonal and independent of the choice of basis states \footnote{An analogous approach, based on a system-reservoir interaction Hamiltonian, is also advisable for describing the hole capture from our introductory example in Sec.~\ref{ssec:hole}.}.
When we use the diagonal form of the dissipator ad hoc, as done in Eq.~\eqref{eq:lindblad}, we implicitly make strong assumptions about the reservoir, namely that the reservoir correlations result in the rates
\begin{align}
\gamma^{C}_{GG}=\gamma^{C}_{XX}=\Gamma \quad \textnormal{and}\quad\gamma^{C}_{XG}=\gamma^{C}_{GX}=0.
\label{eq:sprratesforspooky}
\end{align}
Nevertheless, from a formal point of view it is possible to construct a reservoir Hamiltonian that leads to the rates in Eq.~\eqref{eq:sprratesforspooky} and thus to the described non-local dephasing effect. To this end, however, it is necessary that the coupling strengths $\kappa_k^j$ depend on the s-exciton state, so that the system-reservoir Hamiltonian interacts non-locally with the QD \cite{schirmer_stabilizing_2010,may_exciton_2003}, as illustrated in Fig.~\ref{fig:localvsnonlocalres}.
\begin{figure}[]
\centering
\includegraphics[width=0.48\textwidth]{Fig9_local_vs_nonlocal_res.pdf}
\caption{Illustration of the different reservoir couplings. In the left figure (a), the reservoir coupling elements $\kappa$ are independent of the state of the s-exciton. The interaction Hamiltonian $H^{\mathrm{sp}}_{\mathcal{S}\Leftrightarrow \mathcal{R}}$ operates in $\mathcal{H}_p\otimes\mathcal{H}_\mathcal{R}$; thus the reservoir interacts only with a single localized state. In the right figure (b), the reservoir coupling elements $\kappa^j$ depend on the state of the s-exciton, thus $H^{\mathrm{C}}_{\mathcal{S}\Leftrightarrow \mathcal{R}}$ operates in $\mathcal{H}_s\otimes\mathcal{H}_p\otimes\mathcal{H}_\mathcal{R}$ and the p-exciton loss is connected to a non-local measurement of the s- and p-exciton state corresponding to $L_{G/X}$.}
\label{fig:localvsnonlocalres}
\end{figure}
\section{Conclusion}
\label{sec:last}
We have shown how the choice of basis states can change the dynamics of a system if an approximation is involved in the calculation. In our first example, the appearance
of the equations, formulated in a single-particle basis, suggested a factorization scheme that created an artificial dependence between two actually independent quantities.
We have analyzed this dependence in terms of the system's many-particle basis states, in which the relations between the quantities can be seen directly.
In the second part, we have investigated an open system treated in the Born-Markov approximation, where the reservoir influence is modeled by a dissipator in Lindblad form.
We have shown that the way in which an identical set of collapse operators enters the dissipator has a profound influence on the system's dynamics. The construction of the dissipator determines
whether the Rabi oscillations of the s-shell exciton are non-locally dephased by the decay of the p-shell exciton.
The problem of formulation-dependent dynamics has been resolved by taking the system-reservoir interaction Hamiltonian into account.
Starting from the full Hamiltonian and evaluating the reservoir correlation functions, we have shown that in both formulations the s-shell Rabi
oscillations are independent of the p-shell decay. However, we have also shown that the non-locally dephased s-shell oscillations can actually occur when the system-reservoir interaction Hamiltonian depends on the whole QD state. In contrast to the first
example, the misconception in the second part arises not from an inappropriate approximation scheme, but from the notion that two differently constructed dissipators would
describe the same physical situation.
\section{Acknowledgments}
We thank T.~Pistorius for a stimulating discussion that led to our last example. We would also like to thank the two anonymous referees who provided us with very helpful hints for improvement and constructive criticism. T. Lettau and H.A.M. Leymann have contributed equally to this work.
\newpage
\section{Introduction}
Hecke algebras of Coxeter systems are classical objects of study in
representation theory because of their rich connections with finite groups of
Lie type, Lie algebras, quantum groups, and the geometry of flag varieties
(see, for example, \cite{Curtis}, \cite{CurtisIwahori},
\cite{DipperJames}, \cite{GeckPfeiffer}, \cite{KL}, \cite{LusztigCharacters}).
Let $(W,S)$ be a Coxeter system, and let $H$ be its Hecke algebra defined
over the ring $\mathbb{Z}[v,v^{-1}]$. Using Kazhdan-Lusztig polynomials, Lusztig
constructed the \emph{asymptotic Hecke algebra}
$J$ of $(W,S)$ from $H$ in \cite{L2}. The algebra $J$ can be
viewed as a limit of $H$ as the parameter $v$ goes to infinity, and its
representation theory is closely related to that of $H$ (see
\cite{L2}, \cite{L3}, \cite{L4}, \cite{LG}, \cite{Geck_asymptotic}). In particular, upon
suitable extensions of scalars, $J$ admits a natural homomorphism from
$H$, hence representations of $J$ induce representations of $H$
(\cite{LG}).
The asymptotic Hecke algebra $J$ has several interesting features. First,
given a Coxeter system $(W,S)$, $J$ is defined to be the free abelian group
$J=\oplus_{w\in W}\mathbb{Z} t_w$, with multiplication of the basis elements
declared by
\[
t_xt_y=\sum_{z\in W}\gamma_{x,y,z^{-1}}t_z
\]
where the coefficients $\gamma_{x,y,z^{-1}}$ ($x,y,z\in W$) are
nonnegative integers extracted from the structure constants
of the \emph{Kazhdan-Lusztig basis} of the Hecke
algebra $H$ of $(W,S)$. The non-negativity of its structure constants
makes $J$ a \emph{$\mathbb{Z}_+$-ring}, and the basis elements satisfy additional
conditions which make $J$ a \emph{based ring} in the sense of
\cite{Lusztigbased} and \cite{EGNO} (see \cref{sec:subregular J-ring}).
Another interesting feature of $J$ is that for any
\emph{2-sided Kazhdan-Lusztig cell}
$E$ of $W$, the subgroup
\[
J_E=\oplus_{w\in E}\mathbb{Z} t_w
\]
of $J$ is a subalgebra of $J$ and also a based ring. Here, as the notation
suggests, a 2-sided Kazhdan-Lusztig cell is a subset of $W$. The cells of
$W$ are defined using the Kazhdan-Lusztig basis of its associated Hecke algebra
$H$ and form a partition of $W$. Further, the subalgebra $J_E$ is in fact
a direct summand of $J$ for each 2-sided
cell $E$, and $J$ admits the direct sum decomposition
\[
J=\oplus_{E\in \mathcal{C}}J_E,
\]
where $\mathcal{C}$ denotes the collection of all 2-sided cells of $W$
(see \cref{sec:subalgebras}). Thus, it is natural to study $J$ by first
studying its direct summands corresponding to the cells.
In this paper, we
focus on a particular 2-sided cell $C$ of $W$ known as the
\emph{subregular cell} and study the titular based ring $J_C$. We also
study subalgebras $J_s$ of $J_C$ that correspond to the generators $s\in S$
of $W$. Thanks to a result of Lusztig in \cite{subregular}, the cell $C$ can be characterized as the set of elements in $W$ with
unique reduced expressions, and the main theme of the paper is to exploit
this combinatorial characterization and study $J_C$ and $J_s (s\in S)$
without reference to Kazhdan-Lusztig polynomials. This is desirable since a main obstacle in
understanding $J$ for arbitrary Coxeter systems lies in the difficulty of
understanding Kazhdan-Lusztig polynomials.
A third important feature of the algebra $J$ is that it has very interesting
\emph{categorification}. Here by categorification we mean the process of
adding an extra layer of structure to an algebraic object to produce an
interesting category which allows one to recover the object; more specifically,
we mean $J$ appears as the Grothendieck ring of a \emph{tensor category}
$\mathcal{J}$ (see \cite{EGNO} for the definition of a tensor category, \cite{LG} for the construction of $\mathcal{J}$). A well-known example of categorification is the
categorification of the Hecke algebra $H$ by the \emph{Soergel category}
$\mathcal{SB}$, which was used to prove the ``positivity properties'' of the
\emph{Kazhdan-Lusztig basis} of $H$ in \cite{EW}.
Just as the algebra $J$ is constructed from $H$, the category
$\mathcal{J}$ is constructed from the category $\mathcal{SB}$, also by
Lusztig (\cite{LG}). Further, just as the algebra $J$ has a subalgebra of the form $J_E$ for each 2-sided
cell $E$ and a subalgebra $J_s$ for each generator $s\in S$, the category
$\mathcal{J}$ has a subcategory $\mathcal{J}_E$ for each 2-sided cell $E$
and a subcategory $\mathcal{J}_s$ for each $s\in S$. Moreover,
$\mathcal{J}_E$ categorifies $J_E$ for each 2-sided cell $E$, and
$\mathcal{J}_E$ is a \emph{multifusion category} in the sense of
\cite{EGNO} whenever $E$ is finite, which can happen for suitable cells even
when the ambient group $W$ is infinite. Similarly, $\mathcal{J}_s$ is a \emph{fusion
category} whenever $J_s$ has finite rank. Multifusion and fusion
categories have rich connections with quantum groups (\cite{Kassel}),
conformal field theory (\cite{CFT}), quantum knot invariants (\cite{Turaev})
and topological quantum field theory (\cite{BKtcat}), so the categories
$\mathcal{J}_E$ (in particular, $\mathcal{J}_C$) and $\mathcal{J}_s$ are
interesting since they can potentially provide new examples of multifusion and
fusion categories.
Historically, the intimate connection between the algebra $J$ and its
categorification $\mathcal{J}$ has been a major tool in the study of both
objects. For Weyl groups and affine Weyl groups, Lusztig (\cite{L4},
\cite{Lusztig_cells}) and
Bezrukavnikov et al. (\cite{Bez1}, \cite{Bez2}, \cite{Bez3}) showed that there
is a bijection between the
two-sided cells in the group and the unipotent conjugacy classes of
an algebraic group, and that the subcategories of $\mathcal{J}$ corresponding
to the cells can be described geometrically, as categories of vector
bundles on the square of a finite set, equivariant with respect to an algebraic
group. Using the categorical results, they computed the structure constants in
$J$ explicitly.
For other Coxeter systems, however, the nature of
$J$ or $\mathcal{J}$ seems largely unknown, partly because there is no known recourse
to advanced geometry. In this context, our paper may be viewed as an attempt
to understand the subalgebra $J_C$ of $J$ for arbitrary Coxeter systems from
a more combinatorial point of view. We hope to
understand the structure of $J_C$ by examining the multiplication rule in
$J_C$, then, in some cases, use our knowledge of $J$ to deduce the
structure of $\mathcal{J}$. This idea is further discussed in
\cref{sec:background}.
The main results of the paper fall into two sets. First, we
describe some connections between the \emph{Coxeter diagram} $G$ of an arbitrary
Coxeter system $(W,S)$ and the algebra $J_C$ associated to $(W,S)$. The
first result in this spirit describes $J_C$ in terms of $G$ for all
\emph{simply-laced} Coxeter systems. Recall that given any vertex $s$ in
$G$, the fundamental group $\Pi_s(G)$ of $G$ based at $s$ is the group
consisting of all homotopy equivalence classes of walks
in $G$ starting and ending at $s$, equipped with concatenation as the group
operation. One may generalize this notion to define the \emph{fundamental
groupoid} $\Pi(G)$ of $G$ as the set of homotopy equivalence classes of
all walks on $G$, equipped with concatenation as a partial binary
operation that is defined between two classes when their concatenation makes
sense. We define the groupoid algebra $\mathbb{Z}\Pi(G)$ of $\Pi(G)$ by
mimicking the construction of a group algebra from a group, and we prove the following theorem.
\begin{namedtheorem}[A]
\label{simply-laced}
Let $(W,S)$ be any simply-laced Coxeter system, and let $G$ be its Coxeter
diagram. Let $\Pi(G)$ be the fundamental groupoid of $G$, let
$\Pi_s(G)$ be the fundamental group of $G$ based at $s$ for any $s\in S$, let
$\mathbb{Z}\Pi(G)$ be the groupoid algebra of $\Pi(G)$, and let $\mathbb{Z}\Pi_s(G)$ be the
group algebra of $\Pi_s(G)$. Then $J_C\cong \mathbb{Z}\Pi(G)$ as based rings, and
$J_s\cong \mathbb{Z}\Pi_s(G)$ as based rings for all $s\in S$.
\end{namedtheorem}
\noindent The key idea behind the theorem is to find a correspondence between basis
elements of $J_C$ and classes of walks on $G$. The correspondence then
yields explicit formulas for the claimed isomorphisms.
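The correspondence is also easy to experiment with on a computer. In the fundamental groupoid of a graph, each homotopy class of walks is represented by its unique backtrack-free walk, and composition is concatenation followed by cancellation of backtracks. The Python sketch below (our own illustration, using a triangle diagram on vertices $s,t,u$, not an example from this paper) implements this partial product.

```python
def reduce_walk(walk):
    """Delete immediate backtracks (... a, b, a ...) -> (... a ...);
    the reduced (backtrack-free) walk represents the homotopy class."""
    out = []
    for v in walk:
        if len(out) >= 2 and out[-2] == v:
            out.pop()          # cancel the backtrack over the last edge
        else:
            out.append(v)
    return tuple(out)

def compose(w1, w2):
    """Partial product in the fundamental groupoid Pi(G):
    defined only when w1 ends where w2 starts."""
    if w1[-1] != w2[0]:
        raise ValueError("walks not composable")
    return reduce_walk(w1 + w2[1:])

# Toy diagram: a triangle s-t-u; walks are recorded as vertex sequences.
loop = compose(('s', 't', 'u', 's'), ('s', 'u', 't', 's'))
assert loop == ('s',)                 # a loop times its inverse is the identity at s
assert compose(('s', 't'), ('t', 'u')) == ('s', 't', 'u')
```

Basis elements of the groupoid algebra $\mathbb{Z}\Pi(G)$ would then be these reduced walks, with the product of non-composable walks declared to be zero.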
In our second result, we study the case where $G$ is \emph{oddly-connected}.
Here by oddly-connected we mean that every pair of distinct vertices in
$G$ is connected by a path involving only edges of odd weight.
\begin{namedtheorem}[B]
\label{oddly-connected}
Let $(W,S)$ be an oddly-connected Coxeter system. Then
\begin{enumerate}
\item $J_s\cong J_t$ as based rings for all $s,t\in S$.
\item $J_C\cong
\mathrm{Mat}_{S\times S}(J_s)$ as based rings for all $s\in S$. In particular,
$J_C$ is Morita equivalent to $J_s$ for all $s\in S$.
\end{enumerate}
\end{namedtheorem}
\noindent Once again, we will provide explicit isomorphisms between the
algebras using $G$.
In a third result, we describe all \emph{fusion rings} that appear in the form
$J_s$ for some Coxeter system $(W,S)$ and some choice of $s\in S$. We show that any
such fusion ring is isomorphic to a ring $J_s$ associated to a dihedral
system, which is in turn always isomorphic to the \emph{odd part} of a
\emph{Verlinde algebra} associated to the Lie group $SU(2)$ (see Definition
\ref{verlinde def}).
\begin{namedtheorem}[C]
\label{fusion J}
Let $(W,S)$ be a Coxeter system, and suppose $J_s$ is a fusion
ring for some $s\in S$. Then there exists a dihedral Coxeter system
$(W',S')$ such that $J_s\cong
J_{s'}$ as based rings for either $s'\in S'$.
\end{namedtheorem}
In our second set of results, we focus on certain specific Coxeter
systems $(W,S)$ whose Coxeter diagram involves edges of weight $\infty$, and
show that for suitable choices of $s\in S$, $J_s$ is isomorphic to a \emph{free
fusion ring} in the sense of \cite{Banica}. A free fusion ring can be
described in terms of the data of its underlying \emph{fusion set}, and we
describe these data explicitly for each free fusion ring $J_s$ in our examples.
Furthermore, each free fusion ring we discuss is isomorphic to the
Grothendieck ring of the category $\mathrm{Rep}(\mathbb{G})$ of
representations of a known \emph{partition quantum group} $\mathbb{G}$, and we
will identify the group $\mathbb{G}$ in all cases.
Our main theorems appear as Theorem \hyperref[unitary]{D} and
Theorem \hyperref[amalgamate]{E} in \cref{sec:example2} and \cref{sec:example3}, but we omit their technical statements for the
moment.
All the results mentioned above rely heavily on the following theorem, which
says that a combinatorial ``factorization'' of a reduced word of an
element into its \emph{dihedral segments} (see Definition \ref{dihedral
segments def}) carries over to a factorization of
basis elements in $J_C$.
\begin{namedtheorem}[F]{\rm (Dihedral factorization)}
\label{dihedral factorization}
Let $x$ be the reduced word of an element in $C$, and let $x_1,
x_2,\cdots, x_l$ be the dihedral segments of $x$. Then
\[
t_x=t_{x_1}\cdot t_{x_2}\cdots t_{x_l}.
\]
\end{namedtheorem}
The rest of the article is organized as follows. We review some
preliminaries about Coxeter systems and Hecke algebras in Section 2. In
Section 3, we define
the algebras $J$, $J_C$ and $J_{s} (s\in S)$ and explain how $J_C$ and
$J_s (s\in S)$ appear as based rings. We prove
Theorem \hyperref[dihedral factorization]{F} in Section 4 and show how it can be used to compute
products of basis elements in $J_C$. In Section 5, we prove our results on the
connections between $J_C$ and Coxeter diagrams. Finally, we
discuss our second set of results in Section 6, where we prove that certain
rings $J_s$ are free fusion rings.
\subsection*{Acknowledgements.} I would like to thank Victor Ostrik for his
guidance and numerous helpful suggestions. I am also very grateful to
Alexandru Chirvasitu and Amaury Freslon for helpful discussions about free fusion rings
and compact quantum groups. Finally, I would like to acknowledge the
mathematical software {\tt SageMath} (\cite{sagemath}), which was used
extensively for many of our computations.
\section{Preliminaries}
\label{sec:prelim}
In this section we review the basic theory of Coxeter systems and Hecke
algebras relevant to this paper. Our main references are \cite{BB} and \cite{LG}. In
particular, we define the Hecke algebras over the ring $\mathbb{Z}[v,v^{-1}]$ and use a normalization
seen in \cite{LG}, where the quadratic relations are $(T_s-v)(T_s+v^{-1})=0$
for all simple reflections $s$. Otherwise the treatment is standard and
self-contained. Readers familiar with these topics may skip this
section entirely.
\subsection{Coxeter systems}
\label{sec:coxter systems}
A \emph{Coxeter system} is a pair $(W,S)$ where $S$ is a set equipped with a map
$m: S\times S\rightarrow \mathbb{Z}_{\ge 1}\cup \{\infty\}$ such that $m(s,s)=1$ and
$m(s,t)=m(t,s)\ge 2$ for all distinct elements $s,t\in S$, and $W$ is the
group presented by
\begin{equation*}
\label{eq:coxeter definition}
W=\langle S\;\vert\;(st)^{m(s,t)}=1,\; \forall
s,t\in S\rangle.
\end{equation*}
\noindent The group $W$ is called a \emph{Coxeter group}. If $W$ is a finite
group, we say $(W,S)$ is a \emph{finite} Coxeter system.
\begin{example}
\label{example groups}
\begin{enumerate}[leftmargin=2em]
\item (Dihedral groups)
Let $n\in \mathbb{Z}_{\ge 3}$ and let $(W,S)$ be the Coxeter
system with $S=\{s,t\}$ and $W=\langle s,t\;\vert\; s^2=t^2=(st)^n=1\rangle$.
Then $W$ is isomorphic to the dihedral group $D_n$ of order $2n$, the group
of symmetries of a regular $n$-gon $P$.
To see this, let $c$ be the center of $P$, let $d$ be a
vertex of $P$, and let $e$ be the midpoint of an edge incident to $d$. Let
$s'$ and $t'$ be reflections with respect to the two lines going through
$c,d$ and through $c,e$, respectively. Then $s',t'$ are involutions since
they are reflections. Since the two lines form an angle of $\pi/n$, $s't'$ is a rotation by an angle of
$2\pi/n$, hence $(s't')^n=1$. It follows that the map $s\mapsto s', t\mapsto t'$ extends
uniquely to a group homomorphism $\varphi: W \rightarrow D_n$. The
map is surjective since $s',t'$ generate $D_n$, and a moment's
thought reveals that $\abs{W}\le 2n=\abs{D_n}$, therefore $\varphi$ must be
an isomorphism.
\item (Symmetric groups)
Let $n\in \mathbb{Z}_{\ge 2}$, $S=\{s_1,s_2,\cdots, s_{n-1}\}$, and let $W$ be the
Coxeter group generated by $S$ subject to the relations $s_i^2=1$ for all
$i$, $(s_is_j)^3=1$ for all $i,j$ with $\abs{i-j}=1$, and $(s_is_j)^2=1$ for
all $i,j$ with $\abs{i-j}>1$. Then $W$ is isomorphic to the symmetric group
$S_n$. More precisely, let $s_i'$ be the $i$-th basic transposition $(i,i+1)$
in $S_n$, then it is straightforward to check that the map $s_i\mapsto s_i'$
extends to a group isomorphism from $W$ to $S_n$.
\item (Weyl groups)
The Weyl group of a \emph{root system} (\cite{Humphreys}) is a Coxeter
group. Weyl groups constitute the majority of finite Coxeter groups
(see \cite{BB}).
\end{enumerate}
\end{example}
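The presentations in the example above can be checked mechanically. The following Python sketch (an illustration of our own for $n=4$) verifies the type-$A_3$ Coxeter relations for the basic transpositions and confirms that they generate all of $S_4$:

```python
n = 4
identity = tuple(range(n))

def transposition(i):
    """Basic transposition s_i = (i, i+1) in one-line notation (0-indexed)."""
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def mult(p, q):
    """Composition of one-line permutations: (p*q)(k) = p(q(k))."""
    return tuple(p[q[k]] for k in range(n))

def power(p, m):
    out = identity
    for _ in range(m):
        out = mult(out, p)
    return out

s = [transposition(i) for i in range(n - 1)]

# The defining relations of the Coxeter presentation:
for i in range(n - 1):
    assert mult(s[i], s[i]) == identity                    # s_i^2 = 1
for i in range(n - 2):
    assert power(mult(s[i], s[i + 1]), 3) == identity      # (s_i s_{i+1})^3 = 1
for i in range(n - 1):
    for j in range(n - 1):
        if abs(i - j) > 1:
            assert power(mult(s[i], s[j]), 2) == identity  # distant pairs commute

# The s_i generate the whole group: the closure has 4! = 24 elements.
group, frontier = {identity}, {identity}
while frontier:
    frontier = {mult(g, si) for g in frontier for si in s} - group
    group |= frontier
assert len(group) == 24
```

This checks surjectivity of the map from the Coxeter group onto $S_n$; injectivity requires the standard counting argument cited above.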
The data of a Coxeter system $(W,S)$ can be efficiently encoded via a
\emph{Coxeter diagram} $G$. By definition, $G$ is the loopless, weighted,
undirected graph $(V,E)$ with vertex set $V=S$ and with edges $E$ given as
follows.
For any distinct $s,t\in S$, $\{s,t\}$ forms an edge in $G$ exactly when
$m(s,t)\ge 3$, whence the weight of the edge is $m(s,t)$. When drawing a
Coxeter graph, it is conventional to leave edges of weight 3 unlabeled. We call
edges of weight 3 \emph{simple}, and we say $(W,S)$ is \emph{simply-laced} if
all edges of $G$ are simple.
We call a Coxeter system $(W,S)$ \emph{irreducible} if its Coxeter graph $G$ is
connected. This terminology comes from the following fact. If $G$ is not
connected, then each connected component of $G$ encodes a Coxeter system. Since
$m(s,t)=2$ for any vertices $s,t$ selected from different connected components of
$G$, and since $s^2=t^2=1$ now that $m(s,s)=m(t,t)=1$, we have $st=ts$. This
means that the Coxeter groups corresponding to the components commute with each
other, so $W$ is isomorphic to the direct
product of these Coxeter groups, and is hence ``reducible''. Consequently, for
most purposes it suffices to study irreducible Coxeter systems.
\subsection{Combinatorics of words}
\label{sec:word combinatorics}
Let $(W,S)$ be a Coxeter system,
and let $\ip{S}$ be the free monoid generated by
$S$. It is natural to think of elements in $W$ as represented by
elements of $\ip{S}$, or \emph{words} or \emph{expressions} in the alphabet
$S$. We review some basic facts about words in Coxeter groups in this
subsection.
For $w\in W$, we define the \emph{length} of
$w$ in $W$, written $l(w)$, to be the minimal length of a word representing
$w$, and we call any such minimal-length word a \emph{reduced word} or
\emph{reduced expression} of $w$. As we shall see, reduced words lie at the
heart of the combinatorics of Coxeter groups.
Our first fact concerns the order of products of the form $st$ ($s,t\in S$) in
$W$. Recall that $W$ is presented by
\begin{equation}
\label{eq:coxeter group def}
W=\langle S\;\vert\;(st)^{m(s,t)}=1,\; \forall
s,t\in S\rangle,
\end{equation}
hence the order of the element $st$ divides $m(s,t)$ for any $s,t\in S$. But
more is true:
\begin{prop}[\cite{LG}, Proposition 1.3]
\label{order}
If $s\neq t$ in $S$, then $st$ has order $m({s,t})$ in $W$. In particular,
$s\neq t$ in $W$.
\end{prop}
In light of the proposition, we shall henceforth identify $S$ with a subset of
$W$ and call $S$ the set of \emph{simple reflections} in $W$. For $s,t\in S$,
since $s^2=t^2=1$, the relation
$(st)^{m(s,t)}=1$ is equivalent to
\begin{equation}
\label{eq:braid}
sts\cdots=tst\cdots,
\end{equation}
where both sides are words that alternate in $s$ and $t$ and have length
$m(s,t)$. Such a relation is called a \emph{braid relation}.
\begin{example}
\label{dihedral groups}
Let $W$ be the dihedral group with Coxeter generators $S=\{1,2\}$ and
$m(1,2)=M$ for some $M\ge 3$. For $0\le k\le M$, let $1_k$ and $2_k$ be the
alternating words $121\cdots$ and $212\cdots$ of length $k$, respectively. In
particular, set $1_0=2_0=1_W$, the identity element of $W$. By Proposition
\ref{order} and the braid relations, if $M<\infty$, then $W$ consists of the
$2M$ elements $1_k,2_k$ where $0\le k\le M$, and they are all distinct except
the equalities $1_0=2_0$ and $1_M=2_M$; if $M=\infty$, then $W$ consists of the
elements $1_k,2_k$ for all $k\in \mathbb{Z}_{\ge 0}$, and they are all distinct except
for $1_0=2_0$. Moreover, it is clear that $l(1_k)=l(2_k)=k$ for all $0\le k\le
M$.
\end{example}
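The element count in the example can be confirmed by modeling $s$ and $t$ as concrete plane reflections, as in Example \ref{example groups}(1). The Python sketch below (with the illustrative choice $M=5$) enumerates the alternating words $1_k,2_k$ and checks that exactly $2M$ distinct elements arise:

```python
import numpy as np

M = 5  # dihedral group with m(1,2) = 5, order 2M = 10

def reflection(theta):
    """Matrix of the reflection across the line through the origin at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

s1 = reflection(0.0)          # reflection through the line c-d
s2 = reflection(np.pi / M)    # reflection through the line c-e

def key(m):
    """Hashable fingerprint of a matrix (rounded to absorb float noise)."""
    return tuple(np.round(m, 8).ravel())

def alt_word(first, second, k):
    """The alternating product first*second*first*... with k factors."""
    out = np.eye(2)
    for i in range(k):
        out = out @ (first if i % 2 == 0 else second)
    return out

elements = {key(alt_word(s1, s2, k)) for k in range(M + 1)}
elements |= {key(alt_word(s2, s1, k)) for k in range(M + 1)}
# Only 1_0 = 2_0 and 1_M = 2_M coincide, so |W| = 2M:
assert len(elements) == 2 * M
```

The same loop with $M$ replaced by a larger finite value reproduces the corresponding dihedral order $2M$.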
Our second fact concerns the reduced expressions of a fixed element in
$W$. Note that the braid relations mean that
whenever one side of Equation \eqref{eq:braid} appears consecutively in a word representing an element in
$W$, we may replace it
with the other side of the equation and obtain a different expression of the same
element. Call such a move a \emph{braid move}. Then we have:
\begin{prop}[Matsumoto's Theorem; see, e.g., \cite{LG}, Theorem 1.9]
\label{Matsumoto}
Any two reduced words of a same element in $W$ can be obtained from each
other by performing a finite sequence of braid moves.
\end{prop}
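Matsumoto's theorem suggests a simple algorithm: starting from one reduced word, repeatedly apply all possible braid moves until no new words appear. The Python sketch below (our own illustration for the type-$A$ Coxeter matrix, with generators labeled $0,1,2,\dots$) implements this search:

```python
def m(i, j):
    """Coxeter matrix of type A: m = 3 for adjacent generators, 2 otherwise."""
    return 3 if abs(i - j) == 1 else 2

def braid_moves(word):
    """All words obtained from `word` by a single braid move."""
    word = list(word)
    for start in range(len(word)):
        for i, j in {(a, b) for a in word for b in word if a != b}:
            k = m(i, j)
            if start + k > len(word):
                continue
            seg = word[start:start + k]
            alt = [i if t % 2 == 0 else j for t in range(k)]
            if seg == alt:  # replace sts... by tst... (length m(i,j) each)
                yield tuple(word[:start]
                            + [j if t % 2 == 0 else i for t in range(k)]
                            + word[start + k:])

def all_reduced_words(word):
    """By Matsumoto's theorem, braid moves reach every reduced word of the element."""
    seen = {tuple(word)}
    frontier = {tuple(word)}
    while frontier:
        frontier = {w2 for w in frontier for w2 in braid_moves(w)} - seen
        seen |= frontier
    return seen

# The longest element of S_3 (generators 0, 1) has exactly two reduced words:
assert all_reduced_words((0, 1, 0)) == {(0, 1, 0), (1, 0, 1)}
# s_0 and s_2 commute, giving two reduced words of their product in S_4:
assert all_reduced_words((0, 2)) == {(0, 2), (2, 0)}
```

Note that the input word is assumed reduced; braid moves preserve both the element and the word length.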
Our third fact concerns the \emph{descent sets} of elements in $W$. For $x\in W$, define the \emph{left descent set} and \emph{right descent set}
of $x$ to be the sets
\[
\mathcal{L}(x)=\{s\in S: l(sx)<l(x)\},
\]
\[
\mathcal{R}(x)=\{s\in S: l(xs)<l(x)\},
\]
respectively. Descent sets can again be characterized in terms of reduced
words:
\begin{prop}[Descent criterion; \cite{BB}, Corollary 1.4.6]
\label{descent}
Let $s\in S$ and $x\in W$. Then
\begin{enumerate}
\item $s\in \mathcal{L}(x)$ if and only if $x$ has a reduced
word beginning with $s$;
\item $s\in \mathcal{R}(x)$ if and only if $x$ has a reduced
word ending with $s$.
\end{enumerate}
\end{prop}
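For symmetric groups, descent sets can be computed directly from the definition. The following Python sketch does so for $S_4$ and confirms the classical description (a standard fact for type $A$, not stated above) that $s_i\in \mathcal{R}(w)$ exactly when $w(i)>w(i+1)$ in one-line notation:

```python
from itertools import permutations

def length(w):
    """Coxeter length in S_n = number of inversions of the one-line word w."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def right_mult(w, i):
    """w * s_i: swap the entries in positions i and i+1 (0-indexed)."""
    w = list(w)
    w[i], w[i + 1] = w[i + 1], w[i]
    return tuple(w)

def right_descents(w):
    """R(w) computed straight from the definition l(ws) < l(w)."""
    return {i for i in range(len(w) - 1) if length(right_mult(w, i)) < length(w)}

# For every w in S_4, the definition agrees with the inversion description:
for w in permutations(range(4)):
    assert right_descents(w) == {i for i in range(3) if w[i] > w[i + 1]}
```

Left descents can be obtained the same way via $\mathcal{L}(w)=\mathcal{R}(w^{-1})$.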
Finally, our fourth fact concerns the \emph{Bruhat order} $\le$ on $W$. The
Bruhat order is an important partial order on $W$ defined as follows. First,
define a \emph{reflection} in $W$ to be a
conjugate of any simple reflection, i.e., an element of the form
$t=s_1s_2\cdots s_{k-1}s_ks_{k-1}\cdots s_2s_1$ where
$s_i\in S$ for all $1\le i\le k$. Then, declare a relation $\prec$ on $W$ such
that $x\prec y$ for $x,y\in W$ if and only if $x=ty$ and $l(x)<l(y)$
for some {reflection} $t$. Finally, take the Bruhat order $\le$ to be the reflexive and transitive closure of
$\prec$.
Once again, our fact says that there is a characterization of
the Bruhat order in terms of
reduced words. Define a \emph{subword} of any word $s_1s_2\cdots s_k\in \ip{S}$ to be a word of the form $s_{i_1}s_{i_2}\cdots s_{i_l}$ where
$1\le i_1<i_2< \cdots < i_l\le k$. The fact says:
\begin{prop}[Subword Property; \cite{BB}, Corollary 2.2.3]
Let $x,y\in W$. Then the following are equivalent:
\begin{enumerate}
\item $x\le y$;
\item every reduced word for
$y$ contains a subword that is a reduced word for $x$;
\item some reduced word for $y$ contains a subword that is a
reduced word for $x$.
\end{enumerate}
\end{prop}
This immediately implies the following:
\begin{corollary}[\cite{BB}, Corollary 2.2.5]
\label{Bruhat auto}
The map $w\mapsto w^{-1}$ on $W$ is an automorphism of the Bruhat order, i.e.,
$u\le w$ if and only if $u^{-1}\le w^{-1}$.
\end{corollary}
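The subword property yields a practical test for the Bruhat order: by the equivalence of (1)--(3), it suffices to scan the subwords of a single reduced word of $y$. The Python sketch below (an illustration of our own for $S_3$, generators labeled $0,1$) implements this test:

```python
from itertools import combinations, permutations

n = 3
identity = tuple(range(n))

def s(i):
    p = list(range(n)); p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def mult(p, q):
    return tuple(p[q[k]] for k in range(n))

def evaluate(word):
    """Multiply out a word in the generators, left to right."""
    out = identity
    for i in word:
        out = mult(out, s(i))
    return out

def length(w):
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

def bruhat_le(x, y, reduced_word_y):
    """x <= y iff the fixed reduced word of y has a subword of length l(x)
    evaluating to x (criteria (2)/(3) of the subword property)."""
    k = length(x)
    return any(evaluate(sub) == x for sub in combinations(reduced_word_y, k))

w0 = (2, 1, 0)                       # longest element of S_3, reduced word (0, 1, 0)
for x in permutations(range(n)):
    assert bruhat_le(x, w0, (0, 1, 0))            # w0 is the Bruhat maximum
assert not bruhat_le((1, 0, 2), (0, 2, 1), (1,))  # s_0 and s_1 are incomparable
```

A subword of length $l(x)$ evaluating to $x$ is automatically a reduced word for $x$, which is why no separate reducedness check is needed.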
\subsection{Hecke algebras}
\label{sec:hecke algebras}
In this subsection we review some basic facts about Hecke algebras and their
Kazhdan-Lusztig theory. Throughout, let $\mathcal{A}=\mathbb{Z}[v,v^{-1}]$.
Let $(W,S)$ be an arbitrary Coxeter system. Following \cite{LG}, we define the
\emph{Iwahori-Hecke algebra} (or simply the
\emph{Hecke algebra} for short) of $(W,S)$ to be the unital
$\mathcal{A}$-algebra ${H}$ generated by the set $\{T_s:s\in S\}$ subject to
the relations
\begin{equation} \label{eq:quadratic}
(T_s-v)(T_s+v^{-1})=0
\end{equation}
for all $s\in S$ and the relations
\begin{equation}
T_sT_tT_s\cdots=T_tT_sT_t\cdots \label{eq:hecke-braid}
\end{equation}
for all $s,t\in S$, where both sides have $m(s,t)$ factors. Note that when we
set $v=1$, the quadratic relation reduces to $T_s^2=1$, so ${H}$ is
isomorphic to the group algebra $\mathbb{Z} W$ of $W$ by the braid relations in
$W$ and \eqref{eq:hecke-braid}. In this sense, ${H}$ is often called a
\emph{deformation} of $\mathbb{Z} W$.
Let $x\in W$, let $s_1s_2\cdots s_k$ be any reduced word of $x$, and set
$T_x:=T_{s_1}\cdots T_{s_k}$. Thanks to Proposition \ref{Matsumoto} and
Equation \eqref{eq:hecke-braid}, this is well-defined, i.e., different reduced words of
$x$ produce the same element in ${H}$. The following is well known.
\begin{prop}[\cite{LG}, Proposition 3.3]
\label{standard basis}
The set $\{T_x:x\in W\}$ is an $\mathcal{A}$-basis of ${H}$.
\end{prop}
\noindent The basis $\{T_x:x\in W\}$ is called the \emph{standard basis} of
${H}$. In the seminal paper \cite{KL}, Kazhdan and Lusztig introduced another
basis $\{c_x:x\in W\}$ of $H$ that is now known as the
\emph{Kazhdan-Lusztig basis}. The transition matrices
between the two bases give rise to the famous \emph{Kazhdan-Lusztig polynomials}. By
definition, they are the elements $p_{x,y}\in \mathcal{A}$ for which
\[
c_y=\sum_{x\in W} p_{x,y} T_x
\]
for all $x,y\in W$.
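Multiplication in the standard basis can be carried out mechanically. With our normalization, the quadratic relation \eqref{eq:quadratic} yields the well-known rule $T_sT_w=T_{sw}$ if $l(sw)>l(w)$ and $T_sT_w=T_{sw}+(v-v^{-1})T_w$ otherwise. The Python sketch below (an illustration for $S_3$, with Laurent polynomials stored as exponent-to-coefficient dictionaries, a representation of our own choosing) implements left multiplication by $T_{s_i}$:

```python
n = 3
identity = tuple(range(n))

def s(i):
    p = list(range(n)); p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def mult_perm(p, q):
    return tuple(p[q[k]] for k in range(n))

def length(w):
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

def add_into(h, w, power, coeff):
    """Add coeff * v^power * T_w into the element h = {w: {power: coeff}}."""
    h.setdefault(w, {})
    h[w][power] = h[w].get(power, 0) + coeff

def Ts_times(i, h):
    """Left multiplication T_{s_i} * h, using
    T_s T_w = T_{sw} if l(sw) > l(w), else T_{sw} + (v - v^{-1}) T_w."""
    out = {}
    for w, poly in h.items():
        sw = mult_perm(s(i), w)
        for power, coeff in poly.items():
            add_into(out, sw, power, coeff)
            if length(sw) < length(w):
                add_into(out, w, power + 1, coeff)       # + v T_w
                add_into(out, w, power - 1, -coeff)      # - v^{-1} T_w
    return out

# T_{s_0} T_{s_0} = 1 + (v - v^{-1}) T_{s_0}, matching the quadratic relation:
h = Ts_times(0, Ts_times(0, {identity: {0: 1}}))
assert h[identity] == {0: 1}
assert h[s(0)] == {1: 1, -1: -1}
```

Setting $v=1$ kills the $(v-v^{-1})$ correction, recovering the group-algebra multiplication of $\mathbb{Z} W$ mentioned above.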
\begin{notation}
From now on we will mention the phrase ``Kazhdan-Lusztig'' numerous times. We
will often abbreviate it to ``KL''.
\end{notation}
\begin{remark}
In the paper \cite{KL} where the KL basis and polynomials were first
introduced, the Hecke algebra $H$ is defined over
a ring $\mathbb{Z}[q]$ and the KL polynomials are polynomials in $q$. Under our choice of
normalization for $H$, however, we actually have $p_{x,y}\in \mathbb{Z}[v^{-1}]
$ for all $x,y\in W$ (see \cite{LG}, $\S 5$).
\end{remark}
\subsection{Kazhdan-Lusztig theory}
\label{sec:KL theory}
KL bases and KL polynomials are essential to the construction of asymptotic
Hecke algebras. We recall the relevant facts below.
First, for $x,y,z\in W$, let $h_{x,y,z}$ be the unique elements in $\mathcal{A}$ such that
\begin{equation}
c_xc_y=\sum_{z\in W}h_{x,y,z}c_z.
\label{eq:h polynomials}
\end{equation}
The following theorem says that both the KL polynomials $p_{x,y}$ and the
coefficients $h_{x,y,z}$ always have nonnegative integer coefficients.
\begin{thm}[Positivity of the KL basis and KL polynomials; \cite{EW}]
\label{positivity}
{\hspace{2em}}
\begin{enumerate}
\item $p_{x,y}\in \mathbb{N}[v,v^{-1}]$ for all $x,y\in W$.
\item $h_{x,y,z}\in \mathbb{N}[v,v^{-1}]$ for all $x,y,z\in W$.
\end{enumerate}
\end{thm}
\noindent As mentioned in the introduction, these facts are proved using the
categorification of ${H}$ by the Soergel category $\mathcal{SB}$ associated
with the Coxeter system of ${H}$.
\begin{remark}
\label{polynomial}
It is well-known that KL polynomials can be
computed recursively with the aid of the so-called
\emph{$R$-polynomials}. This is explained in sections 4 and 5 of \cite{LG},
and example computations can be found in Chapter 5 of
\cite{BB}. However, the computation is
often very difficult to carry out in practice, even for computers, and the
algorithm does not seem adequate for proving part (1) of the above theorem.
\end{remark}
Second, we recall a multiplication formula for KL basis elements in ${H}$.
For $x,y\in W$, let $\mu_{x,y}$ denote the coefficient of $v^{-1}$ in $p_{x,y}$
and call it a \emph{$\mu$-coefficient}. The $\mu$-coefficients can be used to define representations of ${H}$ via
\emph{$W$-graphs} (\cite{KL}). They also govern the
multiplication of KL basis elements in ${H}$:
\begin{prop}[Multiplication of the KL basis; \cite{LG}, Theorem 6.6, Corollary 6.7]
\label{KL basis mult}
Let $y\in W$, $s\in S$, and let $\le$ be the Bruhat order on $W$. Then
\begin{eqnarray*}
c_s c_y&=&
\begin{cases}
(v+v^{-1}) c_y&\quad\qquad\text{if}\quad sy<y\\
c_{sy}+\sum\limits_{z:sz<z<y} \mu_{z,y}c_z&\quad\qquad \text{if} \quad sy>y
\end{cases},\\
&&\\
c_y c_s&=&
\begin{cases}
(v+v^{-1}) c_y&\quad \text{if}\quad ys<y\\
c_{ys}+\sum\limits_{z:zs<z<y} \mu_{z^{-1},y^{-1}}c_z&\quad \text{if} \quad ys>y
\end{cases}.
\\
\end{eqnarray*}
\end{prop}
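\noindent For instance, for distinct $s,t\in S$ with $m(s,t)\ge 3$, the proposition gives
\[
c_sc_s=(v+v^{-1})\,c_s,\qquad
c_sc_t=c_{st},\qquad
c_sc_{ts}=c_{sts}+c_s.
\]
In the first product $ss<s$; in the second the sum is empty, since no $z\in W$ satisfies $sz<z<t$; in the third the only contributing term is $z=s$, with $\mu_{s,ts}=1$ because $p_{s,ts}=v^{-1}$.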
Next, we define \emph{Kazhdan-Lusztig cells}. For each $x\in W$, let
$D_x:{H}\rightarrow\mathcal{A}$ be the linear map such that
\[
D_x(c_{y})=\delta_{x,y}
\]
for all $y\in W$, where $\delta_{x,y}$ is the Kronecker delta symbol. For $x,y\in W$, write
$x\prec_{L} y$ if $D_x(c_sc_{y})\neq 0$ for some $s\in S$, and write
$x\prec_R y$ if $D_x(c_{y}c_s)\neq 0$ for some $s\in S$. Define $\le_L$ and
$\le_R$ to be the transitive closures of $\prec_L$ and $\prec_R$,
respectively, and define another partial order $\le_{LR}$ by declaring that
$x\le_{LR} y$ if there exists a sequence $x=z_1,\cdots, z_n=y$ in $W$ such that
$z_{i}\prec_L z_{i+1}$ or $z_{i}\prec_R z_{i+1}$ for all $1\le i\le n-1$. Finally,
define $\sim_L$ to be the equivalence relation such that
$x\sim_L y$ if and only if we have both $x\le_L y$ and $y\le_L x$, and
define $\sim_R$ and $\sim_{LR}$ similarly. The equivalence classes of $\sim_L,
\sim_R$ and $\sim_{LR}$ are called the \emph{left (Kazhdan-Lusztig) cells},
\emph{right (Kazhdan-Lusztig) cells} and \emph{2-sided (Kazhdan-Lusztig) cells} of
$W$, respectively. Clearly, each 2-sided KL cell is a union of left cells as
well as a union of right cells. Since the elements $c_s$ ($s\in S$) generate ${H}$ as an $\mathcal{A}$-algebra, the
following is also clear:
\begin{prop}[\cite{LG}, Lemma 8.2]
\label{KL order and ideals}
Let $y\in W$. Then
\begin{enumerate}
\item The set ${H}_{\le_L y}:=\oplus_{x:x\le_L y} \mathcal{A} c_x$ is a
left ideal of ${H}$.
\item The set ${H}_{\le_R y}:=\oplus_{x:x\le_R y} \mathcal{A} c_x$ is a right ideal
of ${H}$.
\item The set ${H}_{\le_{LR} y}:=\oplus_{x:x\le_{LR} y} \mathcal{A} c_x$ is a
2-sided ideal of ${H}$.
\end{enumerate}
\end{prop}
Observe that by Proposition \ref{KL basis mult}, we have $x\prec_L y$ if and only
if either
$x=y$ or $x<y, \mu_{x,y}\neq 0$ and
$\mathcal{L}(x)\not\subseteq\mathcal{L}(y)$. Similarly, $x\prec_R y$ if and only if
either $x=y$ or $x<y, \mu_{x^{-1},y^{-1}}\neq 0$ and
$\mathcal{R}(x)\not\subseteq\mathcal{R}(y)$.
Corollary \ref{Bruhat auto} now implies that $x\le_L y$
if and only if $x^{-1}\le_R y^{-1}$. We have just proved
\begin{prop}[\cite{LG}, Section 8.1]
\label{inverse and cells}
The map $x\mapsto x^{-1}$
takes left cells in $W$ to right cells, right cells to left cells, and
2-sided cells to 2-sided cells.
\end{prop}
We will need to use one more fact later.
\begin{prop}[\cite{LG}, Section 14.1]
\label{inverse-cells}
For any $x\in W$, we have $x\sim_{LR} x^{-1}$.
\end{prop}
\section{The subregular $J$-ring}
\label{sec:subregular J}
In this section, we describe Lusztig's construction of the asymptotic Hecke
algebra $J$ of a Coxeter system and recall some basic properties of $J$. We
show how KL cells in $W$ give rise to subalgebras of $J$, then shift our focus
to a particular algebra $J_C$ of $J$ corresponding to the subregular cell of
$W$. We also recall the definition of a based ring and explain why $J_C$ is a
based ring.
Throughout the section, suppose $(W,S)$ is an arbitrary Coxeter system with
$S=[n]=\{1,2,\cdots,n\}$ unless otherwise stated. Let ${H}$ be the
Iwahori-Hecke algebra of $(W,S)$, and let $\{T_w:w\in W\}, \{c_w:w\in W\}$ and
$\{p_{y,w}:y,w\in W\}$ be the standard basis, KL basis and KL polynomials in
${H}$, respectively.
\subsection{The asymptotic Hecke algebra $J$}
\label{sec:J}
Consider the elements $h_{x,y,z}\in \mathbb{Z}[v,v^{-1}]$ ($x,y,z\in W$) from
Equation \ref{eq:h polynomials}.
Lusztig showed in \cite{LG} that for any $z\in W$, there exists a unique integer
$\mathbf{a}(z)\ge 0$ that satisfies the conditions
\begin{enumerate}
\item[(a)] $h_{x,y,z}\in v^{\mathbf{a}(z)}\mathbb{Z}[v^{-1}]$ for all $x,y\in W$,
\item[(b)] $h_{x,y,z}\not\in v^{\mathbf{a}(z)-1}\mathbb{Z}[v^{-1}]$ for some $x,y\in W$.
\end{enumerate}
\noindent Define $\gamma_{x,y,z^{-1}}$ to be the nonnegative integer such that
\[
h_{x,y,z}=\gamma_{x,y,z^{-1}}v^{\mathbf{a}(z)}\,\mod v^{\mathbf{a}(z)-1}\mathbb{Z}[v^{-1}],
\]
and define multiplication on the free abelian group $J=\oplus_{w\in W}\mathbb{Z}\, t_w$
by
\[
t_{x}t_y=\sum_{z\in W} \gamma_{x,y,z^{-1}} t_z
\]
for all $x,y\in W$. It is known that this product is well-defined (i.e.,
$\gamma_{x,y,z^{-1}}=0$ for all but finitely many $z\in W$ for all $x,y\in
W$), and the multiplication defined above is associative,
making $J$ a ring (see \cite{LG}, 18.3). We call $J$ the \emph{asymptotic Hecke algebra} or simply
the \emph{$J$-ring} of $(W,S)$.
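\noindent As a sanity check in the smallest nontrivial case, take $(W,S)$ of type $A_1$, so that $W=\{e,s\}$. Then $c_sc_s=(v+v^{-1})c_s$, so
\[
h_{s,s,s}=v+v^{-1},\qquad \mathbf{a}(s)=1,\qquad
\gamma_{s,s,s}=1,\qquad \gamma_{s,s,e}=0,
\]
and therefore $t_st_s=t_s$ in $J$; in particular, $t_s$ is an idempotent.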
The following facts about the coefficients $\gamma_{x,y,z}$ and $J$ will be useful later.
\begin{prop}[\cite{LG}, Proposition 13.9]
For all $x,y,z\in W$, $\gamma_{y^{-1}, x^{-1},
z^{-1}}=\gamma_{x,y,z}$.
\end{prop}
\noindent Note that this immediately implies the following.
\begin{corollary}
\label{J anti involution}
The $\mathbb{Z}$-linear map sending $t_x$ to $t_{x^{-1}}$ is an
anti-involution of $J$.
\end{corollary}
\subsection{Subalgebras of $J$}
\label{sec:subalgebras}
For each $x\in W$, let $\Delta(x)$ be the unique non-negative integer such that
\[
p_{1,x}\in n_xv^{-\Delta(x)}+v^{-\Delta(x)-1}\mathbb{Z}[v^{-1}]
\]
for some $n_x\neq 0$. This makes sense by Remark \ref{polynomial}. Let
\[
\mathcal{D}=\{x\in W:\mathbf{a}(x)=\Delta(x)\}.
\]
It is known that $d^2=1$ for all $d\in \mathcal{D}$, and $\mathcal{D}$ is called the set of
\emph{distinguished involutions}. There are many intricate connections
between $\mathcal{D}$, the coefficients $\gamma_{x,y,z}$, and KL cells in
$W$. These
connections lead to subalgebras of $J$ that are indexed by cells
and have units provided by the distinguished involutions.
\begin{prop}[\cite{LG}, Conjectures 14.2]
\label{gamma and cells}
Let $x,y,z\in W$. Then
\begin{enumerate}
\item $\gamma_{x,y,z}=\gamma_{y,z,x}$.
\item If $\gamma_{x,y,z}\neq 0$, then $x\sim_L y^{-1}, y\sim_L
z^{-1}, z\sim_L x^{-1}$.
\item If $\gamma_{x,y,d}\neq 0$ for some $d\in \mathcal{D}$, then $y=x^{-1}$
and $\gamma_{x,y,d}=1$.
Further, for each $x\in W$ there is a unique element $d\in \mathcal{D}$ such
that $\gamma_{x,x^{-1},d}=1$.
\item Each left KL cell $\Gamma$ of $W$ contains a unique element $d$ from
$\mathcal{D}$. Further, for this element $d$, we have $\gamma_{x^{-1},
x,d}=1$ for all $x\in \Gamma$. \end{enumerate} \end{prop}
\begin{remark}
In the paper \cite{L2}, where Lusztig first defined the asymptotic Hecke
algebra $J$, Proposition \ref{gamma and cells} is proved for Coxeter systems
satisfying certain mild conditions. The conditions can be found in Section
1.1 of the paper, the four parts of the proposition appear in Theorem 1.8,
Corollary 1.9, Proposition 1.4 and Theorem 1.10 of the paper, respectively.
For arbitrary Coxeter systems, the statements of the proposition, as well as
the statement in Proposition \ref{inverse-cells}, appear only
as conjectures in Chapter 14 of \cite{LG}. However, \cite{LG} studies Hecke
algebras in a more general setting, namely, with possibly \emph{unequal
parameters}, and the statements are known to be true in the setting of this
paper, which is called the \emph{equal parameter} or the \emph{split} case in
the book. The proofs of the statements rely heavily on Theorem
\ref{positivity}; see Chapter 15 of \cite{LG}. \end{remark}
\begin{definition}
\label{support}
For any subset $X$ of $W$, define $J_X:=\oplus_{w\in X}\mathbb{Z} t_w$.
\end{definition}
\begin{corollary}[\cite{LG}, Section 18.3]\hfill
\label{subalgebra}
\begin{enumerate}
\item[(a)] Let $\Gamma$ be any left KL cell in $W$, say with $\Gamma\cap
\mathcal{D}=\{d\}$. Then the subgroup $J_{\Gamma\cap \Gamma^{-1}}$ is actually
a unital subalgebra of $J$; its unit is $t_d$.
\item[(b)] For any 2-sided cell
$E$ in $W$, the subgroup $J_E$ is a subalgebra of $J$. Further, we have a
direct sum decomposition $J=\oplus_{E\in \mathcal{C}}J_E$ of algebras, where
$\mathcal{C}$ is the collection of all 2-sided KL cells of $W$.
\item[(c)] If $E$ is a 2-sided cell such that $E \cap \mathcal{D}$ is finite, then
$J_E$ is a unital algebra with unit element $\sum_{d\in E\cap \mathcal{D}}
t_d$.
\item[(d)] If
$\mathcal{D}$ is finite, then $J$ is a unital algebra with unit $\sum_{d\in
\mathcal{D}} t_d$. \end{enumerate} \end{corollary}
\begin{proof}
We will repeatedly use Proposition \ref{gamma and cells}. When we say part
($i$), we will mean part ($i$) of the proposition.
\begin{enumerate}[leftmargin=2em]
\item[(a)] Let $x,y\in \Gamma\cap\Gamma^{-1}$, and suppose
$\gamma_{x,y,z^{-1}}\neq 0$ for some $z\in W$. Then by part (2),
$z=(z^{-1})^{-1}\sim_L y\in \Gamma$, and $z^{-1} \sim_L x^{-1}$
so that $z\sim_{R} x\in \Gamma^{-1}$ (since the inverse map takes left
cells to right cells by Proposition \ref{inverse and cells}). Thus, $z\in
\Gamma\cap\Gamma^{-1}$. It follows that $J_{\Gamma\cap\Gamma^{-1}}$
is a subalgebra of $J$.
It remains to show that $t_xt_d=t_x=t_dt_x$ for all $x\in
\Gamma\cap\Gamma^{-1}$. By parts (1) and (3),
$\gamma_{d,x,y}=\gamma_{x,y,d}\neq 0$ for some $y\in \Gamma\cap\Gamma^{-1}$ only
if $y=x^{-1}$, and in this case
$\gamma_{d,x,y}=\gamma_{d,x,x^{-1}}=\gamma_{x,x^{-1},d}=1$. This
implies $t_dt_x=t_x$.
Similarly, $\gamma_{x,d,y}=\gamma_{y,x,d}\neq 0$ for some $y\in
\Gamma\cap\Gamma^{-1}$ only if $y=x^{-1}$, whence
$\gamma_{x,d,y}=\gamma_{x,d,x^{-1}}=\gamma_{x^{-1},x,d}=1$ by Part
(4). This implies $t_xt_d=t_x$.
\item[(b)] Let $x,y\in E$, and suppose $\gamma_{x,y,z^{-1}}\neq 0$
for some $z\in W$. Let $\Gamma$ be the left cell containing $y$. Then
$\Gamma\subseteq E$. By part (2), $z\sim_L y$, therefore $z\in \Gamma\subseteq E$ as
well, hence $J_E$ is a subalgebra.
Now suppose $x,y\in W$ belong in
different 2-sided cells, say with $x\in E$ and $y\in E'$. Then
$y^{-1}\in E'$ by Proposition \ref{inverse-cells}, hence
$x\not\sim_L y^{-1}$. Part (2) now implies that $\gamma_{x,y,z^{-1}}=0$ for all
$z\in W$, therefore $t_xt_y=0$. It follows that
$J=\oplus_{E\in\mathcal{C}} J_E$.
\item[(c)] By part (4) of Proposition \ref{gamma and cells}, the fact that $E\cap \mathcal{D}$ is finite implies $E$ is a
disjoint union of finitely many left cells $\Gamma_1,\cdots, \Gamma_k$.
Suppose $\Gamma_i\cap \mathcal{D}=\{d_i\}$ for each $i\in [k]$, and let
$x\in E$, say with $x\in \Gamma_i$ and $x^{-1} \in \Gamma_{i'}$ for some $i,
i' \in [k]$. Then by parts (1), (2) and (3),
$\gamma_{x,d_j,y}=\gamma_{y,x,d_j}\neq 0$ for some $y\in E, j\in [k]$ only
if $d_j\sim_L x$ and $y=x^{-1}$. In this case,
$j=i$ and $\gamma_{x,d_j,y}=\gamma_{y,x,d_i}=1$ by part (4). Consequently,
\[
t_x\left(\sum_{j=1}^k t_{d_j}\right) =t_{x}t_{d_i}=t_x.
\]
Similarly, $\gamma_{d_j,x,y}=\gamma_{x,y,d_j}\neq 0$ for some $y\in E, j\in
[k]$ only if $d_j\sim_L x^{-1}$ and $y=x^{-1}$, in which case
$j=i'$ and $\gamma_{d_j,x,y}=\gamma_{x,x^{-1},d_{i'}}=1$. Consequently,
\[
\left(\sum_{j=1}^k t_{d_j}\right)t_x =t_{d_{i'}}t_{x}=t_x.
\]
It follows that $\sum_{d\in E\cap \mathcal{D}}t_d=\sum_{j=1}^k t_{d_j}$ is the unit of
$J_E$, as claimed.
\item[(d)] Let $x\in W$, and let $d_1, d_2$ be the unique distinguished involution in
the left cell of $x$ and $x^{-1}$, respectively. To show $\sum_{d\in
\mathcal{D}}t_d$ is the unit of $J$, it suffices to show that
\[
t_x\left(\sum_{d\in \mathcal{D}}
t_d\right)=t_xt_{d_1}=t_x=t_{d_2}t_{x}=\left(\sum_{d\in\mathcal{D}}t_d\right)t_x.
\]
This can be proved in a similar way to the last part. \qedhere
\end{enumerate}
\end{proof}
\begin{remark}
In part (d) of the corollary, we dealt with the case where $\mathcal{D}$ is finite.
When $\mathcal{D}$ is infinite, $J$ only has a generalized unit element in the
sense that the elements $t_d$ ($d\in\mathcal{D}$) satisfy $t_dt_{d'}=\delta_{d,d'}t_d$
and $\sum_{d,d'\in \mathcal{D}}t_dJt_{d'}=J$. Lusztig also showed that even when
$\mathcal{D}$ is not finite, $J$ can be naturally embedded into a certain unital
algebra (\cite{LG}, 18.13). We will not need these technicalities, though.
\end{remark}
\subsection{The subregular $J$-ring}
\label{sec:subregular J-ring}
We are now ready to introduce our main objects of study. Consider the following
sets.
\begin{definition}
Let $C$ denote the set of all non-identity elements in $W$ with a unique reduced
expression. For each $s\in S$, let $\Gamma_s$ be the set of
all elements in $C$ whose reduced
expression ends in $s$.
\end{definition}
\noindent We will be interested in the groups $J_C$ and
$J_{\Gamma_s\cap\Gamma_s^{-1}}$ (see Definition \ref{support}).
Thanks to the following theorem and Corollary \ref{subalgebra}, they are
actually subalgebras of $J$.
\begin{thm}[\cite{subregular}, 3.8]
\label{subregular}
The set $C$ is a 2-sided Kazhdan-Lusztig cell of $W$, and $\Gamma_s$
is a left Kazhdan-Lusztig cell of $W$ for each $s\in S$.
\end{thm}
\begin{definition}
\label{subregular cell}
We call the cell $C$ the
\emph{subregular cell} of $W$, and we call the subalgebra $J_C$ the
\emph{based ring of the subregular cell} of $(W,S)$, or simply the \emph{subregular
$J$-ring} of $(W,S)$.
For each $s\in S$, we write $J_s:=
J_{\Gamma_s\cap\Gamma_s^{-1}}$.
\end{definition}
The rest of
the paper is devoted to the study of the algebras $J_C$ and $J_s (s\in S)$.
These algebras naturally possess the additional
structure of a \emph{based ring}. We explain this below.
The following three definitions are taken from Chapter 3 of \cite{EGNO}.
\begin{definition}[$\mathbb{Z}_+$-rings]
Let $A$ be a ring which is free as a $\mathbb{Z}$-module.
\begin{enumerate}
\item A \emph{$\mathbb{Z}_+$-basis} of $A$ is a basis $B=\{t_i\}_{i\in I}$ such
that for all $i,j\in I$,
$t_it_j=\sum_{k\in I}c_{ij}^k t_k$ where $c_{ij}^k\in \mathbb{Z}_{\ge 0}$ for all
$k\in I$.
\item A \emph{$\mathbb{Z}_+$-ring} is a ring with a fixed $\mathbb{Z}_+$-basis and with
identity 1 which is a nonnegative linear combination of the basis
elements.
\item A \emph{unital} $\mathbb{Z}_+$-ring is a $\mathbb{Z}_+$-ring such that 1 is a basis
element.
\end{enumerate}
\end{definition}
Let $A$ be a $\mathbb{Z}_+$-ring, and let $I_0$ be the set of $i\in I$ such that
$t_i$ occurs in the decomposition of 1. We call
$I_0$ the \emph{distinguished index set}. Let $\tau: A\rightarrow \mathbb{Z}$ denote the
group homomorphism defined by
\[
\tau(t_i)=
\begin{cases}
1 &\quad \text{if}\quad i\in I_0,\\
0 &\quad \text{if}\quad i\not\in I_0.
\end{cases}
\]
\begin{definition}[Based rings]
\label{based rings def}
A $\mathbb{Z}_+$-ring $A$ with a basis $\{t_i\}_{i\in I}$ is called a
\emph{based ring} if there exists an involution $i\mapsto i^*$ such that
the induced map
\[
a=\sum_{i\in I} c_it_i\mapsto a^*:=\sum_{i\in I} c_it_{i^*},\qquad c_i\in \mathbb{Z}
\]
is an anti-involution of the ring $A$, and
\begin{equation}\label{eq:based ring}
\tau(t_it_j)=
\begin{cases}
1 &\quad \text{if}\quad i=j^*,\\
0 &\quad \text{if}\quad i\neq j^*.
\end{cases}
\end{equation}
\end{definition}
\begin{definition}[Multifusion rings and fusion rings]
\label{fusion def}
A \emph{multifusion ring} is a based ring of finite rank. A \emph{fusion
ring} is
a unital based ring of finite rank.
\end{definition}
We now use results from \cref{sec:subalgebras} to show that under certain finiteness
conditions, all the subalgebras of $J$ introduced in the subsection are based
rings.
\begin{prop}
\label{based structure}
\begin{enumerate}
\item Let $E$ be any 2-sided KL cell in $W$ that contains finitely many
distinguished involutions. Then the algebra $J_E$ is a based ring with
basis $\{t_x\}_{x\in I}$ with index set $I=E$, with distinguished index set $I_0=E\cap \mathcal{D}$, and with
the map $^*: I\rightarrow I$ given by $x^*=x^{-1}$.
\item Let $\Gamma$ be any left KL cell in $W$, and let $d$ be the unique
element in $\Gamma\cap \mathcal{D}$. Then $J_{\Gamma\cap\Gamma^{-1}}$ is a
unital based ring with index set $I=\Gamma\cap\Gamma^{-1}$, with distinguished
index set $I_0=\{d\}$, and with $^*:I\rightarrow I$ given by $x^*=x^{-1}$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) The set $\{t_x\}_{x\in E}$ forms a $\mathbb{Z}_+$-basis of $J_E$ by the
definition of $J_E$, and $J_E$ is a $\mathbb{Z}_+$-ring with distinguished index
set $E\cap \mathcal{D}$ since its unit is
$\sum_{d\in E\cap\mathcal{D}}t_d$ by Part (c) of Corollary \ref{subalgebra}.
The fact that $x\mapsto x^{-1}$ induces an anti-involution on
$J_E$ follows from Corollary \ref{J anti involution}. Finally, Equation \eqref{eq:based ring} holds by parts
(3) and (4) of Proposition \ref{gamma and cells}. We have now shown that
$J_E$ is a based ring.
(2) The proof is similar to the previous part, with the only
difference being that $J_{\Gamma\cap\Gamma^{-1}}$ is unital with
$I_0=\{d\}$ since $t_d$ is its unit by Part (a) of Corollary \ref{subalgebra}.
\end{proof}
\begin{corollary}
\label{subregular base}
Let $(W,S)$ be a Coxeter system where $S$ is finite (this will be the case
for all Coxeter systems in this paper). Let $C, \Gamma_s, J_C$ and $J_s$ be
as before. Then
\begin{enumerate}
\item $J_C$ is a based ring with index set $I=C$, distinguished index set
$I_0=S$, and anti-involution induced by the map $^*: I\rightarrow I$ with
$x^*=x^{-1}$.
\item For each $s\in S$, $J_s$ is a based ring with index set
$I=\Gamma_s\cap\Gamma_s^{-1}$, distinguished index set
$I_0=\{s\}$, and anti-involution induced by the map $^*: I\rightarrow I$ with
$x^*=x^{-1}$.
\end{enumerate}
\end{corollary}
\begin{proof}
This is immediate from Proposition \ref{based structure} and Theorem
\ref{subregular} once we show
that for each $s\in S$, the unique distinguished involution in $\Gamma_s$ is
exactly $s$. So it suffices to show that $s\in \mathcal{D}$ for each $s\in S$.
This is well-known (we will also see this from the
proof of Corollary \ref{a=1}, where we show $\mathbf{a}(s)=\Delta(s)=1$ for
all $s\in S$).
\end{proof}
For future use, let us formulate the notion of an isomorphism of based rings.
Naturally, we define it to be a ring isomorphism that respects all the
additional defining structures of a based ring.
\begin{definition}[Isomorphism of Based Rings]
Let $A$ be a based ring with basis $\{t_i\}_{i\in I}$, index set $I$, distinguished index set
$I_0$ and anti-involution ${}^*$ induced by a map ${}^*: I\rightarrow I$, and
let $B$ be a based ring with basis $\{t_j\}_{j\in J}$, index set $J$, distinguished index set
$J_0$ and anti-involution ${}^*$ induced by a map ${}^*: J\rightarrow J$. We define an
\emph{isomorphism of based rings} from $A$ to $B$ to be a unit-preserving
ring isomorphism $\Phi: A\rightarrow B$ such that $\Phi(t_i)=t_{\phi(i)}$ for
all $i\in I$, where $\phi$ is a bijection from $I$ to $J$ such that
$\phi(I_0)=J_0$ and $\Phi(t_i^*)=(\Phi(t_i))^*$ for all $i\in I$.
\end{definition}
\section{Products in $J_C$}
The notations from the previous sections remain in force. In particular, we
assume $(W,S)$ is an arbitrary Coxeter system with $S=[n]$ for some
$n\in \mathbb{N}$, and we use
$C$ to denote the subregular cell.
In this section, we develop the tools to
study the algebra $J_C$. The notion of the \emph{dihedral segments} of a word plays a
central role. First, we use this notion to characterize elements in $C$,
and enumerate basis elements of $J_C$ as walks on certain
graphs. Second, we prove Theorem \hyperref[dihedral factorization]{F}, and reduce the study of a basis element $t_w$ in $J_C$ to only the basis
elements corresponding to its dihedral segments. Finally, we explain how
to use Theorem \hyperref[dihedral factorization]{F} to compute the products of
arbitrary basis elements in $J_C$. The next two sections of the paper
will depend heavily on combining our knowledge of the basis elements in $J_C$ and
our ability to compute their products.
\subsection{Dihedral segments}
\label{sec:dihedral}
We characterize the elements of the subregular cell $C$ in terms of their
reduced words. Since no simple reflection can appear consecutively in a reduced word
of any element in $W$, we make the following assumption.
\begin{assumption}
From now on, whenever we speak of a word in a Coxeter system,
we assume that no simple reflection appears consecutively in the word.
\end{assumption}
With this assumption, we may now define the dihedral segments of a word.
\begin{definition}[Dihedral segments]
\label{dihedral segments def}
For any word $x\in \ip{S}$, we define
the \emph{dihedral segments} of $x$ to be the maximal contiguous subwords of
$x$ involving two letters.
\end{definition}
\noindent For example, suppose $S=[3]$ and $x=121313123$; then
$x$ has dihedral segments $x_1=121, x_2=13131, x_3=12, x_4=23$. We may think of
breaking a word into its dihedral segments as a ``factorization'' process. The
process can be easily reversed, that is, we may recover a word from its dihedral
segments by taking a proper ``product''. This motivates the following
definition.
\begin{definition}[Glued product]
\label{glued product def}
For any two words
$x_1,x_2\in \ip{S}$
such that $x_1$ ends with the same letter that $x_2$ starts with, say
$x_1=\cdots st$ and $x_2=tu\cdots$, we define their \emph{glued product} to be the
word $x_1 * x_2:=\cdots stu\cdots$ obtained by concatenating $x_1$ and
$x_2$ then deleting one occurrence of the common letter.
\end{definition}
\noindent Note that the
operation $*$ is clearly associative. Further, if $x_1,x_2,\cdots, x_k$
are the dihedral segments of $x$, then
\begin{equation}
\label{eq:comb factorization}
x=x_1*x_2*\cdots * x_k.
\end{equation}
For example, with $x,x_1,x_2,x_3,x_4$ as before, we have
\[
x_1*x_2*x_3*x_4=121*13131*12*23=121313123=x.
\]
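Both the segmentation and the glued product are elementary string operations. As an illustration only (with words encoded as Python strings over hypothetical one-character letter names), a minimal sketch:

```python
from functools import reduce

def dihedral_segments(x):
    """Return the maximal contiguous subwords of x involving two letters.

    Consecutive segments overlap in one letter, mirroring the glued
    product below.
    """
    segments, i = [], 0
    while i < len(x):
        j = i + 1
        # extend the current segment while it involves at most two letters
        while j < len(x) and len(set(x[i:j + 1])) <= 2:
            j += 1
        segments.append(x[i:j])
        # the next segment starts at the last letter of this one
        i = j - 1 if j < len(x) else j
    return segments

def glued_product(x1, x2):
    """Concatenate x1 and x2, deleting one copy of the shared letter."""
    assert x1[-1] == x2[0], "x1 must end with the letter starting x2"
    return x1 + x2[1:]
```

For the word $121313123$ above, \texttt{dihedral\_segments} returns $121$, $13131$, $12$, $23$, and folding \texttt{glued\_product} over the segments recovers the word, as in Equation \ref{eq:comb factorization}.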
Clearly, the
dihedral segments of a word must alternate in two letters and take the form $sts\cdots$ for some $s,t\in S$. It is thus convenient to have the following notation.
\begin{definition}
\label{alternating words}
For $s,t\in S$ and $k\in \mathbb{N}$, let $(s,t)_k$ denote the alternating word
$sts\cdots$ of length $k$. In particular, take $(s,t)_0$ to be the empty word
$\emptyset$.
\end{definition}
Proposition
\ref{Matsumoto} now implies the following.
\begin{prop}[Subregular Criterion]
\label{subregular criterion}
Let $x\in \ip{S}$. Then $x$ is the reduced word of an element in $C$ if and
only if no letter in $S$ appears consecutively in $x$ and each dihedral segment
of $x$ is of the form $(s,t)_k$ for some $s,t\in S$ and $k<m(s,t)$.
\end{prop}
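The criterion is mechanical to check. The sketch below assumes the Coxeter matrix is given as a hypothetical symmetric Python dictionary \texttt{m} over pairs of one-character letters, with all entries finite:

```python
def _segments(x):
    # maximal contiguous subwords of x involving at most two letters
    segs, i = [], 0
    while i < len(x):
        j = i + 1
        while j < len(x) and len(set(x[i:j + 1])) <= 2:
            j += 1
        segs.append(x[i:j])
        i = j - 1 if j < len(x) else j
    return segs

def is_subregular(x, m):
    """Check: no letter repeats consecutively in x, and each dihedral
    segment (s,t)_k of x satisfies k < m(s,t)."""
    if any(x[i] == x[i + 1] for i in range(len(x) - 1)):
        return False
    for seg in _segments(x):
        letters = set(seg)
        if len(letters) == 2:
            s, t = letters  # m is symmetric, so the order is irrelevant
            if len(seg) >= m[(s, t)]:
                return False
    return True
```

With $m(1,2)=m(1,3)=3$ and $m(2,3)=4$, the word $12321$ passes the criterion, while $121$ (segment too long) and $2323$ fail.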
We can use this criterion to enumerate the elements of $C$.
\begin{definition}[Subregular graph]
\label{subregular graph}
Let $H, T:S^*\setminus\{\emptyset\}\rightarrow S$ be the functions that send any
nonempty word $w=s_1s_2\cdots s_k$ to its first and last letter $s_1$ and
$s_k$, respectively. Let $D=(V,E)$ be the directed graph such that
\begin{enumerate}
\item $V=\{(s,t)_k:
s,t\in S, 0< k <m(s,t)\}$,
\item $E$ consists of directed edges $(v,w)$ pointing
from $v$ to $w$, where
\begin{enumerate}
\item
either $v=(s,t)_{k-1}$ and $w=(s,t)_k$ for some $s,t\in S$, $1<k<
m(s,t)$,
\item or $v$ and $w$ are alternating words containing different sets of
letters, yet
$T(v)=H(w)$.
\end{enumerate}
\end{enumerate}
We call the graph $D$ the
\emph{subregular graph} of $(W,S)$. \end{definition}
Recall that a \emph{walk} on a directed graph is a sequence of vertices
$v_1,v_2,\cdots, v_k$ such that $(v_i,v_{i+1})$ is an edge for all $1\le i\le k-1$. Note that walks on $D$ correspond bijectively to elements
of $C$. Indeed, given any walk $v_1,v_2,\cdots,v_k$ on $D$, imagine we
successively write down the words $x_i=T(v_1)\cdots
T(v_i)$ as we traverse the walk. Then the vertex $v_i$ records
exactly the dihedral segment at the end of $x_i$, and traveling along an edge
of type (b) to $v_{i+1}$ corresponds to starting a new dihedral segment,
while traversing an edge of type (a) corresponds to extending the last dihedral
segment of the word $x_i$ by one more letter.
Under the bijection described above, it is also easy to see that for any
$s\in S$, the elements of
$\Gamma_{s}\cap\Gamma_{s}^{-1}$ correspond to
walks on the subregular graph $D$ that start at the vertex $v=(s)$
(an alternating word of length 1) and
end at a vertex $w$ with $T(w)=s$. Call such a walk an \emph{$s$-walk}. Such walks often involve
only a subset $V'$ of the vertex set $V$ of $D$, in which case they are exactly walks on
the subgraph of $D$ induced by $V'$. We denote this subgraph by $D_s$.
\begin{remark} When $m(s,t)<\infty$ for all $s,t\in S$, the vertex
sets of $D$ and $D_s$ ($s\in S$) are necessarily finite, hence $D$ and $D_s$
can be viewed as \emph{finite state automata} that recognize $C$ and
$\Gamma_s\cap\Gamma_s^{-1}$, respectively, in the sense of formal
languages (see \cite{automata}). \end{remark}
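When every $m(s,t)$ is finite, the vertex and edge sets of $D$ can be generated mechanically from the Coxeter matrix. A sketch (with the Coxeter matrix encoded as a hypothetical symmetric dictionary over one-character letters):

```python
from itertools import permutations

def subregular_graph(S, m):
    """Build the subregular graph D = (V, E) from Definition above.

    V: alternating words (s,t)_k with 0 < k < m(s,t);
    E: type (a) edges extend an alternating word by one letter,
       type (b) edges join words with different letter sets whose
       last and first letters agree.
    """
    def alt(s, t, k):
        # the alternating word sts... of length k
        return "".join(s if i % 2 == 0 else t for i in range(k))

    V = {alt(s, t, k)
         for s, t in permutations(S, 2)
         for k in range(1, m[(s, t)])}
    E = set()
    for s, t in permutations(S, 2):          # type (a) edges
        for k in range(2, m[(s, t)]):
            E.add((alt(s, t, k - 1), alt(s, t, k)))
    for v in V:                              # type (b) edges
        for w in V:
            if set(v) != set(w) and v[-1] == w[0]:
                E.add((v, w))
    return V, E
```

For $m(1,2)=m(1,3)=3$ and $m(2,3)=4$ this yields $11$ vertices: the three words of length one, the six words $st$, and $232$, $323$.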
\begin{example}
\label{odd example}
Let $(W,S)$ be the Coxeter system whose Coxeter diagram is the triangle in
Figure \ref{fig:1}. It is easy to see that $D_1$ should be the directed graph
on the right, and elements of $\Gamma_1\cap\Gamma_1^{-1}$ correspond to walks on
$D_1$ that start with the top vertex and end with either the bottom-left or
bottom-right vertex.
\begin{figure}[h!]
\caption{The Coxeter diagram of $(W,S)$ (left) and the graph $D_1$ (right).}
\label{fig:1}
\begin{center}
\begin{minipage}{0.4\textwidth}
\begin{tikzpicture}
\node (00) {};
\node (0) [right=1cm of 00] {};
\node[main node] [diamond] (1) [right = 1cm of 0] {};
\node[main node] (2) [below left = 1.6cm and 1cm of 1] {};
\node[main node] (3) [below right = 1.6cm and 1cm of 1] {};
\node (111) [above=0.1cm of 1] {$1$};
\node[main node] (2) [below left = 1.6cm and 1cm of 1] {};
\node (222) [left=0.1cm of 2] {$2$};
\node[main node] (3) [below right = 1.6cm and 1cm of 1] {};
\node (333) [right=0.1cm of 3] {$3$};
\path[draw]
(1) edge node {} (2)
edge node {} (3)
(2) edge node [below] {\small{4}} (3);
\end{tikzpicture}
\end{minipage}%
\begin{minipage}{0.7\textwidth}
\begin{tikzpicture}
\node (000) {};
\node[state node] [diamond] (4) [right=3cm of 000] {\small{$1$}};
\node[state node] (5) [below left = 0.9cm and 0.5cm of 4]
{\small{$12$}};
\node[state node] (6) [below right = 0.9cm and 0.5cm of 4]
{\small{$13$}};
\node[state node] (7) [below left = 0.9cm and 0.5cm of 5]
{\small{$23$}};
\node[state node] (8) [below right = 0.9cm and 0.5cm of 6]
{\small{$32$}};
\node[state node] [diamond] (9) [below left = 0.9cm and 0.5cm of 7]
{\small{$31$}};
\node[state node] (10) [below right = 0.9cm and 0.5cm of 7]
{\small{232}};
\node[state node] (11) [below left = 0.9cm and 0.5cm of 8]
{\small{323}};
\node[state node] [diamond] (12) [below right = 0.9cm and 0.5cm of 8]
{\small{$21$}};
\path[draw,-stealth,thick]
(4) -- (5);
\path[draw,-stealth,thick]
(4) -- (6);
\path[draw,-stealth,thick]
(5) -- (7);
\path[draw,-stealth,thick]
(6) -- (8);
\path[draw,-stealth,thick]
(7) -- (9);
\path[draw,-stealth,thick]
(7) -- (10);
\path[draw,-stealth,thick]
(8) -- (11);
\path[draw,-stealth,thick]
(8) -- (12);
\path[draw,thick]
(9) edge [bend left=60,-stealth,thick] (5);
\path[draw,semithick]
(12) edge [bend right=60,-stealth,thick] (6);
\path[draw,semithick]
(10) edge [bend right=60,-stealth,thick] (12);
\path[draw,semithick]
(11) edge [bend left=60,-stealth,thick] (9);
\end{tikzpicture}
\end{minipage}
\end{center}
\end{figure}
\end{example}
\subsection{$\mathbf{a}$-function characterization of $C$}
\label{sec:a-function}
We give yet another characterization of the subregular cell $C$, this time in
terms of the $\mathbf{a}$-function defined in \cref{sec:J}. To start, we recall some
properties of $\mathbf{a}$.
\vspace{-0.5em}
\begin{prop}[\cite{LG}, 13.7, 14.2] \label{a and cells}
Let $x,y\in W$. Then
\begin{enumerate}
\item $\mathbf{a}(x)\ge 0$, where
$\mathbf{a}(x)=0$ if and only if $x$ equals the identity element of $W$.
\item $\mathbf{a}(x)\le \Delta(x)$.
\item If $x\le_{LR} y$, then $\mathbf{a}(x)\ge \mathbf{a}(y)$.
Hence, if $x\sim_{LR} y$, then $\mathbf{a}(x)=\mathbf{a}(y)$.
\item If $x\le_{L} y$
and $\mathbf{a}(x)=\mathbf{a}(y)$, then $x\sim_L y$.
\item If $x\le_{R} y$ and
$\mathbf{a}(x)=\mathbf{a}(y)$, then $x\sim_R y$.
\item If $x\le_{LR} y$ and
$\mathbf{a}(x)=\mathbf{a}(y)$, then $x\sim_{LR} y$.
\end{enumerate}
\end{prop}
\begin{corollary}
\label{a=1}
$C=\{x\in W:\mathbf{a}(x)=1\}$.
\end{corollary}
\begin{proof}
Let $s\in S$. Then $\mathbf{a}(s)\ge 1$ by Part (1) of the proposition.
On the other hand, it is well known that $c_s= T_s+v^{-1}$ (\cite{LG}, $\S$ 5), therefore $\Delta(s)=1$ by the definition of
$\Delta$ and $\mathbf{a}(s)\le 1$ by part (2) of the proposition. It follows that
$\mathbf{a}(s)=1$. Since $s$ is clearly in $C$, Part (3) implies that $\mathbf{a}(x)=1$ for all
$x\in C$.
Now let $x\in W\setminus C$. Then either $x$ is the group identity and
$\mathbf{a}(x)=0$, or $x$ has a reduced expression $x=s_1s_2\cdots s_k$ with $k>1$
and each $s_i\in S$. In the latter case, $x\le_L s_k$ by Proposition \ref{KL basis
mult}, so $\mathbf{a}(x)\ge \mathbf{a}(s_k)=1$. Meanwhile, since $x\not\in C$,
$x\not\sim_{LR} s_k$, so $\mathbf{a}(x)\neq \mathbf{a}(s_k)$ by part (6) of Proposition
\ref{a and cells}. It follows that $\mathbf{a}(x)>1$, and we are done. \end{proof}
\noindent
The characterization leads to a shortcut for studying products in
$J_C$. To see how, consider the filtration
\begin{equation*}
\cdots\subset {H}_{\ge 2}\subset {H}_{\ge 1}\subset {H}_{\ge 0}={H}.
\end{equation*}
of the Hecke algebra ${H}$ where
\[
{H}_{\ge a}=\oplus_{x: \mathbf{a}(x)\ge a} \mathcal{A} c_x
\]
for each $a\in \mathbb{N}$. By parts (3)-(6) of Proposition \ref{a and cells} and
Proposition \ref{KL order and ideals}, this may be viewed as a filtration of
submodules when we view ${H}$ as its regular left
module. It induces the left modules
\begin{equation}
\label{eq:quotient}
{H}_a:={H}_{\ge a}/{H}_{\ge a+1},
\end{equation}
where ${H}_a$ is spanned by images of the elements $\{c_x: \mathbf{a}(x)=a\}$. In particular,
${H}_1$ is spanned by the images of $\{c_x:x\in C\}$. By the construction
of $J$, to compute a product $t_x\cdot t_y$ in $J_C$, it then suffices to consider
the product $c_x\cdot c_y$ in ${H}_1$. More precisely, we
have arrived at the following shortcut.
\begin{corollary}
\label{J_C shortcut}
Let $x,y\in C$. Suppose
\[
c_xc_y=\sum_{z\in W} h_{x,y,z} c_z
\]
for $h_{x,y,z}\in \mathcal{A}$. Then
\[
t_xt_y=\sum_{z\in T}\gamma_{x,y,z^{-1}} t_z
\]
in $J_C$, where $T=\{z\in
C: h_{x,y,z}\in n_z v+\mathbb{Z}[v^{-1}] \;\text{for some}\; n_z\neq 0\}$.
\end{corollary}
\noindent
The corollary plays a key role in the proof
of Lemma \ref{truncation}. A simple application of it reveals the
following, which we will use repeatedly in the next section.
\begin{corollary}
\label{head}
Let $x=s_1s_2\cdots s_k$ be the reduced word of an element in $C$. Then
\[
t_{s_1} t_x=t_x=t_{x}t_{s_k}.
\]
\end{corollary}
\begin{proof}
This follows immediately from Corollary \ref{J_C shortcut} and Proposition
\ref{KL basis mult}.
\end{proof}
\subsection{The dihedral factorization theorem}
\label{sec:dihedral factorization}
Recall the definition of dihedral segments from
\cref{sec:dihedral}. This subsection is dedicated to the proof of Theorem \ref{dihedral
factorization}. We restate it below.
\begin{namedtheorem}[F]{\rm (Dihedral factorization)}
Let $x$ be the reduced word of an element in $C$, and let $x_1,
x_2,\cdots, x_l$ be the dihedral segments of $x$. Then
\[
t_x=t_{x_1}\cdot t_{x_2} \cdot \cdots \cdot t_{x_l}.
\]
\end{namedtheorem}
It is convenient to have the following definition.
\begin{definition}(Dihedral elements)
We define a \emph{dihedral element} in $J_C$ to be a basis element of the
form $t_x$, where $x$ appears as a dihedral segment of some $y\in C$.
\end{definition}
\noindent In light of the definition, the theorem means that dihedral elements generate $J_C$.
The theorem also means that the combinatorial factorization
of $x$ into its dihedral segments in Equation \eqref{eq:comb factorization}
carries over to an algebraic one in $J_C$.
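The combinatorial factorization into dihedral segments is a simple scan over the word; a minimal Python sketch (our own illustrative encoding, with a reduced word given as a string of one-character generator labels):

```python
def segments(w):
    """Return the dihedral segments of a reduced word `w` (a string of
    one-character generator labels): maximal subwords alternating in two
    letters, with consecutive segments sharing one boundary letter."""
    segs, cur = [], w[0]
    for a in w[1:]:
        # the segment stays alternating iff `a` repeats the letter two back
        if len(cur) == 1 or a == cur[-2]:
            cur += a
        else:
            segs.append(cur)
            cur = cur[-1] + a  # new segment starts at the shared letter
    segs.append(cur)
    return segs
```

For instance, `segments("12321")` returns `["12", "232", "21"]`, the three dihedral segments overlapping in their boundary letters.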
To prove the theorem, we need to examine products in ${H}$ and apply
Corollary \ref{J_C shortcut}. To fully exploit the uniqueness of reduced
expressions of elements of $C$, we need the following well-known fact.
\begin{prop}[\cite{KL}, Statement 2.3.e]
\label{extremal}
Let $x,y\in W, s\in S$ be such that $x<y, sy<y, sx>x$.
Then $\mu(x,y)\neq 0$ if and only if $x=sy$; further, in this case, $\mu(x,y)=1$.
\end{prop}
\begin{lemma}
\label{truncation}
Let $x=s_1s_2s_3\cdots s_k$ be the reduced word of an element in $C$. Let
$x'=s_2s_3\cdots s_k$ and $x''=s_3\cdots s_k$ be the sequences obtained by
removing the first letter and first two letters from $x$, respectively. Then in ${H}_1$,
we have
\[
c_{s_1}c_{x'}=
\begin{cases}
c_{x''} &\quad\text{if}\; s_1\neq s_3;\\
c_{x}+c_{x''} & \quad\text{if}\; s_1=s_3.
\end{cases}
\]
\end{lemma}
\begin{proof}
By Proposition \ref{KL basis mult} and Corollary \ref{a=1}, in ${H}_1$ we have
\begin{equation*}
c_{s_1}c_{x'}=c_x+\sum_{z\in P}\mu(z,x')c_z
\end{equation*}
where $P=\{z\in C:s_1z<z<x'\}$. Let $z\in P$. Then $z$ has a unique
reduced expression that is a proper subword of $x'$ and starts with
$s_1$. Since $s_1\neq s_2$ now that $x$ is reduced, we have
$\mathcal{L}(z)=\{s_1\}$, therefore $s_2x'<x'$ while $s_2z>z$. Now, if
$l(z)<l(x')-1$, then $z\neq s_2x'$, so $\mu(z,x')=0$ by Proposition
\ref{extremal}. If $l(z)=l(x')-1$, then we must have
$s_3=s_1$ and $z=x''=s_2x'$, for otherwise $s_2\neq s_1, s_3\neq s_1$, and any
subword of $x'=s_2s_3\cdots s_k$ that starts with $s_1$ must have length
smaller than $l(x')-1$. This implies $\mu(z,x')=1$ by Proposition
\ref{extremal}. The lemma now follows. \end{proof}
We are ready to prove the theorem.
\begin{proof}[Proof of Theorem F]
We use induction on $l$. The base case where $l=1$ is trivially true. If
$l>1$, let $y$ be the glued product $y=x_2* x_3* \cdots* x_l$, so that by induction, it suffices to
show
\begin{equation}
\label{eq:w1}
t_x=t_{x_1}\cdot t_y.
\end{equation}
Suppose $y$ starts with some $t\in S$. Note that the construction of the
dihedral segments guarantees that $x_1$ contains at least two letters and is of the alternating form
$x_1=\cdots tst$ for some $s\in S$, while $x_2$, hence also $y$, is of the form
$tu\cdots$ for some $u\in S\setminus\{s,t\}$.
We prove Equation \eqref{eq:w1} by induction on the length $k=l(x_1)$ of
$x_1$. For the base case $k=2$, Proposition \ref{KL basis mult} and Lemma
\ref{truncation} imply that
\[
c_{x_1}c_{y}=c_{st}c_{tu\cdots}=c_sc_tc_{tu\cdots}=(v+v^{-1})c_{stu\cdots}=(v+v^{-1})c_{x_1*
y}
\]
in ${H}_1$. Equation \eqref{eq:w1} then follows by Corollary \ref{J_C shortcut}.
Now suppose $k>2$, write $x_1=s_1s_2s_3\cdots s_k$, and let $x_1'=s_2s_3\cdots
s_k$ and
$x_1''=s_3\cdots s_k$. Since the letters $s_1,s_2,\cdots, s_k$ alternate
between $s_1$ and $s_2$, Proposition \ref{KL basis mult} and Lemma
\ref{truncation} imply that
\[
c_{s_1s_2}\cdot
c_{x_1'}=c_{s_1}c_{s_2}c_{x_1'}=(v+v^{-1})c_{s_1}c_{x_1'}=(v+v^{-1})(c_{x_1}+c_{x_1''})
\]
and similarly
\[
c_{s_1s_2}\cdot
c_{x_1'* y}=(v+v^{-1})(c_{x_1* y}+c_{x_1''* y}).
\]
From the last two equations, it follows that
\begin{eqnarray*}
t_{s_1s_2}t_{x_1'}&=& t_{x_1}+t_{x_1''}\;,\\
t_{s_1s_2}t_{x_1'* y}&=& t_{x_1*
y}+t_{x_1''* y}\;,
\end{eqnarray*}
therefore
\[
t_{x_1}t_y=(t_{s_1s_2}t_{x_1'}-t_{x_1''})t_y=t_{s_1s_2}t_{x_1'*
y}-t_{x_1''* y}=t_{x_1* y}+t_{x_1''* y}-t_{x_1''*
y}=t_{x_1* y}=t_x,
\]
where the second equality holds by the inductive hypothesis now that
$l(x_1')<l(x_1)$. This completes our proof.
\end{proof}
\subsection{Products of dihedral elements}
\label{sec:dihedral products}
Now that Theorem \hyperref[dihedral factorization]{F} allows us to factor any basis
element in $J_C$ into dihedral elements, to understand products of basis
elements, it is natural to first study products of dihedral elements. We
do so now. Fortunately, for dihedral elements $t_x,t_y\in J_C$, the product
$c_xc_y$ of the corresponding KL basis elements is well understood in the
Hecke algebra, so the formula for $t_{x}t_{y}$ will be easy to derive.
Also, as we shall see in the next subsection, we only need to focus on the case where $x$ and $y$ are generated by the same set of two
simple reflections and $x$ ends with the same letter that $y$ starts with.
We need more notation. Fix $s,t\in S$. For any $k\in \mathbb{N}$, set
$s_k=sts\cdots$ to be the word that has length $k$, alternates in
$s,t$ and starts with $s$. Similarly, define ${}_k s=(s_k)^{-1}$ to be the
word of length $k$ that alternates in $s,t$ and ends with $s$. If $s_k$ ends with a letter
$u\in \{s,t\}$ and we wish to emphasize this fact, write $s_k u$ for $s_k$.
Similarly, if ${}_k s$ starts with $u\in \{s,t\}$, we may write $u_k s$ for
${}_k s$. Define the counterparts of all these words with $s$ replaced by
$t$ in the obvious way. The following fact is well-known.
\begin{prop}
\label{dihedral KL mult}
Let $M=m(s,t)$. Suppose $x=u_k s$ and $y=s_l u'$ for some $u,u'\in \{s,t\}$ and
$0<k,l<M$. For $d\in \mathbb{Z}$, let $\phi(d)=k+l-1-2d$.
Then
\[
c_xc_y=c_{u_k s}c_{s_l {u'}}=
(v+v^{-1})\sum\limits_{d=\max(k+l-M,0)}^{\min(k,l)-1}c_{u_{\phi(d)}
u'}+\varepsilon
\]
in ${H}$, where $\varepsilon= f\cdot c_{1_M}$ for some $f\in \mathcal{A}$ if
$M<\infty$ and $\varepsilon=0$
otherwise.
\end{prop}
By Corollary \ref{J_C shortcut}, this immediately yields the multiplication
formula below.
\begin{prop}
\label{d3}
Suppose $x={u}_k s$ and $y=s_l u'$ for some $u,u'\in \{s,t\}$ and $0< k,l <M$.
For $d\in \mathbb{Z}$, let $\phi(d)=k+l-1-2d$.
Then in $J_C$, we have
\[
t_xt_y=t_{u_k s}t_{s_l u'}=\sum\limits_{d=\max(k+l-M,0)}^{\min(k,l)-1}
t_{u_{\phi(d)}u'}.
\]
\end{prop}
\noindent The obvious counterparts of the propositions with $s$ replaced with
$t$ hold as well.
Let us decipher the formula from Proposition \ref{d3}. It says that the product
$t_xt_y$ is the linear combination of the terms $t_z$, all with coefficient 1,
where $z$ runs through the elements in $C$ whose reduced words begin with the
same letter as $x$, end with the same letter as $y$, and have lengths from the
list obtained in the following way: consider the list of numbers $\abs{k-l}+1,
\abs{k-l}+3, \cdots, k+l-1$ of the same parity, then delete from it all numbers
$r$ with $r\ge M$, as well as their mirror images with respect to the point
$M$, i.e., delete $2M-r$. Note that when $k=1$, this agrees with Corollary
\ref{head}.
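The deletion recipe just described can be cross-checked against the summation bounds of Proposition \ref{d3}; a short Python sketch (function names are ours) computes the list of surviving lengths both ways and confirms they agree:

```python
def lengths_rule(k, l, M):
    """Truncated Clebsch-Gordan rule: list |k-l|+1, |k-l|+3, ..., k+l-1,
    then delete every r >= M together with its mirror image 2M - r."""
    cand = list(range(abs(k - l) + 1, k + l, 2))
    bad = set()
    for r in cand:
        if r >= M:
            bad.update({r, 2 * M - r})
    return [r for r in cand if r not in bad]

def lengths_formula(k, l, M):
    """The same list read off the summation bounds of Proposition d3."""
    return sorted(k + l - 1 - 2 * d for d in range(max(k + l - M, 0), min(k, l)))

# the two descriptions agree for all small k, l, M
assert all(lengths_rule(k, l, M) == lengths_formula(k, l, M)
           for M in range(2, 12) for k in range(1, M) for l in range(1, M))
```

With $M=7$, `lengths_formula(4, 3, 7)` gives `[2, 4, 6]` and `lengths_formula(3, 6, 7)` gives `[4]`, matching the products in the example that follows.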
\begin{example}[Product of dihedral elements]
Let $s,t\in S$.
\begin{enumerate}[leftmargin=2em]
\label{dihedral 7}
\item
Suppose $m(s,t)=7, x=stst$ and $y=tst$.
Then by Proposition \ref{d3},
\[
t_xt_y=t_{st}+{t_{stst}}+{t_{ststst}}.\]
\item Suppose $m(s,t)=7, x=stst$ and $y=tsts$. Then by Proposition \ref{d3},
\[
t_xt_y=t_{s}+t_{sts}+t_{ststs}+\xcancel{t_{stststs}}=t_{s}+t_{sts}+t_{ststs}.
\]
\item Suppose $m(s,t)=7, x=tst$ and $y=tststs$. Then by Proposition \ref{d3},
\[
t_xt_y=t_{tsts}+\bcancel{t_{tststs}}+\cancel{t_{tstststs}}=t_{tsts}.
\]
\end{enumerate}
\end{example}
The rule we described before the example for obtaining the list of lengths of the $z$'s is
well-known; it is the \emph{truncated Clebsch-Gordan rule}. It governs the
multiplication of the basis elements of the \emph{Verlinde algebra of the
Lie group $SU(2)$}, which
appears as the Grothendieck ring of certain {fusion categories}
(see \cite{EK} and Section 4.10 of \cite{EGNO}).
Since it will cause no confusion, we will also refer to this algebra simply as
the \emph{Verlinde algebra}.
\begin{definition}[The Verlinde algebra, \cite{EK}]
\label{verlinde def}
Let $M\in \mathbb{Z}_{\ge 2}\cup\{\infty\}$. The \emph{$M$-th Verlinde algebra} is the
free abelian group $\mathrm{Ver}_M= \oplus_{1\le k\le M-1} \mathbb{Z}
L_k$, with multiplication defined by
\[
L_k L_l=\sum\limits_{d=\max(k+l-M,0)}^{\min(k,l)-1}
L_{k+l-1-2d}.
\]
We call the $\mathbb{Z}$-span of the elements $L_k$ where $k$ is an odd integer the
\emph{odd part} of $\mathrm{Ver}_M$, and denote it by
$\mathrm{Ver}_M^{\text{odd}}$.
\end{definition}
Note that by the multiplication formula, $\mathrm{Ver}_M^{\text{odd}}$ is
clearly a subalgebra of
$\mathrm{Ver}_M$.
Indeed, suppose $(W,S)$ is a dihedral system, say with $S=\{1,2\}$ and
$m(1,2)=M$ for some $M\in \mathbb{Z}_{\ge 2}\cup\{\infty\}$, then we claim that the
subalgebra $J_1$ of $J_C$ is isomorphic to $\mathrm{Ver}_M^{\text{odd}}$. To
see this, recall that $J_1$ is given by the $\mathbb{Z}$-span of all
$t_{1_k}$ where $k$ is odd, $0<k<M$, and $1_k$ is the alternating word
$121\cdots 1$ containing $k$ letters. Since the multiplication of such
basis elements is governed by the truncated Clebsch-Gordan rule in
Proposition \ref{d3}, the map $t_{1_k}\mapsto L_k$ induces an isomorphism.
Furthermore, it is easy to check that both $\mathrm{Ver}_M$ and
$\mathrm{Ver}_M^{\text{odd}}$ are unital based rings with $L_1$ as the unit
and with the identity map as the anti-involution, so this isomorphism is
actually an isomorphism of based rings. By a similar argument, $J_2$ is isomorphic to
$\mathrm{Ver}_M^{\text{odd}}$ as based rings as well. We discuss incarnations of $\mathrm{Ver}_M^{\text{odd}}$ for some small values of
$M$ below.
\begin{example}
Let $(W,S)$ be a dihedral system with $S=\{1,2\}$ and $M=m(1,2)$.
\begin{enumerate}[leftmargin=2em]
\item Suppose $M=5$.
Then
$J_1=\mathbb{Z} t_{1}\oplus
\mathbb{Z} t_{121}$, where $t_1$ is the unit and
\[
t_{121}t_{121}=t_1+t_{121},
\]
so $J_1$, hence $\mathrm{Ver}_5^{\text{odd}}$, is isomorphic to the
\emph{Yang--Lee fusion ring} (also known as the Fibonacci fusion ring) that
arises from the Yang--Lee model of statistical mechanics.
\item Suppose $M=6$. Then
$J_1=\mathbb{Z} t_{1}\oplus
\mathbb{Z} t_{121}\oplus \mathbb{Z} t_{12121}$, where $t_1$ is the unit and
\[
\hspace{2.5em} t_{121}t_{121}=t_1+t_{121}+t_{12121},\quad
t_{121}t_{12121}=t_{12121}t_{121}=t_{121},\quad
t_{12121}t_{12121}=t_{1}.
\]
On the other hand, the category $\mathcal{C}$ of complex
representations of the symmetric group $S_3$ has three non-isomorphic simple
objects $1$ (the trivial representation), $\chi$ (the sign representation) and
$V$ satisfying
\[
1\otimes \chi=\chi\otimes 1=\chi,\quad 1\otimes V=V\otimes 1=V,\]
\[V\otimes
V=1\oplus V\oplus \chi,\quad V\otimes \chi=\chi\otimes V=V,\quad \chi\otimes \chi=1,
\]
so $J_{1}$, hence $\mathrm{Ver}_6^{\text{odd}}$, is isomorphic to the Grothendieck
ring $\mathrm{Gr}(\mathcal{C})$ of $\mathcal{C}$.
\end{enumerate}
\end{example}
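The multiplication in Definition \ref{verlinde def} is immediate to implement for finite $M$; the Python sketch below (illustrative only, function name ours) computes products $L_kL_l$ and verifies the $M=6$ relations listed above under the correspondence $t_1\leftrightarrow L_1$, $t_{121}\leftrightarrow L_3$, $t_{12121}\leftrightarrow L_5$:

```python
from collections import Counter

def ver_mult(k, l, M):
    """Product L_k * L_l in the M-th Verlinde algebra (finite M),
    as a Counter mapping basis indices to coefficients."""
    return Counter(k + l - 1 - 2 * d for d in range(max(k + l - M, 0), min(k, l)))

# odd k and l only produce odd indices, so Ver_M^odd is closed
assert all(r % 2 == 1 for r in ver_mult(3, 5, 6))

# the M = 6 table above: t_121 <-> L_3, t_12121 <-> L_5
assert ver_mult(3, 3, 6) == Counter({1: 1, 3: 1, 5: 1})
assert ver_mult(3, 5, 6) == Counter({3: 1})
assert ver_mult(5, 5, 6) == Counter({1: 1})
```

The three asserted products are exactly the fusion rules of $\mathrm{Rep}(S_3)$ under $t_{121}\leftrightarrow V$ and $t_{12121}\leftrightarrow\chi$.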
\subsection{Products of arbitrary elements}
\label{sec:arbitrary products}
Let $x,y\in C$. We now describe the product $t_xt_y$ of two arbitrary basis
elements in $J_C$. For
convenience, let us make the following assumption.
\begin{assumption}
From now on, whenever we write $x\in C$,
we assume not only that $x$ is an element of the subregular cell, but also that
$x$ is the unique reduced word of the element.
\end{assumption}
Recall the
definition of $J_X$ for $X\subseteq W$ from Definition \ref{support} and $\Gamma_s$
($s\in S$) from the beginning of $\S$3.3. Here is a simple fact about
$t_xt_y$:
\begin{prop}
\label{start and end}
Let $a,b,c,d\in S$, let $x\in {\Gamma_a^{-1}\cap\Gamma_b}$, and let
$y\in \Gamma_{c}^{-1}\cap \Gamma_d$. Then $t_xt_y=0$ if $b\neq c$, and
$t_xt_y\in J_{\Gamma_{a}^{-1}\cap\Gamma_d}$ if $b=c$.
\end{prop}
\begin{proof}
Recall that for any $s\in S$, $\Gamma_s$ is a left KL cell in $W$ that
consists of the elements in $C$ whose reduced word ends
in $s$. Consequently, $\Gamma_s^{-1}$ is a right KL cell by Proposition
\ref{inverse and cells} and it consists of the elements in $C$ whose reduced
word starts with $s$. With these descriptions in hand, the statement follows from part (2) of Proposition
\ref{gamma and cells} in the following way. If $b\neq c$, then $x\in
\Gamma_b$ while $y^{-1}\in \Gamma_c$, so $x\not\sim_L y^{-1}$. This
implies $\gamma_{x,y,z^{-1}}=0$ for all
$z\in W$, therefore $t_xt_y=0$. If $b=c$, then for any $z\in W$ such that
$\gamma_{x,y,z^{-1}}\neq 0$, we must have $y\sim_L {z}$ and
$z^{-1}\sim_L x^{-1}$. The last condition implies $z\sim_R x$ by
Proposition \ref{inverse and cells}, so $z\in \Gamma_a^{-1}\cap
\Gamma_d$. It follows that $t_xt_y\in J_{\Gamma_a^{-1}\cap\Gamma_d}$.
\end{proof}
\begin{remark}The proposition may be interpreted as saying that for any basis
element $t_z$ that occurs in the product $t_xt_y$ (when the product is
nonzero), $z$ must start with the same letter as $x$ and end with the same
letter as $y$. This fact will be used later in \cref{sec:oddly-connected def}.
\end{remark}
For a more detailed description of $t_xt_y$, we discuss three cases. The first
case simply paraphrases the case $b\neq c$ in Proposition \ref{start and end}.
\begin{prop}
\label{d1}
Let $x,y\in C$. Suppose $x$ does not end
with the letter that $y$ starts with. Then $t_xt_y=0$.
\end{prop}
\begin{proof}
This is immediate from Proposition \ref{start and end}.
\end{proof}
The second case is also relatively simple.
\begin{prop}
\label{d2}
Let $x,y \in C$. Suppose $x$ ends with the letter that $y$ starts with, and
suppose that the last dihedral segment of $x$ and the first dihedral segment
of $y$ involve different sets of letters. Then $t_xt_y=t_{x * y}$.
\end{prop}
\begin{proof}
Let $x_1,\cdots, x_p$ and $y_1,\cdots, y_q$ be the dihedral segments of
$x$ and $y$, respectively. By the assumptions, $x_1,\cdots, x_p, y_1,\cdots,
y_q$ are exactly the dihedral segments of the glued product $x*y$, therefore
Theorem \hyperref[dihedral factorization]{F} implies
\[
t_xt_y=t_{x_1}\cdots t_{x_p} t_{y_1}\cdots t_{y_q}=t_{x_1 * \cdots
* x_p * y_1 * \cdots * y_q}=t_{x * y}.
\qedhere
\]
\end{proof}
For the third and most involved case, it remains to compute products of the form
$t_xt_y$ in $J_C$ where $x$ ends in the letter $y$ starts with and the last
dihedral segment $x_p$ of $x$ contains the same set of letters as the first dihedral
segment $y_1$ of $y$. By Theorem \hyperref[dihedral factorization]{F}, to
understand $t_{x} t_{y}$ we need to first understand $t_{x_p}t_{y_1}$. This
leads us to the configuration studied in Proposition \ref{d3}. We illustrate
below how we may combine Proposition \ref{d3} and Theorem \hyperref[dihedral factorization]{F}
to
compute $t_xt_y$.
\begin{example}[Product of arbitrary elements]
Suppose $S=\{1,2,3\}$ and $m(1,2)=4, m(1,3)=5, m(2,3)=6$.
\begin{enumerate}[leftmargin=2em]
\item Let $x=123, y=323213$. Then by Theorem \hyperref[dihedral
factorization]{F}
and Proposition \ref{d3},
\begin{eqnarray*}
t_xt_y&=& t_{12}t_{23}t_{3232}t_{21}t_{13}\\
&=& t_{12}(t_{232}+t_{23232})t_{21}t_{13}\\
&=& t_{12}t_{232}t_{21}t_{13}+t_{12}t_{23232}t_{21}t_{13}.
\end{eqnarray*}
Applying Theorem \hyperref[dihedral factorization]{F} again to the last expression, we
have
\[
t_{x}t_y=t_{123213}+t_{12323213}.
\]
\item Let $x=123, y=3213$. Repeated use of Theorem \hyperref[dihedral factorization]{F} and
Proposition \ref{d3} yields
\begin{eqnarray*}
t_xt_y&=& t_{12}t_{23}t_{32}t_{21}t_{13}\\
&=& t_{12}(t_{2}+t_{232})t_{21}t_{13}\\
&=& (t_{12}t_2) t_{21}t_{13}+t_{12}t_{232}t_{21}t_{13}\\
&=& (t_{12}t_{21})t_{13}+t_{12}t_{232}t_{21}t_{13}\\
&=& (t_{1}+t_{121})t_{13}+t_{12}t_{232}t_{21}t_{13}\\
&=& t_1t_{13}+t_{121}t_{13}+t_{12}t_{232}t_{21}t_{13}\\
&=& t_{13}+t_{1213}+t_{123213}.
\end{eqnarray*}
\end{enumerate}
\end{example}
The examples illustrate the general algorithm to compute the product
$t_xt_y$ in our third case. Namely, we first compute $t_{x_p}t_{y_1}$ and distribute the product
so as to write $t_xt_y$ as a linear combination of products of dihedral
elements. If such a
product has two consecutive factors corresponding to elements in the same dihedral group, use
Proposition \ref{d3} to compute the product of these two factors first, then
distribute the product to obtain a new linear combination. Repeat this process
until we have a linear combination of products where no consecutive factors
correspond to elements of the same dihedral group. This means the factors
appear as the dihedral segments of an element in $C$, so we may apply
Theorem \hyperref[dihedral factorization]{F} to each of the products and rewrite $t_xt_y$ as a linear combination of other basis
elements. Some Sage (\cite{sagemath}) code implementing this algorithm is available at \cite{mycode}.
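To make the procedure concrete, here is a rough Python sketch of the algorithm (our own illustrative implementation, independent of the Sage code in \cite{mycode}): reduced words are strings of generator labels, linear combinations of basis elements are `Counter`s, and the bond weights $m(s,t)$ are stored in a dictionary keyed by two-letter frozensets (finite weights only). The three branches of `mult` are the three cases of Propositions \ref{d1}, \ref{d2} and \ref{d3}:

```python
from collections import Counter
from functools import reduce

def segments(w):
    """Dihedral segments of a reduced word (a string of generator labels)."""
    segs, cur = [], w[0]
    for a in w[1:]:
        if len(cur) == 1 or a == cur[-2]:
            cur += a
        else:
            segs.append(cur)
            cur = cur[-1] + a
    segs.append(cur)
    return segs

def glue(x, y):
    """Glued product x * y: overlap the last letter of x with the first of y."""
    return x + y[1:]

def d3(x, y, M):
    """t_x t_y for dihedral words over the same two letters, with x ending
    in the letter y starts with, via the truncated Clebsch-Gordan rule."""
    k, l, u = len(x), len(y), x[0]
    letters = sorted(set(x) | set(y))
    v = u if len(letters) == 1 else (letters[1] if letters[0] == u else letters[0])
    out = Counter()
    for d in range(max(k + l - M, 0), min(k, l)):
        r = k + l - 1 - 2 * d  # a surviving length
        out[''.join(u if i % 2 == 0 else v for i in range(r))] += 1
    return out

def mult(x, y, m):
    """t_x t_y in J_C as a Counter of basis words; `m` maps frozensets of
    two generator labels to the (finite) bond weight m(s,t)."""
    if x[-1] != y[0]:
        return Counter()                       # first case: the product is 0
    sx, sy = segments(x), segments(y)
    if set(sx[-1]) != set(sy[0]):
        return Counter({glue(x, y): 1})        # second case: just glue
    ab = frozenset(sx[-1] + sy[0])
    M = m[ab] if len(ab) == 2 else 2           # t_s t_s = t_s needs no weight
    left = reduce(glue, sx[:-1]) if len(sx) > 1 else None
    right = reduce(glue, sy[1:]) if len(sy) > 1 else None
    total = Counter()
    for z, c in d3(sx[-1], sy[0], M).items():  # third case: expand, recurse
        term = Counter({z: c})
        for factor, on_left in ((left, True), (right, False)):
            if factor is None:
                continue
            new = Counter()
            for w, cw in term.items():
                prod = mult(factor, w, m) if on_left else mult(w, factor, m)
                for w2, c2 in prod.items():
                    new[w2] += cw * c2
            term = new
        total += term
    return total

# the running example: m(1,2) = 4, m(1,3) = 5, m(2,3) = 6
m = {frozenset("12"): 4, frozenset("13"): 5, frozenset("23"): 6}
assert mult("123", "3213", m) == Counter({"13": 1, "1213": 1, "123213": 1})
```

With the same weights, `mult("123", "323213", m)` reproduces the two terms $t_{123213}+t_{12323213}$ of the first example above.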
\begin{example}
\label{counter example}
Consider the algebra $J_1$ arising from the Coxeter system with the
following diagram.
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\node[main node] (1) {};
\node (11) [below=0.1cm of 1] {\small{1}};
\node[main node] (2) [right=1.5cm of 1] {};
\node (22) [below=0.1cm of 2] {\small{2}};
\node[main node] (3) [right=1.5cm of 2] {};
\node (33) [below=0.1cm of 3] {\small{3}};
\path[draw]
(1) edge node [above] {\small{4}} (2)
(2) edge node [above] {\small{4}} (3);
\end{tikzpicture}
\end{figure}
\end{example}
\noindent Let $x={121}$, $y=12321$, and let $y_n$ denote the glued product
$y*y*\cdots *y$ of $n$ copies of $y$ for each $n\in \mathbb{Z}_{\ge 1}$. It is easy to
see that
$\Gamma_1\cap\Gamma_1^{-1}$ consists exactly of 1, $x$ and all $y_n$ where
$n\ge 1$ so that $J_1$ has basis elements $t_1, t_x$ and $t_n$ ($n\ge 1$) where
we set $t_n:=t_{y_n}$ for all $n\ge 1$. One efficient way to see this is to
draw the subgraph $D_1$ of
the subregular graph of the system (see Definition \ref{subregular graph}
and the ensuing discussion) shown in Figure \ref{D1} and recall that elements of $\Gamma_1\cap\Gamma_1^{-1}$ are
in a bijection with the walks on $D_1$ which start at the top vertex and end at
one of the diamond-shaped vertices.
\begin{figure}[h!]
\label{D1}
\centering
\begin{tikzpicture}
\node[state node] [diamond] (1) {\small{$1$}};
\node[state node] (2) [below = 0.5cm of 1]
{\small{$12$}};
\node[state node] [diamond] (3) [below left = 0.5cm and 0.5cm of 2]
{\small{$121$}};
\node[state node] (4) [below right = 0.5cm and 0.5cm of 2]
{\small{$23$}};
\node[state node] (5) [below left = 0.5cm and 0.5cm of 4]
{\small{$232$}};
\node[state node] [diamond] (6) [below right = 0.5cm and 0.5cm of 5]
{\small{$21$}};
\node[state node] (7) [below right = 0.5cm and 0.5cm of 4]
{\small{$212$}};
\path[draw,-stealth,,thick]
(1) -- (2);
\path[draw,-stealth,,thick]
(2) -- (3);
\path[draw,-stealth,,thick]
(2) -- (4);
\path[draw,-stealth,,thick]
(4) -- (5);
\path[draw,-stealth,,thick]
(5) -- (6);
\path[draw,-stealth,,thick]
(6) -- (7);
\path[draw,-stealth,,thick]
(7) -- (4);
\end{tikzpicture}
\caption{The graph $D_1$}
\end{figure}
Let us describe the products of all pairs of basis element in $J_1$. First,
we have $t_1t_w=t_w=t_wt_1$ for each basis element $t_w\in J_1$, as
$t_1$ is the identity. For products involving $t_x$ but not $t_1$, Propositions
\ref{d2} and \ref{d3} imply that $t_xt_x=t_{121}t_{121}=t_1$,
while
\begin{equation}
\label{eq:check n}
t_{x}t_{n}=t_{121}t_{12321\cdots}=t_{121}t_{12}t_{2321*y_{n-1}}=t_{12}t_{2321*y_{n-1}}=t_{n}
\end{equation}
and similarly $t_nt_x=t_n$ for all $n\ge 1$ (where we set $y_0=1$). Finally, to
describe products of the form $t_mt_n$ where $m,n\ge 1$, set $t_0=t_1+t_x$.
Using computations similar to those in Equation \eqref{eq:check n}, we can
easily check that $t_{y_1}t_{y_n}=t_{n-1}+t_{n+1}$ for all $n\ge 1$, then show by induction on $m$ that
\begin{equation}
\label{eq:44}
t_mt_n=t_{\abs{m-n}}+t_{m+n}
\end{equation}
for all $m,n\ge 1$.
\section{$J_C$ and the Coxeter diagram}
\label{sec:diagram}
Let $(W,S)$ be an arbitrary Coxeter
system, and let $J_C$ be its subregular $J$-ring.
We study the relationship between $J_C$ and the Coxeter diagram of $(W,S)$ in
this section.
\subsection{Simply-laced Coxeter systems}
\label{sec:simply-laced}
Let us recall some graph-theoretic terminology. Let $G=(V,E)$ be an undirected graph.
Recall that just as in the directed case, a \emph{walk} on $G$ is a sequence $P=(v_1,\cdots ,v_k)$ of vertices in $G$
such that $\{v_i,v_{i+1}\}$ is an edge for all $1\le i\le
k-1$. We define a \emph{spur} on
$G$ to be a walk of the form $(v,v',v)$ where $\{v,v'\}$ forms an edge. Given any
walk containing a spur, i.e., a walk of the form $P_1=(\cdots, u, v,v',v,
u', \cdots)$, we
may {remove the spur} to form a new walk $P_2=(\cdots, u, v, u', \cdots)$;
conversely, we can add a spur $(v,v',v)$ to a walk of the form
$P_2$ to obtain the walk $P_1$.
Recall that a \emph{groupoid} may be
viewed as a generalization of a group, in that it is defined to be a pair
$(\mathcal{G},\circ)$, where $\mathcal{G}$ is a set and $\circ$ is a partially-defined binary
operation on $\mathcal{G}$ satisfying certain
axioms (see \cite{groupoid-def}). For example, for any topological space
$X$ and a chosen subset $A$ of $X$, the \emph{fundamental groupoid of
$X$ based on $A$} is defined to be $\Pi(X,A):=(\mathcal{P},\circ)$, where
$\mathcal{P}$ are the homotopy equivalence classes of paths on $X$ that
connect points in $A$ and $\circ$ is concatenation of paths. Given an
undirected graph $G=(V,E)$, we may view $G$ as embedded in a topological
surface and hence as a topological space with the subspace topology induced
from the surface. We define the \emph{fundamental groupoid of $G$} to be
$\Pi(G):=\Pi(G,V)=(\mathcal{P},\circ)$, where $\mathcal{P}$ stands for paths on
$G$.
Note that paths on $G$ are just walks,
and concatenation of paths corresponds to concatenation of walks. More precisely, for any two walks $P=(v_1,\cdots,v_{k-1}, v_k)$ and $Q=(u_1, u_2,\cdots,
u_l)$ on $G$, we define their \emph{concatenation} to be the
walk $P\circ Q = (v_1,\cdots, v_{k-1}, v_k, u_2,\cdots, u_l)$ if $v_k=u_1$;
otherwise we leave $P\circ Q$ undefined. Also note that two walks are
homotopy equivalent if and only if they can be obtained from each other by
a sequence of removals or additions of spurs, and each homotopy
equivalence class of walks contains a unique walk with no spurs. We use $[P]$
to denote the class of a walk $P$. For each path $P=(v_1,v_2,\cdots, v_k)$, we also
define its \emph{inverse} to be the walk $P^{-1}:=(v_k,\cdots,v_2,v_1)$.
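Since each homotopy class contains a unique spur-free walk, reduction to that normal form can be carried out in a single stack-based pass (any order of spur removals gives the same result); a small Python sketch, with names of our choosing:

```python
def remove_spurs(walk):
    """Reduce a walk (list of vertices) to the unique spur-free walk in its
    homotopy class by repeatedly deleting spurs (v, v', v)."""
    out = []
    for v in walk:
        if len(out) >= 2 and out[-2] == v:
            out.pop()   # appending v would create a spur; cancel it instead
        else:
            out.append(v)
    return out

assert remove_spurs([1, 2, 1]) == [1]        # a single spur collapses
assert remove_spurs([3, 2, 1, 2, 3]) == [3]  # nested spurs collapse in turn
```

Concatenating two spur-free walks and reducing with `remove_spurs` computes the product of their classes in the fundamental groupoid.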
For each vertex $s$ in $G$, we may similarly define the \emph{fundamental
group of $G$ based at $s$} to be $\Pi_s(G)=(\mathcal{P}_s,\circ)$, where
$\mathcal{P}_s$ are now equivalence classes of walks on $G$ that start and end
with $s$, and $\circ$ is concatenation as before. Note that $\Pi_s(G)$ is
actually a group, so it makes
sense to talk about its group algebra $\mathbb{Z}\Pi_s(G)$ over $\mathbb{Z}$. We
may define a counterpart of $\mathbb{Z}\Pi_s(G)$ for $\Pi(G)$ by mimicking the
construction of a group algebra.
\begin{definition}
\label{groupoid mult}
Let $\Pi(G)=(\mathcal{P},\circ)$ be the fundamental groupoid of a graph $G$.
We define the \emph{groupoid algebra} of $\Pi({G})$ over $\mathbb{Z}$ to be the
free abelian group $\mathbb{Z}\mathcal{P}=\oplus_{[P]\in \mathcal{P}} \mathbb{Z}[P]$ equipped with a
$\mathbb{Z}$-bilinear multiplication $\cdot$ defined by
\[
[P]\cdot [Q]=
\begin{cases}
[P\circ Q]\quad&\text{if}\;P\circ Q \;\text{is defined in $G$},\;
\\
0\quad&\text{if}\; P\circ Q\;\text{is not defined}.
\end{cases}
\]
\end{definition}
\noindent Note that $\mathbb{Z}\Pi(G)$ is clearly associative.
\begin{prop}
\label{groupoid base}
Let $G=(V,E)$ where $V$ is finite. Let $P_s$ be the constant walk $(s)$ for
all $s\in V$. Then the groupoid algebra $\mathbb{Z}\Pi(G)$ has the structure of a based ring with basis
$\{[P]\}_{[P]\in \mathcal{P}}$, with unit $1=\sum_{s\in V} [P_s]$ (so the
distinguished index set simply corresponds to $V$), and with its
anti-involution induced by the map $[P]\mapsto [P^{-1}]$. For each
$s\in V$, the group algebra $\mathbb{Z}\Pi_s(G)$ has the structure of a unital
based ring with basis $\{[P]\}_{[P]\in \mathcal{P}_s}$, with unit $1=[P_s]$
(so the distinguished index set is simply $\{s\}$), and with its
anti-involution induced by the map $[P]\mapsto [P^{-1}]$.
\end{prop}
\begin{proof}
All the claims are easy to check using definitions.
\end{proof}
Now, suppose $(W,S)$ is a
simply-laced Coxeter system, and let $G$ be its Coxeter diagram.
Recall that this means $m(s,t)=3$ for $s,t\in S$ whenever $\{s,t\}$ is an edge
in $G$ while $m(s,t)=2$ otherwise. Let us consider the map $C\rightarrow \Pi(G)$ which
sends each element $x=s_1\cdots s_k\in C$ to the homotopy equivalence class
$[P_x]$ of
the walk $P_x=(s_1,s_2,\cdots, s_k)$. We claim this is a bijection.
To see this, note that for each $1\le i\le k-1$, since $s_is_{i+1}$ appears
inside a dihedral segment of $x$, Proposition \ref{subregular criterion}
implies that $m(s_i,s_{i+1})>2$, therefore $\{s_i,s_{i+1}\}$ is an edge and
$P_x=(s_1,\cdots,s_k)$ is a walk on $G$. Further, we must have
$m(s_i,s_{i+1})=3$, so $s_{i+2}\neq s_i$ for all $1\le i\le k-2$ by
Proposition \ref{subregular criterion}, therefore
$P_x$ contains no spurs. This means $P_x$ is exactly the unique representative
with no spurs in its class. Conversely, given a class of walks in $\Pi(G)$, we
may take its unique representative $(s_1,\cdots,
s_k)$ with no spurs and consider the word $s_1\cdots s_k$. By Proposition
\ref{subregular criterion}, $s_1\cdots
s_k$ is the reduced word of an element in $C$. This gives a two-sided inverse to the map
$x\mapsto [P_x]$.
Since $C$ and $\mathcal{P}$ index the basis elements of
$J_C$ and $\mathbb{Z}\Pi(G)$, respectively, the bijection $x\mapsto [P_x]$ induces a unique $\mathbb{Z}$-module
isomorphism $\Phi: J_C \rightarrow \mathbb{Z}\Pi(G)$ defined by
\begin{equation}
\label{vs iso}
\Phi(t_{x})=[P_x],\qquad
\forall x\in C.
\end{equation}
We are now ready to prove Theorem \hyperref[simply-laced]{A}, which is restated below.
\begin{namedtheorem}[A]
Let $(W,S)$ be any simply-laced Coxeter system, and let $G$ be its Coxeter
diagram. Let $\Pi(G)$ be the fundamental groupoid of $G$, let
$\Pi_s(G)$ be the fundamental group of $G$ based at $s$ for any $s\in S$, let
$\mathbb{Z}\Pi(G)$ be the groupoid algebra of $\Pi(G)$, and let $\mathbb{Z}\Pi_s(G)$ be the
group algebra of $\Pi_s(G)$. Then $J_C\cong \mathbb{Z}\Pi(G)$ as based rings, and
$J_s\cong \mathbb{Z}\Pi_s(G)$ as based rings for all $s\in S$.
\end{namedtheorem}
\begin{proof}
We show that the $\mathbb{Z}$-module isomorphism $\Phi: J_C\rightarrow \mathbb{Z}\Pi(G)$ defined by
Equation \eqref{vs iso} is an algebra
isomorphism. This would imply $J_s\cong \mathbb{Z}\Pi_s(G)$ for all $s\in S$, since
$\Phi$ clearly restricts to a $\mathbb{Z}$-module map from $J_s$ to $\mathbb{Z} \Pi_s(G)$.
The fact that $\Phi$ and the restrictions are actually isomorphisms of based
rings will then be clear once we compare the based ring
structure of $J_C, \mathbb{Z}\Pi(G), J_s$ and $\mathbb{Z}\Pi_s(G)$ described in Corollary
\ref{subregular base} and
Proposition \ref{groupoid base}.
To show $\Phi$ is an algebra homomorphism, we need to show
\begin{equation}
\label{eq:groupoid hom}
[P_x]\cdot [P_y]=\Phi(t_xt_y)
\end{equation}
for all $x,y\in C$. Let $s_k\cdots s_1$ and
$u_1\cdots u_l$ be the reduced words of $x$ and $y$, respectively. If
$s_1\neq u_1$, then Equation \eqref{eq:groupoid hom} holds since both sides are
zero by Definition \ref{groupoid mult} and Proposition \ref{start and end}. If
$s_1=u_1$, let $q\le \min(k,l)$ be the largest integer such that $s_i=u_i$
for all $1\le i\le q$. Then
\begin{eqnarray*}
[P_x]\cdot [P_y]&=& [(s_k,\cdots, s_{q+1}, s_q, \cdots, s_1)\circ (s_1,\cdots,
s_q,u_{q+1},\cdots, u_l)]\\
&=& [(s_k,\cdots, s_{q+1}, s_q,\cdots, s_2,s_1, s_2,\cdots,
s_q,u_{q+1},\cdots, u_l)]\\
&=& [(s_k,\cdots, s_{q+1},s_q,u_{q+1},\cdots, u_l)],
\end{eqnarray*}
where the last equality holds by successive removal of spurs of the form
$(s_{i+1},s_i,s_{i+1})$. On the other hand, since $m(s_i,s_{i+1})=3$ for each
$1\le i\le q-1$, Proposition \ref{d3} implies
\begin{equation}
\label{eq:algebraic spur}
t_{s_{i+1}s_{i}}t_{s_is_{i+1}}=t_{s_{i+1}},\quad
t_{s_i}t_{s_is_{i+1}}=t_{s_is_{i+1}},
\end{equation}
therefore by Theorem \hyperref[dihedral factorization]{F},
\begin{eqnarray*}
t_xt_y&=& (t_{s_k\cdots s_{q+1}s_q} t_{s_qs_{q-1}}\cdots
t_{s_3s_2}t_{s_2s_1})(t_{s_1s_2}t_{s_2s_3}\cdots
t_{s_{q-1}s_q}t_{s_qu_{q+1}\cdots
u_{l}})\\
&=& (t_{s_k\cdots s_{q+1}s_q} t_{s_qs_{q-1}}\cdots
t_{s_3s_2})t_{s_2}(t_{s_2s_3}\cdots t_{s_{q-1}s_q}t_{s_qu_{q+1}\cdots
u_{l}})\\
&=& (t_{s_k\cdots s_{q+1}s_q} t_{s_qs_{q-1}}\cdots
t_{s_3s_2})(t_{s_2s_3}\cdots t_{s_{q-1}s_q}t_{s_qu_{q+1}\cdots
u_{l}})\\
&=& \cdots\\
&=& t_{s_k\cdots s_{q+1}s_q}t_{s_qu_{q+1}\cdots u_{l}}\\
&=& t_{s_k\cdots s_{q+1}s_qu_{q+1}\cdots u_{l}}.
\end{eqnarray*}
Here the last equality follows from Proposition \ref{d2}, and the ``$\cdots$''
signify repeated use of the equations in \eqref{eq:algebraic spur} to
``remove'' the products of the form $(t_{s_{i+1}}t_{s_{i}})(t_{s_i}t_{s_{i+1}})$
where $2\le i\le q-1$. By the definition of $\Phi$, we then have
\[
\Phi(t_xt_y)=[(s_k,\cdots, s_{q+1},s_q, u_{q+1},\cdots, u_l)].
\]
It follows that
$
[P_x]\cdot [P_y]=\Phi(t_xt_y),
$
and we are done.
\end{proof}
\subsection{Oddly-connected Coxeter systems}
\label{sec:oddly-connected def}
Define a Coxeter system
$(W,S)$ to be \emph{oddly-connected} if for every pair
of vertices $s,t$ in its Coxeter diagram $G$, there is a walk in $G$ of the form
$(s=v_1,v_2,\cdots, v_k=t)$ where the edge weight $m(v_{i},v_{i+1})$ is odd for all $1\le i\le k-1$. In this subsection,
we discuss how the odd-weight edges affect the structure of the algebras
$J_C$ and $J_s$ ($s\in S$).
We need some relatively heavy notation.
\begin{definition}
\label{transition}
For any $s,t\in S$ such that $M=m(s,t)$ is odd:
\begin{enumerate}
\item We define
\[
z({st})=sts\cdots t
\]
to be the alternating word of length $M-1$ that starts with
$s$. Note that it necessarily ends with $t$ now that $M$ is odd.
\item We define maps $\lambda_{s}^t, \rho_t^s: J_C\rightarrow J_C$ by
\[
\lambda_{s}^t(t_x)=t_{z({ts})}t_{x},\]
\[
\rho_s^t
(t_x)=t_xt_{z({st})},
\]
and define the map $\phi_{s}^t: J_C\rightarrow J_C$ by
\[
\phi_{s}^t(t_x)=\rho_{s}^t\circ\lambda_{s}^t (t_x)
\]
for all $x\in C$.
\end{enumerate}
\end{definition}
\begin{remark}
\label{match}
The notation above is set up in the following way. The letters $\lambda$
and $\rho$ indicate that a map multiplies its input by an element on the left and on the right, respectively. The subscripts and superscripts provide mnemonics for what the maps do on the reduced words indexing the basis elements of
$J_C$: by Proposition \ref{start and end},
$\lambda_{s}^t$ maps $J_{\Gamma_s^{-1}}$ to
$J_{\Gamma_{t}^{-1}}$ and vanishes on $J_{\Gamma_h^{-1}}$ for any $h\in
S\setminus\{s\}$.
Similarly, $\rho_{s}^t$ maps $J_{\Gamma_s}$ to
$J_{\Gamma_{t}}$ and vanishes on $J_{\Gamma_h}$ for any $h\in S\setminus\{s\}$.
\end{remark}
\begin{prop}
\label{odd edge}
Let $s,t$ be as in Definition \ref{transition}. Then
\begin{enumerate}
\item
$\rho_s^t\circ \lambda_s^t=\lambda_s^t\circ\rho_s^t$.
\item $\rho_{t}^s\circ\rho_{s}^t(t_x)=t_x$ for any $x\in\Gamma_s$,
$\lambda_t^s\circ\lambda_{s}^t(t_x)=t_x$ for any $x\in\Gamma_s^{-1}$.
\item
$\rho_{s}^t (t_x)\lambda_{s}^t(t_y)=t_xt_y$ for any $x\in \Gamma_s,y\in
\Gamma_s^{-1}$.
\item The restriction of
$\phi_s^t$ on $J_s$ gives an isomorphism of based rings from $J_s$ to $J_t$.
\end{enumerate}
\end{prop}
\begin{proof}
Part (1) holds since both sides of the equation send $t_x$ to
$t_{z(ts)}t_xt_{z(st)}$. Parts (2) and (3) are consequences of the truncated
Clebsch-Gordan rule. By the rule,
\[
t_{z(st)}t_{z(ts)}=t_s,
\]
therefore $
\rho_{t}^s\circ\rho_{s}^t(t_x)=t_xt_s=t_x
$
for any $x\in\Gamma_s$ and
$
\lambda_t^s\circ\lambda_{s}^t(t_x)=t_st_x=t_x
$
for any $x\in\Gamma_s^{-1}$; this proves (2). Meanwhile, $
\rho_{s}^t
(t_x)\lambda_{s}^t(t_y)=t_xt_{z(st)}t_{z(ts)}t_y=t_xt_st_y=t_xt_y
$ for any $x\in\Gamma_s, y\in \Gamma_{s}^{-1}$; this proves (3).
For part (4), the fact that $\phi_s^t$ maps $J_s$ to $J_t$ follows from Remark
\ref{match}. To see that it is a (unit-preserving) algebra homomorphism, note that
\[
\phi_s^t(t_s)=t_{z(ts)}t_{s}t_{z(st)}=t_{z(ts)}t_{z(st)}=t_t,
\]
and
for all $t_x, t_y\in
J_s$,
\[
\phi_s^t
(t_x)\phi_s^t(t_y)=\rho_s^t (\lambda_s^t(t_x))\cdot
\lambda_s^t(\rho_s^t(t_y))=\lambda_s^t(t_x)\cdot
\rho_s^t(t_y)
=\phi_{s}^t(t_xt_y)
\]
by parts (1) and (3).
We can similarly check $\phi_t^s$ is an algebra homomorphism from $J_t$ to
$J_s$. Finally, using
calculations similar to those used for part (2), it
is easy to check that $\phi_s^t$ and $\phi_{t}^s$ are mutual inverses, therefore $\phi_s^t$ is an algebra
isomorphism.
It remains to check that the restriction is an isomorphism of
based rings. In light of Proposition \ref{subregular base}, this means
checking that
$\phi_s^t(t_{x^{-1}})=(\phi_s^t(t_x))^*$ for each $t_x\in J_s$, where ${}^*$ is the linear map
sending $t_x$ to $t_{x^{-1}}$ for each $t_x\in J_s$. This holds because
\[
\phi_s^t(t_{x^{-1}})=t_{z(ts)}t_{x^{-1}}t_{z(st)}=(t_{z(st)^{-1}}t_{x}t_{z(ts)^{-1}})^*=(t_{z(ts)}t_xt_{z(st)})^*=(\phi_s^t(t_x))^*,
\]
where the second equality follows from the definition of ${}^*$ and the fact that $t_x\mapsto t_{x^{-1}}$ defines an
anti-homomorphism of $J$ (see Corollary \ref{J anti involution}).
\end{proof}
Now we upgrade the definitions and propositions from a single edge to a walk.
\begin{definition}
\label{lambdas and rhos}
For any walk $P=(u_1,\cdots, u_l)$ in $G$ where $m(u_k,u_{k+1})$ is odd
for all $1\le k\le l-1$, we define maps $\lambda_{P},\rho_P$ by
\[
\lambda_P=\lambda_{u_{l-1}}^{u_l}\circ \cdots \circ\lambda_{u_2}^{u_3}\circ
\lambda_{u_1}^{u_2},
\]
\[
\rho_P=\rho_{u_{l-1}}^{u_l}\circ \cdots \circ\rho_{u_2}^{u_3}\circ
\rho_{u_1}^{u_2},
\]
and define the map $\phi_P: J_C\rightarrow J_C$ by
\[
\phi_P=\lambda_P\circ\rho_P.
\]
\end{definition}
\begin{prop}
\label{odd walk}
Let $P=(u_1,\cdots, u_l)$ be as in Definition \ref{lambdas and rhos}. Then
\begin{enumerate}
\item $ \phi_P=\phi_{u_{l-1}}^{u_l}\circ\cdots\circ \phi_{u_2}^{u_3}\circ
\phi_{u_1}^{u_2}$.
\item $\rho_{P^{-1}}\circ\rho_{P}(t_x)=t_x$ for any
$x\in\Gamma_{u_1}$,
$\lambda_{P^{-1}}\circ\lambda_P(t_x)=t_x$ for any
$x\in\Gamma_{u_1}^{-1}$.
\item $\rho_{P} (t_x)\lambda_P(t_y)=t_{x}t_y$ for any $x\in
\Gamma_{u_1}, y\in \Gamma_{u_1}^{-1}$.
\item The restriction of $\phi_P$ gives an isomorphism of based rings from
$J_{u_1}$ to $J_{u_l}$.
\end{enumerate}
\end{prop}
\begin{proof}
Part (1) holds since each left multiplication $\lambda_{u_k}^{u_{k+1}}$
commutes with all right multiplications $\rho_{u_{k'}}^{u_{k'+1}}$. Parts
(2)-(4) can be proved by writing out each of the maps as a composition of
$(l-1)$ appropriate maps corresponding to the $(l-1)$ edges of $P$ and then repeatedly
applying their counterparts in Proposition \ref{odd edge} to the composition
components. In particular, part (4) holds since $\phi_P$, being a composition of
isomorphisms of based rings, is clearly another isomorphism of based rings.
We are almost ready to prove Theorem \hyperref[oddly-connected]{B}:
\begin{namedtheorem}[B]
Let $(W,S)$ be an oddly-connected Coxeter system. Then
\begin{enumerate}
\item $J_s\cong J_t$ as based rings for all $s,t\in S$.
\item $J_C\cong
\mathrm{Mat}_{S\times S}(J_s)$ as based rings for all $s\in S$. In particular,
$J_C$ is Morita equivalent to $J_s$ for all $s\in S$.
\end{enumerate}
\end{namedtheorem}
\noindent Here, for each fixed $s\in S$, the algebra $\mathrm{Mat}_{S\times
S}(J_s)$ is the matrix algebra of matrices with rows and columns indexed by $S$
and with entries from $J_s$. For any $a,b\in S$ and $f\in J_s$, let
$E_{a,b}(f)$ be the matrix in $\mathrm{Mat}_{S\times S} (J_s)$ with $f$ at the
$a$-row, $b$-column and zeros elsewhere. We explain how
$\mathrm{Mat}_{S\times S}(J_s)$ is a based ring below.
\begin{prop}
\label{odd base}
The ring
$\mathrm{Mat}_{S\times S}(J_s)$ is a based ring with basis
$\{E_{a,b}(t_x):a,b\in S, x\in \Gamma_s\cap\Gamma_s^{-1}\}$, with
unit element $1=\sum_{u\in S}E_{u,u}(t_s)$, and with its anti-involution
induced by $E_{a,b}(t_{x})^*=E_{b,a}(t_{x^{-1}})$.
\end{prop}
\begin{proof}
Note that for any $a,b,c,d\in S$ and $f,g\in J_s$,
\begin{equation}\label{eq:matrix mult}
E_{a,b}(f)E_{c,d}(g)=\delta_{b,c}E_{a,d}(fg).
\end{equation}
The fact that $\mathrm{Mat}_{S\times S}(J_s)$ is a unital $\mathbb{Z}_+$-ring with
$1=\sum_{u\in S}E_{u,u}(t_s)$ is then straightforward to check. Next, note that
\[
(E_{a,b}(f)E_{c,d}(g))^*=0=(E_{c,d}(g))^*(E_{a,b}(f))^*
\]
when $b\neq c$. When $b=c$,
\[
(E_{a,b}(t_x)E_{c,d}(t_y))^*= (E_{a,d}(t_xt_y))^*=
E_{d,a}(t_{y^{-1}}t_{x^{-1}})=(E_{c,d}(t_y))^*(E_{a,b}(t_x))^*
\]
where, as in the proof of Proposition \ref{odd edge}, the second equality
again follows from the fact that the map $t_x\mapsto t_{x^{-1}}$ induces
an anti-homomorphism of $J$. The last two displayed equations imply that
${}^*$ induces an anti-involution of $\mathrm{Mat}_{S\times S}(J_s)$. Finally,
note that $E_{u,u}(t_s)$ appears in
$E_{a,b}(t_x)E_{c,d}(t_y)=\delta_{b,c}E_{a,d}(t_xt_y)$ for some $u\in S$ if and only if
$b=c, a=d=u$ and $x=y^{-1}$ (for $t_s$ appears in $t_xt_y$ if and only if
$x=y^{-1}$; see Proposition \ref{subregular base}). This proves that
Equation \eqref{eq:based ring} from Definition \ref{based rings def} holds,
and we have completed all the
necessary verifications.
\end{proof}
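As an informal illustration outside the paper's formal development, the product rule \eqref{eq:matrix mult} can be modeled in a few lines of Python. Here a matrix unit $E_{a,b}(f)$ is stored as a triple, `None` stands for the zero product, and the ring multiplication is passed in as a parameter; in the example below the integers stand in for $J_s$.

```python
def mat_unit_product(E1, E2, ring_mult):
    """Product of 'matrix units' E1 = (a, b, f) and E2 = (c, d, g),
    modeling E_{a,b}(f) E_{c,d}(g) = delta_{b,c} E_{a,d}(f g).
    Returns None to represent the zero product when b != c."""
    (a, b, f), (c, d, g) = E1, E2
    if b != c:
        return None
    return (a, d, ring_mult(f, g))

# Illustration with the ring of integers standing in for J_s:
print(mat_unit_product(("a", "b", 2), ("b", "d", 3), lambda u, v: u * v))
# -> ('a', 'd', 6)
print(mat_unit_product(("a", "b", 2), ("c", "d", 3), lambda u, v: u * v))
# -> None
```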
\begin{proof}[Proof of Theorem \hyperref[oddly-connected]{B}]
Part (1) follows from the last part of Proposition \ref{odd walk}, since
there is a walk $(s=u_1,u_2,\cdots, u_l=t)$ in $G$ that
contains only odd-weight edges, as $(W,S)$ is oddly-connected.
To prove (2), fix $s\in S$. For each $t\in S$, fix a walk
$P_{st}=(s=u_1,\cdots, u_l=t)$ consisting of odd-weight edges, and define $P_{ts}=P_{st}^{-1}$. Write
$\lambda_{st}$ for $\lambda_{P_{st}}$, and define
$\rho_{st},\lambda_{ts},\rho_{ts}$ similarly.
Consider the
unique $\mathbb{Z}$-module map
\[
\Psi: J_C\rightarrow \mathrm{Mat}_{S\times S}(J_s)
\]
defined as follows:
for any $t_x\in J_C$, say $x\in\Gamma_a^{-1}\cap\Gamma_b$ for $a,b\in S$, let
\[
\Psi(t_x)=E_{a,b}(\lambda_{{a s}}\circ \rho_{{bs}} (t_x)).
\]
We first show below that $\Psi$ is an algebra isomorphism.
Let $t_{x},t_y\in J_C$. Suppose
$x\in\Gamma_a^{-1}\cap\Gamma_b$ and $y\in\Gamma_c^{-1}\cap\Gamma_d$ for
$a,b,c,d\in S$. If $b\neq c$, then
\[
\Psi(t_x)\Psi(t_y)=0=\Psi(t_xt_y),
\]
where the first equality follows from Equation \eqref{eq:matrix mult} and the second equality holds since $t_xt_y=0$ by Corollary
\ref{start and end}. If $b=c$, then
\begin{eqnarray*}
\Psi(t_x)\Psi(t_y)&=& E_{a,b}(\lambda_{{as}}\circ\rho_{bs}(t_x)) \cdot
E_{c,d}(\lambda_{{cs}}\circ\rho_{ds}(t_y))\\
&=& E_{a,d}([\lambda_{{as}}\circ\rho_{bs}(t_x)] \cdot
[\lambda_{{bs}}\circ\rho_{ds}(t_y)])\\
&=& E_{a,d}( (\lambda_{as}\circ\rho_{ds}) [\rho_{bs}(t_x)\cdot
\lambda_{bs}(t_y)])\\
&=& E_{a,d}( (\lambda_{as}\circ\rho_{ds}) [t_xt_y]) \\
&=& \Psi(t_xt_y),
\end{eqnarray*}
where the second-to-last equality holds by part (3) of Proposition \ref{odd walk}.
It follows that $\Psi$ is an algebra homomorphism. Next, consider the map
\[
\Psi':\mathrm{Mat}_{S\times S}(J_s)\rightarrow J_C
\]
defined by
\[
\Psi'(E_{a,b}(f))=\lambda_{sa}\circ\rho_{sb}(f)
\]
for all $a,b\in S$ and $f\in J_s$. Using Part (2) of Proposition \ref{odd
walk}, it is easy to check that $\Psi$ and $\Psi'$ are mutual inverses as maps
of sets. It follows that $\Psi$ is an algebra isomorphism. Finally, it is
easy to compare Proposition \ref{subregular base} with Proposition \ref{odd
base} and check that $\Psi$ is an isomorphism of based rings by direct
computation.
\end{proof}
\begin{remark}
The conclusions of the theorem fail in general when $(W,S)$ is not
oddly-connected. As a counterexample, consider the based rings $J_1$ and
$J_2$ arising from the Coxeter system in Example
\ref{counter example}.
By the truncated Clebsch-Gordan rule,
\[
t_{212}t_{212}=t_2=t_{232}t_{232},
\]
therefore $J_2$ contains at least two basis elements with multiplicative order
2. However, it is evident from Example \ref{counter example} that $t_{121}$ is
the only basis element of order 2 in $J_1$. This implies that $J_1$ and $J_2$ are not
isomorphic as based rings. Moreover, Equation \eqref{eq:matrix mult} implies
that for any
$s\in S$, the basis elements of $\mathrm{Mat}_{S\times S}(J_s)$ of order 2 must be of the
form $E_{u,u}(t_{x})$ where $u\in S$ and $t_x$ is a basis element of order 2 in $J_s$, so $\mathrm{Mat}_{S\times S}(J_1)$ and $\mathrm{Mat}_{S\times
S}(J_2)$ have different numbers of basis elements of order 2 as well. It follows that
Part (2) of the theorem also fails.
\end{remark}
\begin{remark}
The isomorphism between $J_s$ and $J_t$ can be easily lifted to a tensor
equivalence between their categorifications $\mathcal{J}_s$ and
$\mathcal{J}_t$, the subcategories of the category $\mathcal{J}$ mentioned in the introduction
that correspond to $\Gamma_s\cap\Gamma_s^{-1}$ and
$\Gamma_t\cap\Gamma_t^{-1}$.
\end{remark}
Let us end the section by revisiting an earlier example.
\begin{example}
Let $(W,S)$ be the Coxeter system from Example \ref{odd example}, whose Coxeter
diagram and subregular graph are shown again below.
\begin{figure}[h!]
\begin{center}
\begin{minipage}{0.4\textwidth}
\begin{tikzpicture}
\node (00) {};
\node (0) [right=1cm of 00] {};
\node[main node] [diamond] (1) [right = 1cm of 0] {};
\node (111) [above=0.1cm of 1] {$1$};
\node[main node] (2) [below left = 1.6cm and 1cm of 1] {};
\node (222) [left=0.1cm of 2] {$2$};
\node[main node] (3) [below right = 1.6cm and 1cm of 1] {};
\node (333) [right=0.1cm of 3] {$3$};
\path[draw]
(1) edge node {} (2)
edge node {} (3)
(2) edge node [below] {\small{4}} (3);
\end{tikzpicture}
\end{minipage}%
\begin{minipage}{0.7\textwidth}
\begin{tikzpicture}
\node (000) {};
\node[state node] [diamond] (4) [right=3cm of 000] {\small{$1$}};
\node[state node] (5) [below left = 0.9cm and 0.5cm of 4]
{\small{$12$}};
\node[state node] (6) [below right = 0.9cm and 0.5cm of 4]
{\small{$13$}};
\node[state node] (7) [below left = 0.9cm and 0.5cm of 5]
{\small{$23$}};
\node[state node] (8) [below right = 0.9cm and 0.5cm of 6]
{\small{$32$}};
\node[state node] [diamond] (9) [below left = 0.9cm and 0.5cm of 7]
{\small{$31$}};
\node[state node] (10) [below right = 0.9cm and 0.5cm of 7]
{\small{$232$}};
\node[state node] (11) [below left = 0.9cm and 0.5cm of 8]
{\small{$323$}};
\node[state node] [diamond] (12) [below right = 0.9cm and 0.5cm of 8]
{\small{$21$}};
\path[draw,-stealth,thick]
(4) -- (5);
\path[draw,-stealth,thick]
(4) -- (6);
\path[draw,-stealth,thick]
(5) -- (7);
\path[draw,-stealth,thick]
(6) -- (8);
\path[draw,-stealth,thick]
(7) -- (9);
\path[draw,-stealth,thick]
(7) -- (10);
\path[draw,-stealth,thick]
(8) -- (11);
\path[draw,-stealth,thick]
(8) -- (12);
\path[draw,thick]
(9) edge [bend left=60,-stealth,thick] (5);
\path[draw,semithick]
(12) edge [bend right=60,-stealth,thick] (6);
\path[draw,semithick]
(10) edge [bend right=60,-stealth,thick] (12);
\path[draw,semithick]
(11) edge [bend left=60,-stealth,thick] (9);
\end{tikzpicture}
\end{minipage}
\end{center}
\end{figure}
\noindent Clearly, $(W,S)$ is \emph{oddly-connected}, hence $J_3\cong J_2\cong
J_1$
and $J_C\cong \mathrm{Mat}_{3\times 3}(J_1)$ by Theorem
\hyperref[oddly-connected]{B}. Let us study $J_1$. Recall that elements of
$\Gamma_1\cap\Gamma_{1}^{-1}$ correspond to walks on the subregular graph
that start at the top vertex and end at either the bottom-left or the
bottom-right vertex. Observe that all such walks can be obtained by
concatenating the walks corresponding to the elements
$x=1231, y=1321, z=12321, w=13231$. This means that any reduced word in
$\Gamma_1\cap\Gamma_1^{-1}$ can be written as glued products of $x,y,z,w$,
which implies that $t_x,t_y,t_z,t_w$ generate $J_1$ by Theorem
\hyperref[dihedral factorization]{F} and
Proposition \ref{d2}. Computing the
products of these elements reveals that $J_1$ can be described as
the algebra generated by $t_x, t_y, t_z, t_w$ subject to the following
six relations:
\[
t_xt_y=1+t_z, \,t_yt_x=1+t_w,\, t_xt_w=t_x=t_zt_x,\, t_yt_z=t_y=t_wt_y, \,
t_{w}^2=1=t_{z}^2.
\]
The first two of the relations show that $t_z=t_xt_y-1, t_{w}=t_yt_x-1$,
whence the other four relations can be expressed in terms of
only $t_x$ and $t_y$. Easy calculations then show that $J_1$ can be
presented as the algebra generated by $t_x, t_y$ subject to only the
following two
relations:
\[
t_xt_yt_x=2t_x,\, t_yt_xt_y=2t_y.
\]
Finally, via the change of variables $X:={t_x}/{2}, Y:=t_y$, we see that
\[
J_1=\langle X, Y\rangle/ \langle XYX=X, YXY=Y\rangle.
\]
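As an informal sanity check (ours, not part of the argument), the relations $XYX=X$ and $YXY=Y$ admit a concrete nonzero solution given by the $2\times 2$ matrix units $X=E_{12}$, $Y=E_{21}$, which a few lines of Python confirm:

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# The matrix units E_{12} and E_{21} satisfy the defining relations.
X = [[0, 1], [0, 0]]
Y = [[0, 0], [1, 0]]
assert matmul(matmul(X, Y), X) == X   # XYX = X
assert matmul(matmul(Y, X), Y) == Y   # YXY = Y
```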
A simple presentation like this is helpful for studying representations of
$J_1$ and hence $J_2, J_3$ and $J_C$.
\end{example}
\subsection{Fusion $J_s$}
\label{sec:fusion J}
In this subsection, we describe all fusion rings that arise in the form $J_s$ from a Coxeter system.
Recall from Definition \ref{fusion def} that a fusion ring is a unital based
ring of finite rank, so the algebra $J_s$ is a fusion
ring if and only if $\Gamma_{s}\cap\Gamma_{s}^{-1}$ is finite. It is
easy to describe when this happens using Coxeter diagrams.
\begin{prop}
Let $(W,S)$ be an irreducible Coxeter system. Then
$\Gamma_s\cap\Gamma_s^{-1}$ is finite for some $s\in S$ if and only if
$\Gamma_s\cap\Gamma_s^{-1}$ is finite for all $s\in S$. Moreover, both of
these conditions are met
if and only if
the Coxeter graph
$G$ of
$(W,S)$ is a tree, no edge of $G$ has weight $\infty$, and at most one
edge of $G$ has weight greater than 3. \end{prop}
\begin{proof}
Since $(W,S)$ is irreducible, $G$ is connected. The condition that $G$ is a
tree is then equivalent to the condition that $G$ contains no cycle. Let
$D$ be the subregular graph of $(W,S)$. We need to show that one can find infinitely many $s$-walks on
$D$ for some $s\in S$ exactly when
$G$ contains a cycle, an edge of weight $\infty$, or more than one edge of weight
greater than 3, exactly when we can find infinitely many $s$-walks on $D$ for all $s\in S$. This is a routine and straightforward
graph theory problem, and we omit the details.
\end{proof}
We can now deduce Theorem
\hyperref[fusion J]{C}.
\begin{namedtheorem}[C]
Let $(W,S)$ be a Coxeter system, and suppose $J_s$ is a fusion
ring for some $s\in S$. Then there exists a dihedral Coxeter system
$(W',S')$ such that $J_s\cong
J_{s'}$ as based rings for either $s'\in S'$.
\end{namedtheorem}
\begin{proof}
Let $G$ be the Coxeter diagram of
$(W,S)$, and suppose $J_s$ is a fusion ring for some $s\in S$. Then
$\Gamma_s\cap\Gamma_s^{-1}$ is finite, hence $G$ must be as described in
the previous proposition, that is, either $G$ is a tree and $(W,S)$ is simply-laced, or
$G$ is a tree and there exists a unique pair $a,b\in S$ such that
$m(a,b)>3$.
In the first case where $(W,S)$ is simply-laced, $J_s$ is isomorphic to the group algebra of the fundamental group $\Pi_s(G)$ by Theorem
\ref{simply-laced}, and the group is
trivial since $G$ is a tree. This means $J_s$ is isomorphic to a ring of the
form $J_{s'}$ associated with the dihedral system $(W',S')$ with $S'=\{s',t'\}$
and $m(s',t')=3$. In the second case, let $m(a,b)=M$. By the description of
$G$, there must be a walk $P$ in $G$ from
$s$ to either $a$ or $b$ such that all the edges in the walk have weight 3,
so Part (4) of Proposition \ref{odd walk} implies that $J_s$ is isomorphic to
either $J_a$ or $J_b$ as based rings. Without loss of generality, suppose
$J_s\cong J_a$. We claim that $\Gamma_a\cap \Gamma_{a}^{-1}$ contains exactly
the elements $a, aba,\cdots, ab\cdots a$ whose reduced words alternate in
$a,b$ and contain fewer than $M$ letters. This would mean that $J_s$ is
isomorphic as a based ring to the fusion ring $J_{s'}$ associated with the
dihedral system $(W',S')$ with $S'=\{s',t'\}$ where $m(s',t')=M$.
It remains to prove the claim. Recall that any element
$x=s_1\cdots s_k \in
\Gamma_a\cap\Gamma_{a}^{-1}$ corresponds to a walk $(a=s_1,s_2,\cdots, s_k=a)$
on $G$. Since $G$ is a tree, the walk must be the
concatenation of walks $P_{at}$ that start with $a$, traverse to a vertex
$t\in S$ via the unique path from $a$ to $t$, and then come back via the
inverse path to
$a$, i.e., $P_{at}=(a=s_1,\cdots, s_{k-1},s_k=t,s_{k-1},\cdots, s_1)$. The
spur $(s_{k-1},t,s_{k-1})$ in the walk means $s_{k-1}ts_{k-1}$ appears in $x$, so
$m(t,s_{k-1})> 3$ by Proposition \ref{subregular criterion}, hence $t$ must be
$a$ or $b$. The claim follows.
\end{proof}
\begin{remark}
Recall from \cref{sec:dihedral products} that any algebra of the form $J_s$ arising from a dihedral Coxeter
system is isomorphic to the odd part $\mathrm{Ver}_M^{\mathrm{odd}}$ of a
Verlinde algebra, where $M\in \mathbb{Z}_{\ge
2}\cup\{\infty\}$. Thus, the theorem
means that any fusion ring of the form $J_s$ arising from any Coxeter system
$(W,S)$ is isomorphic to $\mathrm{Ver}_M^{\mathrm{odd}}$ for some $M$ as well.
Moreover,
the proof of the theorem reveals that $M$ can be described simply as the
largest edge weight in the Coxeter diagram of $(W,S)$.
\end{remark}
\section{Free fusion rings}
\label{sec:free fusion rings}
In this section we focus on certain Coxeter systems $(W,S)$ whose Coxeter
diagrams involve edges of weight $\infty$. We show that for a suitable
choice of $s\in S$, the ring $J_s$ is isomorphic to a \emph{free fusion ring}.
\subsection{Background}
\label{sec:background}
Free fusion rings are defined as follows.
\begin{definition}[\cite{Raum}]
\label{ffr}
A \emph{fusion set} is a set $A$ equipped with an {involution}
\,$\bar{}: A\rightarrow A$ and a \emph{fusion} map $\circ: A \times A \rightarrow A\cup
\emptyset$. Given any fusion set $(A,\,\bar{}\,,\circ)$, we extend the operations
$\,\bar{}\,$ and $\circ$ to the free monoid $\langle A \rangle$ as follows:
\[
\overline{a_1\cdots a_k}=\bar a_k\cdots \bar a_1,
\]
\[ (a_1\cdots a_k)\circ
(b_1\cdots b_l)=a_1\cdots a_{k-1}(a_k\circ b_1)b_2\cdots b_l,
\]
where the right side of the last equation is taken to be $\emptyset$
whenever $k=0, l=0$ or $a_k\circ b_1=\emptyset$.
We then define the \emph{free fusion ring} associated with the fusion set
$(A,\,\bar{}\,,\circ)$ to be the free abelian group $R=\mathbb{Z}\langle
A \rangle$ on $\langle A \rangle$, with multiplication
$\cdot: R\times R\rightarrow R$ given by
\begin{equation}\label{eq:ffr}
v\cdot w=\sum_{v=xy,\,w=\bar yz} (xz+x\circ z)
\end{equation}
for all $v, w\in \ip{A}$, where $xz$ means the juxtaposition of $x$ and $z$.
\end{definition}
\noindent It is well known that $\cdot$ is associative (see \cite{Raum}). It is
also easy to check that $R$ is always a unital based ring with its basis given
by the free monoid $\ip{A}$, with unit given by the empty word, and
with its anti-involution ${}^*:\ip{A}\rightarrow\ip{A}$ given by the map $\,\bar{}\,$.
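The multiplication rule \eqref{eq:ffr} is completely combinatorial, so it can be sketched directly in Python. The helper below is an informal illustration of our own: words are tuples of letters, `bar` is the involution, and `fuse` maps a pair of letters to their fusion, with missing pairs standing for $\emptyset$.

```python
def ffr_mult(v, w, bar, fuse):
    """Multiply two words v, w (tuples of letters) in the free fusion ring:
    v . w = sum over splittings v = x y, w = bar(y) z of the terms xz and x o z."""
    terms = []
    for i in range(len(v) + 1):
        x, y = v[:i], v[i:]
        ybar = tuple(bar[c] for c in reversed(y))   # bar(a1...ak) = bar(ak)...bar(a1)
        if w[:len(y)] == ybar:
            z = w[len(y):]
            terms.append(x + z)                     # the juxtaposition term xz
            if x and z:                             # x o z vanishes if x or z is empty...
                f = fuse.get((x[-1], z[0]))
                if f is not None:                   # ...or if the letters fuse to the empty set
                    terms.append(x[:-1] + (f,) + z[1:])
    return terms

# Singleton fusion set A = {a}: identity involution, a o a = empty set.
bar, fuse = {"a": "a"}, {}
terms = ffr_mult(("a", "a"), ("a",), bar, fuse)
# terms contains exactly a^1 and a^3, matching a^2 . a = a^3 + a from Example 1.
```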
Free fusion rings were introduced in \cite{Banica} to capture the
tensor rules in certain semisimple tensor categories arising from the theory
of operator algebras. More specifically,
the categories are categories of representations of \emph{compact quantum
groups}, and their Grothendieck rings
fit the axiomatization of free fusion rings in Definition \ref{ffr}. In
\cite{Freslon-1}, A. Freslon classified all free
fusion rings arising as the Grothendieck rings of compact quantum groups in terms of their underlying
fusion sets. Further, while a free fusion ring may appear as the
Grothendieck ring of multiple non-isomorphic compact quantum groups,
Freslon described a canonical way to associate a \emph{partition
quantum group}---a special type of compact quantum group---to any free fusion
ring arising from a compact quantum group. These special quantum groups
correspond via a type
of Schur-Weyl duality to \emph{categories of non-crossing partitions}, which
can in turn be used to study the representations of the quantum
groups.
All the free fusion rings appearing as $J_s$ in our
examples fit in the classification of \cite{Freslon-1}. In each of our
examples, we will identify the associated
partition quantum group $\mathbb{G}$. The fact that $J_s$ is connected to
$\mathbb{G}$ is intriguing, and it
would be interesting to see how the categorification of $J_s$ arising from
Soergel bimodules connects to the representations of $\mathbb{G}$ on the
categorical level.
\subsection{Example 1: ${O_N^+}$}
\label{sec:example1}
One of the simplest fusion sets is the singleton set $A=\{a\}$ with the identity
as its involution and with fusion map $a\circ a=\emptyset$. The associated free fusion
ring is $R=\oplus_{n\in \mathbb{Z}_{\ge 0}} \mathbb{Z} a^n$, where
\[
a^k\cdot a^l=a^{k+l}+a^{k+l-2}+\cdots +a^{\abs{k-l}}
\]
by Equation \eqref{eq:ffr}. The partition quantum group associated to
$R$ is the \emph{free orthogonal quantum group} $O_N^+$, and its
corresponding category of partitions is that of all noncrossing
\emph{pairings} (\cite{orthogonal}).
Let $(W,S)$ be the infinite dihedral system with $S=\{1,2\}$ and
$W=I_2(\infty)$, the infinite dihedral group. We claim that $J_1$ is
isomorphic to $R$ as based rings. To see this, recall from the discussion
following Definition \ref{verlinde def} that $J_1$ is the $\mathbb{Z}$-span of basis
elements $t_{1_n}$, where $n$ is odd and $1_n=121\cdots 1$ alternates in $1,2$
and has length $n$. For $m=2k+1$ and $n=2l+1$ for some $k,l\ge 0$, the
truncated Clebsch-Gordan rule implies that \[ t_{1_m}\cdot
t_{1_n}=t_{1_{2k+1}}t_{1_{2l+1}}=t_{1_{2(k+l)+1}}+t_{1_{2(k+l-1)+1}}+\cdots
+t_{1_{2\abs{k-l}+1}}. \] It follows that $R\cong J_1$ as based rings via the
unique $\mathbb{Z}$-module map with $a^k\mapsto t_{1_{2k+1}}$ for all $k\in \mathbb{Z}_{\ge
0}$. Similarly, $R\cong J_2$ as based rings.
\subsection{Example 2: ${U_N^+}$}
\label{sec:example2}
In this subsection we consider the free fusion ring $R$ arising from the fusion
set $A=\{a,b\}$ with $\bar a=b$ and $a\circ a=a\circ b=b \circ a=b \circ
b=\emptyset$. The partition quantum group associated to $R$ is the \emph{free
unitary quantum group} $U_N^+$. In the language of \cite{Freslon-1}, this
quantum group corresponds to the category of \emph{$\mathcal{A}$-colored}
noncrossing partitions where $\mathcal{A}$ is a \emph{color set} containing two
colors \emph{inverse} to each other.
Consider the Coxeter system $(W,S)$ with the following Coxeter diagram.
\begin{figure}[h!]
\begin{centering}
\begin{tikzpicture}
\node (4) {};
\node[main node] (5) [above right = 0.5cm and 1.5cm of 4] {};
\node (55) [above = 0cm of 5] {0};
\node[main node] (6) [below left = 1.6cm and 1cm of 5] {};
\node (66) [below = 0cm of 6] {1};
\node[main node] (7) [below right = 1.6cm and 1cm of 5] {};
\node (77) [below = 0cm of 7] {2};
\path[draw]
(5) edge node [left] {} (6)
(5) edge node [right] {} (7)
(6) edge node [below = 0.1cm] {$\infty$} (7);
\end{tikzpicture}
\end{centering}
\end{figure}
\begin{namedtheorem}[D]
\label{unitary}
We have an isomorphism $R\cong J_0$ of based rings.
\end{namedtheorem}
Our strategy to prove Theorem \ref{unitary} is to describe a bijection between the free
monoid $\ip{A}$ and the set
$\Gamma_0\cap \Gamma_0^{-1}$, use it to define a $\mathbb{Z}$-module isomorphism
from $R$ to $J_0$, then show that it is an isomorphism of based rings. To
establish the bijection, recall from the discussion after
Definition \ref{groupoid mult} that any element $x\in
\Gamma_0\cap\Gamma_0^{-1}$ corresponds to a unique walk $P_x$ on the graph
$G$. We may encode $x$ by a word in $\ip{A}$ in the following way:
imagine traversing the walk $P_x$, write down an ``$a$'' every time an edge in the walk goes from $1$
to $2$, a ``$b$'' every time an edge goes from $2$ to $1$, and write
nothing down otherwise. Call the resulting word $w_x$. For
example, the element
$x=012120120$ corresponds to the word $w_x=abaa$. Note that $w_x$ records all
parts of $P_x$ that travel along the edge $\{1,2\}$, but ``ignores'' the parts
that involve the edges containing $0$.
We claim that the map $\varphi: \Gamma_0\cap\Gamma_0^{-1} \rightarrow \ip{A},
x\mapsto w_x$ gives our desired bijection. To see that $\varphi$ is injective,
note that by Proposition \ref{subregular criterion}, the elements of $\Gamma_0\cap\Gamma_0^{-1}$
correspond to walks on $G$ that start and end with $0$ but
contain no spurs involving 0. The latter condition means that the parts of the walk $P_x$ that are
``ignored'' in $w_x$, i.e., the parts involving the edges $\{0,1\}$ or
$\{0,2\}$, can be recovered from $w_x$. More precisely, given any word $w=w_x$
for some $x\in \Gamma_0\cap\Gamma_0^{-1}$, we may read the letters of
$w$ from left to right and write down $P_x$ using the following principles:
\begin{enumerate}[leftmargin=3em]
\item The empty word $w=\emptyset$ corresponds to the element $0\in
\Gamma_0\cap\Gamma_0^{-1}$, for $P_x$ involves the edge $\{1,2\}$
for any other element of the
intersection.
\item
The only way $w$ can start with $a$ and not $b$ is for $P_x$ to start with 0, immediately travel to 1, then
travel from 1 to 2, so $P_x$ must start with $(0,1,2)$ if $w$ starts with
$a$.
Similarly, $P_x$ starts with $(0,2,1)$ if $w$ starts with $b$.
\item If the last letter we have read from $w$ is an ``$a$'', the last vertex we
have recovered in the sequence for $P_x$ must be 2.
\begin{enumerate}
\item
If this ``$a$'' is the last letter of
$w$, $P_x$ must involve no more traversals of the edge $\{1,2\}$ and
hence immediately return from 2 to 0, so adding one more $0$ to the
current sequence returns $P_x$.
\item If the ``$a$'' is followed by another ``$a$'', the next traversal of
$\{1,2\}$ in $P_x$ after the sequence already written down must be from 1
to 2 again. This forces $P_x$ to travel to 0 next, and to avoid a spur it
must go on to 1, then to 2, so we add
(0,1,2) to the sequence for $P_x$. If the ``$a$'' is followed by a
``$b$'', $P_x$ must next immediately travel to $1$ and we add 1 to the
sequence, for otherwise $P_x$ would have to travel along the cycle $2\rightarrow
0\rightarrow 1\rightarrow 2$ as we just described and the ``a'' would be followed by
another ``$a$''.
\end{enumerate}
\item If the last letter we have read from $w$ is a ``$b$'', the last vertex we
have recovered in the sequence for $P_x$ must be 1. The method to recover
more of $P_x$ from the rest of $w$ is similar to the one described in
(3).
\end{enumerate}
To illustrate the recovery of $x$ from $w_x$, suppose we know $abaa=w_x$ for some
$x\in \Gamma_0\cap\Gamma_0^{-1}$; we would get $P_x=(0,1,2,1,2,0,1,2,0)$ by
successively writing down $(0,1,2), (1), (2), (0,1,2)$ and $(0)$, so
$x=012120120$. Indeed, note that we may run the process for any word $w$ in
$\ip A$ to get an element in $\Gamma_0\cap\Gamma_0^{-1}$. This gives us
a map $\phi: \ip A \rightarrow \Gamma_0\cap\Gamma_0^{-1}$ that is a mutual inverse
to $\varphi$, so both $\varphi$ and $\phi$ are bijective.
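The recovery procedure is entirely mechanical, so it can be expressed as a short Python sketch (our own illustration): `decode` implements the decoding principles (1)-(4) above, and `encode` implements the map $x\mapsto w_x$.

```python
def decode(w):
    """Recover the vertex sequence of the walk P_x from the word w over {'a','b'}:
    'a' records a step 1 -> 2 and 'b' a step 2 -> 1 (principles (1)-(4))."""
    if not w:
        return "0"                                  # the empty word encodes the element 0
    walk = [0, 1, 2] if w[0] == "a" else [0, 2, 1]  # principle (2)
    for cur, nxt in zip(w, w[1:]):
        if cur == "a":                              # currently at vertex 2
            walk += [0, 1, 2] if nxt == "a" else [1]
        else:                                       # currently at vertex 1
            walk += [0, 2, 1] if nxt == "b" else [2]
    walk.append(0)                                  # finally return to 0
    return "".join(map(str, walk))

def encode(x):
    """Read off w_x: record 'a' for each step 1 -> 2 and 'b' for each 2 -> 1."""
    return "".join("a" if (p, q) == ("1", "2") else "b"
                   for p, q in zip(x, x[1:]) if {p, q} == {"1", "2"})

print(decode("abaa"))   # -> "012120120", as in the worked example
```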
We can now prove Theorem \hyperref[unitary]{D}. We present an inductive proof
that can be easily adapted to prove Theorem \hyperref[amalgamate]{E} later.
\begin{proof}[Proof of Theorem D]
Let $\Phi: R\rightarrow J_0$ be the $\mathbb{Z}$-module homomorphism defined by
\[
\Phi(w) = t_{\phi(w)}.
\]
Since $\phi$ is a bijection, this is an isomorphism of $\mathbb{Z}$-modules. We will
show that $\Phi$ is an algebra isomorphism by showing that
\begin{equation}
\label{eq:check Phi}
\Phi(v)\Phi(w)=\Phi(v\cdot w)
\end{equation}
for all $v,w\in \ip{A}$. Note that this is true if $v$ or $w$ is empty, since then
$\Phi(v)=t_0$ or $\Phi(w)=t_0$, and $t_0$ is the identity of $J_0$ by Corollary \ref{head}.
Now, assume neither $v$ nor $w$ is empty. We
prove Equation \eqref{eq:check Phi} by induction on the \emph{length}
$l(v)$ of $v$, i.e., on the number of letters in $v$.
For the base case, suppose $l(v)=1$ so that $v=a$ or $v=b$. If
$v=a$, then $\phi(a)=0120$. There are two cases:
\begin{enumerate}[leftmargin=2em]
\item Case 1: $w$ starts with $a$. \\
Then $\phi(w)$ has
the form $\phi(w)=012\cdots$, so
\[
\Phi(v)\Phi(w)=t_{0120}t_{012\cdots}=t_{0120 *
012\cdots}=t_{012012\cdots}=t_{\phi(aw)}
\]
by Proposition \ref{d2}. Meanwhile,
since $\bar a\neq a$ and $a\circ a=\emptyset$ in $A$,
\[
v\cdot w=aw
\]
in $R$,
therefore
$
\Phi(v\cdot w)=t_{\phi(aw)}
$
as well. Equation \eqref{eq:check Phi} follows.
\item Case 2: $w$ starts with $b$. \\
In this case, suppose the longest alternating subword $bab\cdots$ appearing in the
beginning of $w$ has length $k$, and write $w=bw'$.
Then $\phi(w)$ takes the form $\phi(w)=0212\cdots$, its first
dihedral segment is $02$, the second is
$(2,1)_{k+1}$, so that $\phi(w)=02*(2,1)_{k+1}*x$ where
$x$ is the glued product of all the remaining dihedral segments. Direct
computation using Theorem \hyperref[dihedral factorization]{F} and propositions
\ref{d2} and \ref{d3} then yields
\begin{eqnarray*}
\Phi(v)\Phi(w)
&=& t_{01}[t_{(1,2)_{k+2}}+t_{(1,2)_k}]t_{x}\\
&=& t_{01*(1,2)_{k+2}*x}+t_{01*(1,2)_{k}*x}\\
&=& t_{\phi(aw)}+t_{\phi(w')}.
\end{eqnarray*}
Meanwhile, since $\bar a=b$ and $a\circ b=\emptyset$ in $A$,
\[
v\cdot w=a\cdot bab\cdots = abab\cdots + ab\cdots=aw+w'
\]
in $R$, therefore $\Phi(v\cdot w)=t_{\phi(aw)}+t_{\phi(w')}$ as well.
Equation \eqref{eq:check Phi} follows.
\end{enumerate}
The proof for the case $l(v)=1$ and $v=b$ is similar.
For the inductive step of our proof, assume Equation \eqref{eq:check Phi}
holds whenever $v$ is nonempty and $l(v)< L$ for some $L\in \mathbb{N}$,
and suppose $l(v)=L$. Let $\alpha\in A$ be the first
letter of $v$, and write $v=\alpha v'$. Then $l(v')<L$, and by \eqref{eq:ffr},
\[
\alpha\cdot v'=v+\sum_{u\in U} u
\]
where $U$ is a subset of $\ip{A}$ in which all words have length smaller than
$L$. Using the inductive hypothesis on $\alpha, v',u$ and the
$\mathbb{Z}$-linearity of $\Phi$, we have
\begin{eqnarray*}
\Phi(v)\Phi(w)&=& \Phi\left(\alpha \cdot v'- \sum_{u\in U} u\right)\Phi(w)\\
&=& \Phi(\alpha)\Phi(v')\Phi(w)-\sum_{u\in U}\Phi(u)\Phi(w)\\
&=& \Phi(\alpha)\Phi(v'\cdot w)-\Phi\left(\sum_{u\in U}u\cdot w\right).
\end{eqnarray*}
Here, the element $v'\cdot w$ may be a linear combination of multiple words in
$R$, but applying the inductive hypothesis on $\alpha$ still yields
\[
\Phi(\alpha)\Phi(v'\cdot w)=\Phi(\alpha\cdot(v'\cdot w))
\]
by the $\mathbb{Z}$-linearity of $\Phi$ and $\cdot$. Consequently,
\begin{eqnarray*}
\Phi(v)\Phi(w) &=& \Phi(\alpha\cdot (v'\cdot w))-\Phi\left(\sum_{u\in U} u\cdot w\right)\\
&=& \Phi\left( (\alpha\cdot v')\cdot w- \sum_{u\in U}u\cdot w\right)\\
&=& \Phi\left( \left[(\alpha\cdot v')- \sum_{u\in U}u\right]\cdot
w\right)\\
&=& \Phi(v\cdot w)
\end{eqnarray*}
by the associativity of $\cdot$ and the $\mathbb{Z}$-linearity of $\Phi$ and $\cdot$.
This completes the proof that $\Phi$ is an algebra isomorphism.
The fact that
$\Phi$ is in addition an isomorphism of based rings is straightforward to check.
In particular, observe that $\phi(\bar w)=\phi(w)^{-1}$ so that $\Phi(\bar
w)=t_{\phi(\bar w)}=t_{\phi(w)^{-1}}=(\Phi(w))^*$, therefore $\Phi$ is
compatible with the respective
involutions in $R$ and $J_0$. We omit the details of the other necessary
verifications.
\end{proof}
\subsection{Example 3: ${Z_N^+(\{e\},n-1)}$}
\label{sec:example3}
In this subsection, we consider an infinite family of fusion rings $\{R_n: n\in
\mathbb{Z}_{\ge 2}\}$, where each $R_n$ arises from the
fusion set
\[
A_n=\{e_{ij}: i,j\in [n]\}
\]
with $\bar e_{ij}=e_{ji}$ for all $i,j\in [n]$ and
\[
e_{ij}\circ e_{kl}=
\begin{cases}
e_{il} &\quad\text{if}\quad j=k\\
\emptyset &\quad\text{if}\quad j\neq k
\end{cases}
\]
for all $i,j,k,l\in [n]$. We may think of the fusion set as the usual matrix
units for $n\times n$ matrices and think of the fusion map as an analog of
matrix multiplication, with the fusion product being $\emptyset$ whenever the matrix
product is 0. In the notation of \cite{Freslon-1}, the partition quantum group
corresponding to $R_n$ is denoted by $Z_N^+(\{e\},n-1)$, which equals the
\emph{amalgamated free product} of $(n-1)$ copies of $\tilde H_N^+$ amalgamated
along $S_N^+$, where $S_N^+$ stands for the \emph{free symmetric group},
$H_N^+$ stands for the \emph{free
hyperoctahedral group}, and $\tilde H_N^+$ stands for the \emph{free
complexification} of $H_N^+$. In
particular, $R_2=\tilde H_N^+$.
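As a quick computational illustration (not part of the paper), the fusion set $A_n$, its involution, and its fusion product can be modeled in a few lines of Python, with \texttt{None} playing the role of the empty product $\emptyset$:

```python
# Toy model of the fusion set A_n: a letter e_ij is a pair (i, j),
# the involution swaps the indices, and the fusion product mirrors
# matrix-unit multiplication, with None standing in for the empty set.

def bar(e):
    """Involution: bar(e_ij) = e_ji."""
    i, j = e
    return (j, i)

def fuse(e, f):
    """Fusion product: e_ij o e_kl = e_il if j == k, else empty (None)."""
    (i, j), (k, l) = e, f
    return (i, l) if j == k else None
```

This mirrors the analogy drawn above: \texttt{fuse((i, j), (j, l))} returns \texttt{(i, l)} exactly when the corresponding matrix-unit product $E_{ij}E_{kl}$ is nonzero.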
For $n\in \mathbb{Z}_{\ge 2}$, let $(W_n,S_n)$ be the Coxeter system where
$S_n=\{0,1,2,\cdots,n\}$, $m(0,i)=\infty$ for all $i\in [n]$, $m(i,i+1)=3$ for
all $i\in [n-1]$, and $m(i,j)=2$ otherwise. The Coxeter diagrams $G_n$ of
$(W_n, S_n)$ are shown in Figure \ref{fig:Rn}.
\begin{figure}[h!]
\label{fig:Rn}
\begin{centering}
\begin{tikzpicture}
\node (4) {};
\node[main node] (5) [above right = 0.5cm and 1.5cm of 4] {};
\node (55) [above = 0cm of 5] {0};
\node[main node] (6) [below left = 1cm and 0.6cm of 5] {};
\node (66) [below = 0cm of 6] {1};
\node[main node] (7) [below right = 1cm and 0.6cm of 5] {};
\node (77) [below = 0cm of 7] {2};
\node (8) [below right = 0.5cm and 0.8cm of 5]
{};
\node (9)[main node] [above right = 0.5cm and 2cm of 8]
{};
\node (99) [above = 0cm of 9] {0};
\node[main node] (10) [below left = 1cm and 0.7cm of 9]
{};
\node (1010) [below = 0cm of 10] {1};
\node[main node] (11) [below =1.3cm of 9] {};
\node (1111) [below = 0cm of 11] {2};
\node[main node] (12) [below right = 1cm and 0.7cm of 9]
{};
\node (1212) [below = 0cm of 12] {3};
\node (13) [below right = 0.5cm and 1cm of 9]
{};
\node (14)[main node] [above right = 0.5cm and 2cm of 13]
{};
\node (1414) [above = 0cm of 14] {0};
\node[main node] (15) [below left = 1cm and 1cm of 14]
{};
\node (1515) [below = 0cm of 15] {1};
\node[main node] (16) [below left = 1.35cm and 0.3cm of 14] {};
\node (1616) [below = 0cm of 16] {2};
\node[main node] (17) [below right = 1.35cm and 0.3cm of 14] {};
\node (1717) [below = 0cm of 17] {3};
\node[main node] (18) [below right = 1cm and 1cm of 14]
{};
\node (1818) [below = 0cm of 18] {4};
\node (19) [below right = 0.5cm and 2cm of 14] {$\cdots$};
\path[draw,ultra thick]
(5) edge node [left] {} (6)
(5) edge node [right] {} (7)
(9) edge node [left] {} (10)
(9) edge node {} (11)
(9) edge node [right] {} (12)
(14) edge node {} (15)
(14) edge node {} (16)
(14) edge node {} (17)
(14) edge node {} (18);
\path[draw]
(6) edge node [below] {} (7)
(10) edge node [below] {} (11)
(11) edge node [below] {} (12)
(15) edge node [below] {} (16)
(16) edge node [below] {} (17)
(17) edge node [below] {} (18);
\end{tikzpicture}
\end{centering}
\caption{{The Coxeter diagrams of $(W_n,S_n)$. The thick edges have weight
$\infty$; the remaining edges have
weight 3.}}
\end{figure}
Let $J_0^{(n)}$ denote the subring $J_0$ of the subregular $J$-ring of $(W_n,S_n)$.
\begin{namedtheorem}[E]
\label{amalgamate}
For each $n\in \mathbb{Z}_{\ge 2}$, $R_n\cong J_0^{(n)}$ as
based rings.
\end{namedtheorem}
For each $n\ge 2$, the strategy to prove the isomorphism $R_n\cong J_0^{(n)}$ is
similar to the one used for Theorem \hyperref[unitary]{D}. That is, we will first describe a
bijection $\phi: \ip{A_n}\rightarrow \Gamma_0\cap\Gamma_0^{-1}$, then show that the
$\mathbb{Z}$-module map $\Phi: R_n\rightarrow J_0^{(n)}$ given by $\Phi(w)=t_{\phi(w)}$ is an
isomorphism of based rings.
To describe $\phi$,
note that for $i,j\in [n]$, there is a unique shortest walk $P_{ij}$ from
$i$ to $j$ on the ``bottom part'' of $G_n$, i.e., on the subgraph of $G_n$
induced by the vertex subset $[n]$. Define $\phi(e_{ij})$ to be the element in
$\Gamma_{0}\cap\Gamma_0^{-1}$ corresponding to the walk
on
$G$ that starts from 0, travels to $i$, traverses to $j$ along the path $P_{ij}$, then returns to $0$. For example, when $n= 4$,
$\phi(e_{24})=02340, \phi(e_{43})=0430, \phi(e_{44})=040$.
Next, for any word $w$ in $\ip{A_n}$, define $\phi(w)$ to be the glued product
of the $\phi$-images of its letters. For example,
$\phi(e_{24}e_{43}e_{44}e_{44})=023404304040$.
It is clear that $\phi$ is a
bijection, with inverse $\varphi$ given as follows: for any $x\in
\Gamma_{0}\cap\Gamma_0^{-1}$, write $x$ as the glued
product of subwords that start and end with 0 but do not contain 0 otherwise;
each such subword must be of the form $\phi(e_{ij})$. We define
$\varphi(x)$ to be the concatenation of the corresponding letters. For example,
$\varphi(02304040)=e_{23}e_{44}e_{44}$ since
$02304040=(0230)*(040)*(040)$.
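The map $\phi$ and the glued product admit a direct computational sketch. The following illustrative Python (with reduced words written as digit strings, so it implicitly assumes $n\le 9$) reproduces the examples above:

```python
def glue(u, v):
    """Glued product of two words: concatenate, merging the shared
    boundary letter (last letter of u must equal first letter of v)."""
    assert u[-1] == v[0]
    return u + v[1:]

def phi_letter(i, j):
    """phi(e_ij): the walk 0 -> i -> ... -> j -> 0, where i -> ... -> j
    is the unique shortest path P_ij along the line 1-2-...-n."""
    step = 1 if j >= i else -1
    path = range(i, j + step, step)
    return "0" + "".join(str(k) for k in path) + "0"

def phi(word):
    """phi of a word in <A_n>: glued product of the letters' images.
    A word is given as a list of index pairs (i, j)."""
    images = [phi_letter(i, j) for (i, j) in word]
    out = images[0]
    for w in images[1:]:
        out = glue(out, w)
    return out
```

For instance, \texttt{phi([(2, 4), (4, 3), (4, 4), (4, 4)])} reproduces the string $023404304040$ from the example above.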
Before we prove Theorem \ref{amalgamate}, let us record one useful lemma:
\begin{lemma}
\label{path mult}
Let $x_{ij}=i\cdots j$ be the element in $C$ corresponding to
the walk $P_{ij}$ for all $i,j\in [n]$. Then
$t_{x_{ij}}t_{x_{jk}}=t_{x_{ik}}$ for all $i,j,k\in [n]$.
\end{lemma}
\begin{proof}
This follows by carefully considering the possible relationships between
$i,j,k$ and repeatedly using Proposition \ref{d3} to directly compute
$t_{x_{ij}}t_{x_{jk}}$ in each case. Alternatively, notice that the simple
reflection 0 is not involved in $x_{ij}$ for any $i,j\in [n]$, hence the computation
of $t_{x_{ij}}t_{x_{jk}}$ can be done in the subregular $J$-ring of the Coxeter system with the
``bottom part'' of $G_n$ as its diagram. This system is simply-laced, so the
result follows immediately from
Theorem \ref{simply-laced}.
\end{proof}
\begin{proof}[Proof of Theorem E]
Let $n\ge 2$, and let $\phi$ and $\Phi$ be as above. As in the proof of
Theorem \ref{unitary}, we show that $\Phi$ is an algebra isomorphism by
checking that
\begin{equation}
\Phi(v)\Phi(w)=\Phi(v\cdot w)
\label{eq:check Phi again}
\end{equation}
for all $v,w\in \ip{A_n}$. Once again, we may assume that both $v$ and $w$
are non-empty and again use induction on the length $l(v)$ of $v$. The inductive
step of the proof is identical to the one for Theorem
\hyperref[unitary]{D}.
For the base case where $l(v)=1$, suppose $v=e_{ij}$ for some $i,j\in [n]$.
There are two cases.
\begin{enumerate}[leftmargin=2em]
\item Case 1: $w$ starts with a letter $e_{j'k}$ where $j'\neq j$. \\
Then $\phi(v)$ and $\phi(w)$ take the form $\phi(v)=\cdots
j0,\phi(w)=0j'\cdots$, so
\[
\Phi(v)\Phi(w)=t_{\cdots j0}t_{0j'\cdots}=t_{\cdots j0 *
0j'\cdots}=t_{\phi(e_{ij})*\phi(w)}=t_{\phi(e_{ij}w)}
\]
by Proposition \ref{d2}. Meanwhile,
since $\bar e_{ij}\neq e_{j'k}$ and $e_{ij}\circ e_{j'k}=\emptyset$ in
$A_n$,
\[
v\cdot w=e_{ij}w
\]
in $R$,
therefore
$
\Phi(v\cdot w)=t_{\phi(e_{ij}w)}
$
as well. Equation \eqref{eq:check Phi again} follows.
\item Case 2: $w$ starts with $e_{jk}$ for some $k\in [n]$. \\
Write $w=e_{jk}w'$. We need to carefully consider four subcases, according to how the indices $i,j,k$ affect
the dihedral segments of $\phi(v)$ and $\phi(w)$.
\begin{enumerate}
\item $i=j=k$.
Then $v=e_{jj}$, $\phi(v)=0j0=(0,j)_3$, and $w$ starts with
$e_{jj}\cdots$, hence $\phi(w)$ starts with $0j0\cdots$. Suppose the
first dihedral segment of $\phi(w)$ is $(0,j)_L$, and write
$\phi(w)=(0,j)_L* x$. Then Theorem
\hyperref[dihedral factorization]{F} and Propositions \ref{d2} and
\ref{d3} yield
\begin{eqnarray*}
\Phi(v)\Phi(w)&=& t_{(0,j)_3}t_{(0,j)_L}t_{x}\\
&=&
t_{(0,j)_{L+2}*{x}}+t_{(0,j)_{L}*{x}}+t_{(0,j)_{L-2}*{x}}\\&=&
t_{\phi(e_{jj}w)}+t_{\phi(w)}+t_{\phi(w')},
\end{eqnarray*}
while
\[
v\cdot w=e_{jj}\cdot e_{jj}w'=e_{jj}e_{jj}w' + e_{jj}w'+{w'}=e_{jj}w+w+w'
\]
since $\bar e_{jj}=e_{jj}$ and $e_{jj}\circ e_{jj}=e_{jj}$. It follows that
Equation \eqref{eq:check Phi again} holds.
\item $i=j$, but $j\neq k$. In this case, $v=e_{jj}, \phi(v)=(0,j)_3$ as in
(a), while $\phi(w)=0j*x$ for some reduced word $x$ which starts with $j$ but
not $j0$. We have
\[
\hspace{3em}
\Phi(v)\Phi(w)=t_{0j0}t_{j0}t_{x}=t_{0j0j*x}+t_{0j*x}=t_{\phi(e_{jj}w)}+t_{\phi(w)},
\]
while
\[
v\cdot w=e_{jj}\cdot e_{jk}w'=e_{jj}e_{jk}w'+e_{jk}w'={e_{jj}}w+w
\]
since $\bar e_{jj}\neq e_{jk}$ and $e_{jj}\circ e_{jk}=e_{jk}$. This implies
Equation \eqref{eq:check Phi again}.
\item $i\neq j$, but $j=k$. In this case, $v=e_{ij}$ and $\phi(v)=y*j0$ for
some reduced word $y$ which ends in $j$ but not $0j$, and $\phi(w)$ can be written as
$\phi(w)=(0,j)_{L}*x$ as in (a). We have
\begin{eqnarray*}
\Phi(v)\Phi(w)&=& t_{y}t_{j0}t_{(0,j)_{L}}t_{x}\\
&=&
t_{y*(j,0)_{L+1}*{x}}+t_{y*(j,0)_{L-1}*{x}}\\
&=& t_{\phi(e_{ij}w)}+t_{\phi(e_{ij}w')},
\end{eqnarray*}
while
\[
v\cdot w=e_{ij}\cdot e_{jj}w'=e_{ij}w+e_{ij}w'
\]
since $\bar e_{ij}\neq e_{jj}$ and $e_{ij}\circ e_{jj}=e_{ij}$. This
implies Equation \eqref{eq:check Phi again}.
\item $i\neq j$, and $j\neq k$. In this case, $\phi(v)=0i*x_{ij}*j0$
(recall the definition of $x_{ij}$ from Lemma \ref{path mult}), and
$\phi(w)=0j*x_{jk}*x$ for some $x$ which starts with $k0$. We have
\begin{eqnarray*}
\Phi(v)\Phi(w)&=& t_{0i}t_{x_{ij}}t_{j0}t_{0j}t_{x_{jk}}t_x\\
&=&
t_{0i}t_{x_{ij}}t_{j0j}t_{x_{jk}}t_{x}+t_{0i}t_{x_{ij}}t_{j}t_{x_{jk}}t_{x}\\
&=& t_{0i*x_{ij}*j0j*x_{jk}*x}+t_{0i}t_{x_{ij}}t_{x_{jk}}t_{x}\\
&=& t_{\phi(e_{ij}w)}+t_{0i}t_{x_{ik}}t_{x},
\end{eqnarray*}
where the fact $t_{x_{ij}}t_{x_{jk}}=t_{x_{ik}}$ comes from Lemma \ref{path
mult}. Now, if $i\neq k$,
$t_{0i}t_{x_{ik}}t_{x}=t_{0i*x_{ik}*x}=t_{\phi(e_{ik}w')}$, so
\[
\Phi(v)\Phi(w)=t_{\phi(e_{ij}w)} + t_{\phi(e_{ik}w')}.
\]
If $i=k$, note that $t_{0i}t_{x_{ik}}t_{x}=t_{0k}t_kt_{x}=t_{0k}t_x$. Suppose the first dihedral segment of $x$ is
$(k,0)_{L'}$ for some $L'\ge 2$, and write $x=(k,0)_{L'}* x'$. Then $t_{0k}t_{x}=t_{0k}t_{(k,0)_{L'}}t_{x'}=
t_{(0,k)_{L'+1}*x'}+t_{(0,k)_{L'-1}*x'}=t_{\phi(e_{kk}w')}+t_{\phi(w')}$,
so
\[
\Phi(v)\Phi(w)=t_{\phi(e_{ij}w)}+t_{\phi(e_{ik}w')}+t_{\phi(w')}.
\]
In either case, Equation \eqref{eq:check Phi again} holds again, because
\[
\hspace{4.5em} v\cdot w= e_{ij}\cdot e_{jk}w'
=e_{ij}e_{jk}w'+e_{ik}w'+\delta_{ik}w'
=e_{ij}w+e_{ik}w'+\delta_{ik}w'
\]
since $\bar e_{ij}=e_{ji}$ and $e_{ij}\circ e_{jk}=e_{ik}$.
\end{enumerate}
\end{enumerate}
We have now proved $\Phi$ is an algebra isomorphism. Just as in Theorem
\hyperref[unitary]{D}, the fact that $\Phi$ is in
addition an isomorphism of based rings is again easy to check, and we omit
the details.
\end{proof}
\section{Introduction}
When ultrahigh-energy cosmic rays (UHECRs) hit the Earth, they collide with air nuclei and create a particle cascade
of millions of secondary particles, a so-called air shower. The atmosphere acts thereby as a giant calorimeter of
${\sim} 11$ hadronic interaction lengths. Instrumentation of such a
giant detector volume is challenging in every respect, especially concerning readout, calibration and
monitoring. Well-established solutions are stochastic measurements of the remaining secondary
particles at ground level and direct detection of fluorescence light emitted from air
molecules excited by the particle cascade. Both techniques are successfully applied in the Pierre
Auger Observatory in Argentina, covering $3000\,\rm{km^{2}}$ with $1660$ water-Cherenkov detectors and
$27$ telescopes for detection of fluorescence light \cite{Auger}.\\
In recent years, measurement of radio emission from air showers in the
megahertz (MHz) regime has become a
complementary detection technique \cite{LOPES, Codalema, AERA, TRex, LOFAR, Tim, Frank}. For this,
the Pierre Auger Observatory was extended by $153$ radio stations, the so-called Auger Engineering Radio Array (AERA). These antenna stations at
ground level provide information on the radio signal and are used to reconstruct the electric field generated by an air shower. \\
Two mechanisms contribute to coherent radio emission from air showers, namely the geomagnetic
effect induced by charged particle motion in the Earth's magnetic field \cite{LOPES, Geo-KahnLerche, Geo-Allan, Geo-HuegeFalcke, Geo-Codalema, Geo-Auger} and the time-varying negative charge excess in the shower front.
The charge excess is due to the knock-out of electrons from air molecules and the annihilation of positrons in the shower front \cite{CE-Ask, CE, CE-Codalema, CE-Lofar, CE-AERA}.
The radio emission can be calculated from first principles using classical electrodynamics \cite{CORSIKA, ZHAires, CoREAS, Radio}.
The emission primarily originates from the well-understood electromagnetic part of the air shower.
Thus, the theoretical aspect of radio measurements is on solid grounds \cite{Tim}. \\
As the atmosphere is transparent to radio waves, the radio technique has a high potential for precision measurements in cosmic-ray physics.
Correlation of the strength of the radio signal with the primary cosmic-ray energy has
meanwhile been demonstrated by several observatories \cite{LOPES-Energy, LOFAR-Energy, TREX-Energy, AERA-Energy-PRL, AERA-Energy-PRD}.
Furthermore, the radiation energy, i.e., the energy contained in the radio signal has
been determined \cite{AERA-Energy-PRD}. It was shown that the radio energy resolution is competitive with the results of particle measurements at ground level.
Moreover, using the above-mentioned first-principles calculations, a
novel stand-alone absolute energy calibration of the atmospheric
calorimeter appears feasible \cite{AERA-Energy-PRL}.\\
In all these considerations, the antenna used to detect the electric field and a thorough description of its characteristics are of central importance.
Precise knowledge of the directional antenna characteristics is essential to reconstruct the electric field and therefore enables high quality measurements of the cosmic-ray properties.
For a complete description of the antenna characteristics an absolute antenna calibration needs to be performed. The uncertainties of the absolute calibration directly impact the energy scale
for air shower measurements from radio detectors. Therefore, a central challenge of the absolute antenna calibration
is to reduce the uncertainties of the antenna characteristics to the level of $10\,\rm{\%}$, which is a significant improvement in comparison with the
uncertainties obtained in calibration campaigns at other radio detectors \cite{LOPES-Calib, TRex-Calib, LOFAR-Calib}.\\
In this work, the reconstruction quality of
the electric-field signal from the measured voltage trace, which includes the directional
characteristics of the antenna and dispersion of the signal
owing to the antenna size, is investigated. All of this information is described by the
vector effective length $\vec{H}$, a complex quantity that relates the measured voltage to the incoming electric field.
As an example, one antenna from the subset of $24$ radio stations equipped with
logarithmic periodic dipole antennas (LPDAs) is investigated here.
This antenna is representative of all the LPDAs which are mechanically and
electrically identical at the percent level \cite{KlausPhD}.
While the low-noise amplifier attached to the antenna was included in the signal chain during the calibration,
amplifiers and the subsequent electronics of all radio stations have been characterized individually.
The LPDA antennas have the advantage of low sensitivity to radio waves reflected from the ground, which makes
them largely independent of potentially changing ground conditions.\\
The LPDA antennas have been studied before and a first absolute calibration of one signal polarization was performed in $2012$
giving an overall systematic uncertainty of $12.5\,\rm{\%}$ \cite{AERA-Antennas}.
In comparison to the first absolute calibration of AERA, in this paper a new absolute calibration is presented using
a new setup enabling a much denser sampling of the arrival directions, more field polarization measurements,
and an extended control of systematic effects including the full repetition of calibration series.
To ensure far-field measurements, instead of the previously used balloon,
a drone was employed, carrying a separate signal generator and a calibrated transmitting antenna. \\
This work is structured as follows.
Firstly, a calculation of the absolute value of the vector effective length $|\vec{H}|$ of the LPDA
is presented. Then, the LPDA antenna and the calibration setup are specified.
In the next section the calibration strategy is presented using one example flight
where $|\vec{H}|$ is measured on site at the Pierre Auger Observatory at one of the radio stations.
The main section contains detailed comparisons of all the measurements with the calculated vector
effective length and the determination of the uncertainties in the current understanding of the antenna.
Finally, the influence of the calibration results on applications is discussed
before presenting the conclusions.
\section{Antenna Response Pattern}
This section gives a theoretical overview of the antenna response pattern. The vector effective length (VEL) is introduced as a measure of the direction-dependent antenna sensitivity.
Furthermore, it is explained how the VEL is obtained for an uncalibrated antenna. For more details refer to \cite{AERA-Antennas}.
\subsection{The Vector Effective Length (VEL)}
Electromagnetic fields induce a voltage at the antenna output. The antenna signal depends on the incoming field $\vec{E}(t)$, the contributing frequencies $f$, as well as on the incoming direction
with the azimuthal angle $\Phi$ and the zenith angle $\Theta$ to the antenna. The relation between the Fourier-transformed electric field $\vec{\mathcal{E}}(f)$ and the Fourier-transformed observed voltage $\mathcal{U}$ for given $\Phi, \Theta, f$ is referred to
as the antenna response pattern and is expressed in terms of the VEL $\vec{H}$:
\begin{linenomath}
\begin{equation}
\mathcal{U}(\Phi, \Theta, f) = \vec{H} (\Phi, \Theta, f) \cdot \mathcal{\vec{E}} (f)
\label{eq:AntResponse}
\end{equation}
\end{linenomath}
The VEL $\vec{H}$ is oriented in the plane perpendicular to the arrival direction of the signal and can be expressed as a superposition of a horizontal component $H_{\phi}$ and a component $H_{\theta}$, called the meridional component, oriented
perpendicular to $H_{\phi}$:
\begin{linenomath}
\begin{equation}
\vec{H} = H_{\phi} \vec{e}_{\phi} + H_{\theta} \vec{e}_{\theta}.
\label{eq:VELSpherical}
\end{equation}
\end{linenomath}
The VEL is a complex quantity $H_{k} = |H_{k}| e^{i\alpha_{k}}$ with $k=\phi,~\theta$
and accounts for the frequency-dependent electrical losses within the antenna as well as reflection effects which arise in the case of differences between the antenna and read-out system impedances.
Both effects lead to dispersion of the signal shape.\\
The antenna response pattern is often expressed in terms of the antenna gain based on the directional dependence of the received power. With the quadratic relation between voltage and power, the
antenna gain and the absolute value of the VEL are related by:
\begin{linenomath}
\begin{equation}
|H_{k}(\Phi, \Theta, f)|^{2} = \frac{c^{2} Z_{R}}{f^{2} 4 \pi Z_{0}} G_{k}(\Phi, \Theta, f).
\label{eq:VELGain}
\end{equation}
\end{linenomath}
Here, $f$ is the signal frequency, $c$ is the vacuum speed of light, $Z_{R}=50\,\rm{\Omega}$ is the read-out impedance, $Z_{0} \approx 120 \, \pi \, \Omega$ is the impedance of free space,
the index $k=\phi$ or $\theta$ indicates the polarization, and $\Phi$ and $\Theta$ denote the azimuth and zenith angle of the arrival direction.
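As a numerical sanity check of Eq.~\eqref{eq:VELGain}, the following short Python sketch (illustrative only, not AERA analysis code) converts a linear, i.e., non-dB, antenna gain into the VEL magnitude:

```python
import math

C = 299_792_458.0        # vacuum speed of light in m/s
Z_R = 50.0               # read-out impedance in ohms
Z_0 = 376.730313668      # impedance of free space in ohms (~120*pi)

def vel_from_gain(gain_linear, freq_hz):
    """|H| in metres from the linear antenna gain, via Eq. (3):
    |H|^2 = c^2 Z_R / (f^2 4 pi Z_0) * G."""
    return math.sqrt(C**2 * Z_R / (freq_hz**2 * 4.0 * math.pi * Z_0)
                     * gain_linear)
```

For a fixed gain the VEL scales as $1/f$, so the same antenna gain corresponds to a larger effective length at the lower end of the $30$--$80\,\rm{MHz}$ band.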
\subsection{Calculating the Absolute Value of the VEL from a Transmission Measurement}
The antenna characteristics of an uncalibrated antenna under test (AUT) are determined by measuring the antenna response of the AUT in a transmission setup using a calibrated transmission antenna.
The relation between transmitted and received power is described by the Friis
equation \cite{Friis} considering the free-space path loss in vacuum as well as
the signal frequency:
\begin{linenomath}
\begin{equation}
\frac{P_{r}(\Phi, \Theta, f)}{P_{t}(f)}=G_{t}(f) G_{r}(\Phi, \Theta, f) \left( \frac{c}{f 4 \pi R} \right)^{2},
\label{eq:Friis}
\end{equation}
\end{linenomath}
with the received power $P_{r}$ at the AUT, the transmitted power $P_{t}$ induced on the transmission antenna, the known antenna gain $G_{t}$ of the calibrated transmission antenna, the unknown
antenna gain $G_{r}$ of the AUT, the distance $R$ between both antennas and the signal frequency $f$.\\
By considering Eq.~\eqref{eq:VELGain} and Eq.~\eqref{eq:Friis} the VEL of the AUT in a transmission setup is then determined by
\begin{linenomath}
\begin{equation}
|H_{k}(\Phi, \Theta, f)| = \sqrt{\frac{4 \pi Z_{R}}{Z_{0}}} R \sqrt{\frac{P_{r,k}(\Phi, \Theta, f)}{P_{t}(f) G_{t}(f)}}.
\label{eq:VEL}
\end{equation}
\end{linenomath}
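The consistency of Eq.~\eqref{eq:VEL} with the Friis equation \eqref{eq:Friis} and the gain-to-VEL relation \eqref{eq:VELGain} can be verified numerically. The sketch below (with purely illustrative power values) computes $|H_k|$ both ways:

```python
import math

C = 299_792_458.0        # vacuum speed of light in m/s
Z_R = 50.0               # read-out impedance in ohms
Z_0 = 376.730313668      # impedance of free space in ohms (~120*pi)

def vel_from_transmission(p_r, p_t, g_t, dist_m):
    """|H_k| directly from a transmission measurement, Eq. (5)."""
    return (math.sqrt(4.0 * math.pi * Z_R / Z_0) * dist_m
            * math.sqrt(p_r / (p_t * g_t)))

def vel_from_friis(p_r, p_t, g_t, dist_m, freq_hz):
    """Same quantity via the Friis equation (Eq. 4) to get the AUT gain,
    then the gain-to-VEL relation (Eq. 3)."""
    g_r = (p_r / p_t) / g_t * (4.0 * math.pi * dist_m * freq_hz / C) ** 2
    return math.sqrt(C**2 * Z_R / (freq_hz**2 * 4.0 * math.pi * Z_0) * g_r)
```

The two routes agree identically because the frequency dependence of the free-space path loss cancels against the $1/f$ scaling of the VEL.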
\subsection{Calculating the Absolute Value of the Antenna VEL with a Separate Amplifier from a Transmission Simulation}
In this work, the NEC-2 \cite{NEC2} simulation code is used to simulate the response pattern of the passive part of the AUT.
The passive part of the AUT refers to the antenna without the subsequent low-noise amplifier stage.
These simulations provide information about the received voltage directly at the antenna footpoint (AF),
which is the location where the signals of all dipoles are collected and converted to the subsequent $50\, \Omega$ read-out system.
In the case of an amplifier (AMP) connected to the AF, the voltage at the output of the AMP is the parameter of interest.
The AMP is connected to the AF using a transmission line (TL). The AMP and the TL together constitute the active part of the AUT.
In the simulation, mismatch and reflection effects between the AF, the TL and the AMP,
which arise if the impedances $Z_{j}$ ($j=\rm{AF}, \rm{TL}, \rm{AMP}$) of two connected components differ from each other, have to be considered separately.
Moreover, the attenuation of the TL with a cable length $l_{TL}$ as well as the AMP itself described by the AMP S-parameters have to be taken into account.
The transformation of the received voltage at the AF to the received voltage at the AMP output is described by the transfer function $\rho$:
\begin{linenomath}
\begin{equation}
\rho = \frac{1} {\sqrt{r}} \frac{Z_{TL}}{Z_{TL}+Z_{AF}/r} \left( 1 +
\Gamma_{AMP} \right) \frac{e^{(\gamma + i\frac{2 \pi
f}{c_{n}})l_{TL}}}{e^{2 (\gamma + i\frac{2 \pi f}{c_{n}})l_{TL}}-\Gamma_{AMP}
\Gamma_{AF}} \frac{S21}{1+S11}
\label{eq:SimVEL}
\end{equation}
\end{linenomath}
with $\Gamma_{AMP} = \frac{Z_{AMP}-Z_{TL}}{Z_{AMP}+Z_{TL}}$ and $\Gamma_{AF} =
\frac{Z_{AF}/r-Z_{TL}}{Z_{AF}/r+Z_{TL}}$. Furthermore, $\gamma$ denotes the
attenuation loss along the transmission line, $f$ is the frequency
of the signal, $c_{n}$ denotes the transfer rate inside the TL, and $r$ is the
transfer factor from an impedance transformer at the AF which transforms the
balanced signal of the antenna to an unbalanced signal of a TL. For more details
refer to \cite{AERA-Antennas}.
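A numerical sketch of the transfer function $\rho$ of Eq.~\eqref{eq:SimVEL} is given below; all component values used in the example are illustrative placeholders, not the actual AERA hardware parameters:

```python
import cmath
import math

def transfer_function(f, z_af, z_tl, z_amp, s11, s21, gamma, c_n, l_tl, r):
    """Complex transfer function rho of Eq. (6), mapping the voltage at
    the antenna footpoint (AF) to the voltage at the amplifier (AMP)
    output. All impedances are in ohms; s11 and s21 are linear (not dB)
    AMP S-parameters; gamma is the attenuation constant of the TL."""
    g_amp = (z_amp - z_tl) / (z_amp + z_tl)       # reflection at the AMP input
    g_af = (z_af / r - z_tl) / (z_af / r + z_tl)  # reflection at the AF
    prop = gamma + 1j * 2.0 * math.pi * f / c_n   # propagation constant
    line = (cmath.exp(prop * l_tl)
            / (cmath.exp(2.0 * prop * l_tl) - g_amp * g_af))
    return (1.0 / math.sqrt(r)) * z_tl / (z_tl + z_af / r) \
        * (1.0 + g_amp) * line * s21 / (1.0 + s11)
```

In the fully matched, lossless limit ($Z_{AF}/r=Z_{TL}=Z_{AMP}$, $\gamma=0$, $S11=0$) both reflection coefficients vanish and $|\rho|$ reduces to $|S21|/(2\sqrt{r})$, which provides a convenient cross-check of the implementation.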
\section{Logarithmic Periodic Dipole Antenna (LPDA)}
In this section, the Logarithmic Periodic Dipole Antenna (LPDA) which is used in a subset of the radio stations of AERA is presented.
An LPDA consists of several $\lambda/2$-dipoles of different lengths which are combined into a single antenna, with the longest dipole located at the bottom and the shortest dipole at the top of the LPDA.
The sensitive frequency range is defined by the lengths of the shortest ($l_{min}$) and longest ($l_{max}$) dipoles.
The ratio of the distance between two dipoles and their size is described by $\sigma$ and
the ratio of the dipole length between two neighboring dipoles is denoted by $\tau$. The four design parameters of the LPDAs used at AERA are $\tau=0.875$, $\sigma=0.038$, $l_{min}=1470\,\rm{mm}$ and $l_{max}=4250\,\rm{mm}$.
These values were chosen to cover the frequency range from around $30\,\rm{MHz}$ to $80\,\rm{MHz}$ and to combine a high antenna sensitivity with a broad field of view using a
limited number of dipoles and reasonable dimensions. They lead to an LPDA with nine separate dipoles. For more details refer to \cite{AERA-Antennas}.
A full drawing of the LPDA used at AERA including all sizes is shown in Fig.~\ref{fig:LPDA}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.33]{Plots/Kapitel3/LPDASketch.pdf}
\caption{\it Drawing of the Logarithmic Periodic Dipole Antenna (LPDA), units are millimeter.}
\label{fig:LPDA}
\end{center}
\end{figure}
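Assuming an idealized, purely geometric progression of dipole lengths with ratio $\tau$ (the exact AERA dimensions are those of the drawing above), the nine dipole lengths can be sketched as:

```python
# Idealized sketch: successive dipole lengths shrink by the design
# ratio tau, starting from the longest dipole l_max. The real LPDA
# dimensions differ slightly from this pure geometric progression.
TAU = 0.875        # length ratio between neighboring dipoles
L_MAX = 4250.0     # longest dipole in mm
N_DIPOLES = 9

lengths = [L_MAX * TAU**k for k in range(N_DIPOLES)]
```

With these design values the shortest computed dipole comes out within about $1\,\rm{\%}$ of the quoted $l_{min}=1470\,\rm{mm}$, the residual difference being due to mechanical rounding of the real antenna.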
Each radio station at AERA consists of two perpendicularly polarized antennas which are aligned to magnetic north with a precision better than $1\rm{^{\circ}}$. The dipoles are connected to a waveguide with the
footpoint at the top of the antenna. The footpoint is connected by an RG58
\cite{RG} coaxial transmission line to a low-noise amplifier (LNA) which
amplifies the signal
typically by $(18.1 \pm 0.2) \,\rm{dB}$. The LNA of the radio station used in the calibration setup amplifies the signal by $18.1 \,\rm{dB}$.
The amplification is nearly constant in the frequency range $30\,\rm{MHz}$ to $80\,\rm{MHz}$ and varies at the level of $0.5 \,\rm{dB}$.
For more technical details about the LNA refer to \cite{LNA}.
\section{Calibration Setup}
\label{sec:Calib}
The antenna VEL of the LPDA is determined by transmitting a defined signal from a calibrated signal source from different arrival directions and measuring the LPDA response. The signal source consists
of a signal generator, producing known signals, and a calibrating transmitting antenna with known emission characteristics. The transmission measurement needs to be done in the far-field region, which is
fulfilled to a reasonable approximation at a distance of $R > 2\lambda = 20\,\rm{m}$ for the LPDA frequency range of $30\,\rm{MHz}$ to $80\,\rm{MHz}$. \\
In a first calibration campaign \cite{AERA-Antennas} a large weather balloon was used to lift
the transmitting antenna, with a cable running to the signal source placed on the ground.
As a vector network analyzer was used to provide the source signal and to measure the AUT output, this
transmission measurement allowed the determination of both the VEL magnitude and phase. This setup has the disadvantages that it
requires calm weather conditions and that the cost per flight, including the balloon and gas, is high.
Moreover, the cable potentially impacts the measurements if not properly shielded. In that first calibration campaign only the horizontal VEL was investigated. Therefore, a new calibration campaign with a new setup was necessary.\\
Now, a signal generator as well as a transmission antenna were both mounted beneath a flying drone, a so-called remotely piloted aircraft (RPA), to position the calibration source.
Hence, the cable from ground to the transmitting antenna is not needed anymore. Furthermore, the RPA is much less dependent
on wind, and thus it is easier to perform the measurement compared to the balloon-based calibration. The new calibration is performed with a higher repetition rate and with more positions per measurement.\\
During the measurement, the RPA flies straight up to a height of more than $20\,\rm{m}$ and
then towards the AUT until it is directly above it. Finally, it flies back and lands again at the starting position. A sketch of the setup is shown at the top of Fig.~\ref{fig:Calib}.\\
\begin{figure}
\begin{center}
\includegraphics[scale=0.47]{Plots/Kapitel4/Calib.png}
\line(1,0){250}\\
\includegraphics[scale=0.42]{Plots/Kapitel4/AlphaBeta.png}
\caption{\it \textbf{(top)} LPDA calibration setup. The calibration signal is produced by
a signal generator and radiated by a transmitting antenna. Both the signal
generator and the transmitting antenna are attached underneath a flying
drone, a so-called RPA, to realize far-field conditions during the measurement. On
arrival of the signal at the LPDA, the antenna response is measured using a
spectrum analyzer. The orientation of the RPA is described by the yaw (twist
of front measured from north in the mathematically negative direction), and the
tilt by the
pitch and the roll angles. \textbf{(bottom)} Sketch of the expected (blue arrow) and measured (red arrow) electric field polarization at the LPDA emitted by the transmitting antenna from the nominal (blue)
and measured (red) position. The real transmitting antenna position is shifted from the nominal position, e.g., due to GPS accuracy. This misplacement changes the
electric-field strength and polarization measured at the LPDA and, therefore, influences the measurement.}
\label{fig:Calib}
\end{center}
\end{figure}
The RPA used here was an octocopter obtained from the company MikroKopter \cite{OctoXL}. Such an octocopter also has been used for the fluorescence detector \cite{FDCalib} and CROME \cite{CromeCalib} calibrations.
The horizontal octocopter position is measured by GPS and a barometer provides information about the height above ground.
Both are autonomously recorded nearly each second which enables measurements of the VEL with good resolution in zenith angle $\Theta$.
To further improve the determination of the octocopter position, an optical method using two cameras photographing the flight was developed. The cameras are placed on orthogonal axes at a distance of
around $100\,\rm{m}$ from the AUT. Canon Ixus 132 cameras \cite{Camera} with a resolution of 16 megapixels are utilized. They are set to an autonomous mode in which they take pictures every three seconds.
From these pictures the full flight path of the octocopter can be reconstructed. The method is explained in detail in \cite{OpticalMeth, OpticalMethProc}.
Beside the octocopter position, the rotation angles (yaw, pitch, roll as defined in Fig.~\ref{fig:Calib}) are recorded during the flight and are later used to determine the orientation
of the transmission antenna with respect to the AUT.\\
The position of the LPDA station was measured by a differential GPS (DGPS) (Hiper V system \cite{DGPS}) and is therefore known with centimeter accuracy.\\
The reference spectrum generator, model RSG1000 produced by the company TESEQ \cite{RSG1000}, is used as the signal generator.
It continuously produces a frequency comb spectrum between $5\,\rm{MHz}$ and $1000\,\rm{MHz}$ with a spacing of $5\,\rm{MHz}$.
This signal is further amplified in order to achieve a signal power well above the background for the measurement with the LPDA.
The output signal injected into the transmission antenna has been measured twice in the lab using a FSH4 spectrum analyzer from the company Rohde\&Schwarz \cite{FSH4} and using an Agilent N9030A ESA spectrum analyzer
\cite{Agilent} both with a readout impedance of $50\,\rm{\Omega}$.\\
To comply with the strict $2.5\,\rm{kg}$ octocopter payload limit, a small biconical antenna
from Schwarzbeck (model BBOC 9217 \cite{Schwarzbeck}) is mounted $0.7\,\rm{m}$ beneath the octocopter.
This antenna has been calibrated by the manufacturer in the frequency range from $30\,\rm{MHz}$ to $1000\,\rm{MHz}$
with an accuracy of $0.5\,\rm{dB}$. This response pattern and its uncertainty comprise all mismatch effects when connecting a $50\,\rm{\Omega}$ signal source to such a transmitting antenna.
The power received at the LPDA during the calibration procedure is measured using the same FSH4 spectrum analyzer as above.\\
The different VEL components mentioned in Eq.~\eqref{eq:VELSpherical} are determined by performing multiple flights in which the orientation of the transmitting antenna is varied
with respect to the AUT. Sketches of the antenna orientations during the flights
are shown on the left side of Fig.~\ref{fig:ExampleVEL}. The horizontal component $|H_{\phi}|$ of the LPDA is measured in the LPDA main axis perpendicular to the LPDA orientation. Then, both antennas are aligned in parallel
for the whole flight. The meridional component $|H_{\theta}|$ is split into two subcomponents: the component $|H_{\theta,\mathrm{hor}}|$, oriented horizontally but perpendicular to $\vec{e}_{\phi}$,
and the vertical component $|H_{\theta,\mathrm{vert}}|$. As the orientation of the transmission antennas is the main difference between both measurements,
the phase $\alpha_{k}$ with $k=(\theta,\mathrm{hor}), (\theta,\mathrm{vert})$ is the same. Then, these two subcomponents are combined to the meridional component $|H_{\theta}|$:
\begin{linenomath}
\begin{equation}
|H_{\theta}| = \cos(\Theta)|H_{\theta,\mathrm{hor}}| + \sin(\Theta)|H_{\theta,\mathrm{vert}}|.
\label{eq:HTheta}
\end{equation}
\end{linenomath}
Both meridional subcomponents are measured in the axis perpendicular to the LPDA main axis.
Therefore, the transmitting antenna needs to be rotated by $90\rm{^{\circ}}$, and the flight path needs to start at the position rotated by $90\rm{^{\circ}}$
with respect to the measurement of $|H_{\phi}|$. For the $|H_{\theta,\mathrm{vert}}|$ measurement, the transmitting antenna is aligned vertically.\\
As the received power is measured directly at the output of the LPDA amplifier, all matching effects from connecting a transmission line to the LPDA footpoint and to the LPDA LNA are taken into account.
The VEL is calculated using Eq.~\eqref{eq:VEL}.
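As a numerical cross-check of Eq.~\eqref{eq:HTheta}, the combination of the two subcomponents can be sketched as follows; the subcomponent values used in the example calls are hypothetical placeholders, not measured data.

```python
import math

def meridional_vel(h_theta_hor, h_theta_vert, theta_deg):
    """Combine the two measured subcomponents into |H_theta|
    as in Eq. (HTheta): a projection onto the e_theta direction."""
    theta = math.radians(theta_deg)
    return math.cos(theta) * h_theta_hor + math.sin(theta) * h_theta_vert

# At the zenith only the horizontal subcomponent contributes,
# at the horizon only the vertical one:
h_zenith = meridional_vel(1.2, 0.2, 0.0)    # -> 1.2
h_horizon = meridional_vel(1.2, 0.2, 90.0)  # -> 0.2 (up to rounding)
```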
\section{Calibration Strategy}
To explain the LPDA calibration strategy, a measurement of each of the three VEL components is presented. Several flights on different days with different environmental conditions were performed
and finally combined to give an average LPDA VEL.
Here, one of the measurements of each VEL component is presented to show the reconstruction procedure as well as the statistical precision of the measurements.
Furthermore, the corrections taking into account cable damping, background
measurements, misalignments of the transmitting antenna and shift of the
octocopter position are discussed in detail.
Afterwards, an overview of the measurement uncertainties is given.
\subsection{Example Measurement}
The right diagrams of Fig.~\ref{fig:ExampleVEL} show the measured
VEL components $|H_{\phi}|$, $|H_{\theta,\mathrm{hor}}|$ and $|H_{\theta,\mathrm{vert}}|$ at the output of the LPDA LNA as a function of the zenith angle $\Theta$ at $55\,\rm{MHz}$.
The left drawings show the respective antenna orientations.
The antenna response pattern reveals the following features.
For the VEL component $|H_{\phi}|$, the LPDA is most sensitive in the zenith direction.
The pattern shows a side lobe at around $65\rm{^{\circ}}$. For $|H_{\theta,\mathrm{hor}}|$ the most sensitive direction is also the zenith, while at larger zenith angles the sensitivity is strongly reduced. At the zenith the components
$|H_{\phi}|$ and $|H_{\theta,\mathrm{hor}}|$ are equal, which is expected as the antenna orientations are identical. The fluctuations in $|H_{\theta,\mathrm{hor}}|$ are larger than those in $|H_{\phi}|$ due to the stronger dependence on the octocopter
rotations: when flying towards the antenna, any acceleration causes a rotation around the pitch angle (Fig.~\ref{fig:Calib}), which does not influence $|H_{\phi}|$. However, for both
meridional subcomponents the pitch angle already changes the transmitting antenna orientation (Fig.~\ref{fig:ExampleVEL}) and therefore influences both measurements.
Compared with the other components, $|H_{\theta,\mathrm{vert}}|$ is much smaller:
the LPDA is only marginally sensitive to this signal polarization, especially for vertically incoming signals. All these results are frequency dependent.
\begin{figure}
\begin{center}
\raisebox{0.65cm}{\includegraphics[scale=0.26]{Plots/Kapitel4/Hor_New.png}}\includegraphics[scale=0.31]{Plots/Kapitel5/AERA008-EW-N-horizontal-55.pdf}\\
\raisebox{0.65cm}{\includegraphics[scale=0.26]{Plots/Kapitel4/VertHor_New.png}}\includegraphics[scale=0.31]{Plots/Kapitel5/AERA008-EW-E-vertical-horizontal-55.pdf}\\
\raisebox{0.65cm}{\includegraphics[scale=0.26]{Plots/Kapitel4/VertVert_New.png}}\includegraphics[scale=0.31]{Plots/Kapitel5/AERA008-EW-E-vertical-vertical-55.pdf}\\
\caption{\it \textbf{(left)} NEC-2 realization of the setup to simulate the three VEL components \textbf{(from top to bottom)} $|H_{\phi}|$, $|H_{\theta,\mathrm{hor}}|$ and $|H_{\theta,\mathrm{vert}}|$. The meridional component $|H_{\theta}|$ is a combination
of $|H_{\theta,\mathrm{hor}}|$ and $|H_{\theta,\mathrm{vert}}|$. The distance between transmitting and receiving antenna is reduced and the transmitting antenna
is scaled by a factor of $3$ to make both antennas visible. For clarity, the LPDA and the transmitting antenna (assumed as a simple dipole) orientations
are sketched in the lower right corner of each picture in the XY-plane as well as in the XZ-plane.
\textbf{(right)} Measured VEL as function of the zenith angle (red dots) of three example flights for the three VEL components at $55\,\rm{MHz}$. }
\label{fig:ExampleVEL}
\end{center}
\end{figure}
\subsection{Corrections}
Corrections for the experimental conditions have to be applied to the raw VEL determined according to Eq.~\eqref{eq:VEL}.
The VEL is averaged in zenith angle intervals of $5\rm{^{\circ}}$. This is motivated by the observed variations in the repeated measurements recorded on different days (see, e.g., Fig.~\ref{fig:HorVEL} below).
All corrections to the VEL are expressed relative to the measured raw VEL at a zenith
angle of $(42.5\pm2.5)\rm{^{\circ}}$ and a frequency of $55\,\rm{MHz}$ and are listed in Tab.~\ref{tab:Corrections}. The corrections are partly zenith angle and/or frequency dependent.
\begin{table}
\begin{center}
\begin{tabular}{lrrr}
\hline
\hline
\textbf{corrections} & \textbf{$\Delta |H_{\phi}|$ [\%]} & \textbf{$\Delta |H_{\theta,\mathrm{hor}}|$ [\%]} & \textbf{$\Delta |H_{\theta,\mathrm{vert}}|$ [\%]}\\
\hline
background noise & $-0.1$ & $-0.5$ & $-0.9$ \\
cable attenuations & $+44.4$ & $+44.4$ & $+53.2$ \\
background noise + cable attenuation & $+44.3$ & $+43.7$ & $+51.8$ \\
octocopter influence & $+0.6$ & $+0.6$ & $-0.2$\\
octocopter misalignment and misplacement & $+0.3$ & -- & -- \\
height at take off and landing & $+1.8$ & $+15.8$ & $+5.8$ \\
height barometric formula & $-5.2$ & $-10.2$ & $-2.5$ \\
combined height & $-3.6$ & $-5.4$ & $+1.3$ \\
shift to optical method & $-14.5$ & $-4.8$ & $+0.2$ \\
combined height + shift to optical method & $-14.6$ & $-5.5$ & $-0.3$ \\
\hline
all & $+24.6$ & $+36.4$ & $+51.1$ \\
\hline
\hline
\end{tabular}
\caption{Corrections to the three measured VEL components $|H_{\phi}|$, $|H_{\theta,\mathrm{hor}}|$ and $|H_{\theta,\mathrm{vert}}|$
of the example flights at a zenith angle of $(42.5\pm2.5)\rm{^{\circ}}$ and a frequency of $55\,\rm{MHz}$, with $\Delta |H_{k}|$ = $\frac{|H_{k}|-|H_{k,0}|}{|H_{k,0}|}$
and $k=\phi, (\theta,\mathrm{hor}), (\theta,\mathrm{vert})$.}
\label{tab:Corrections}
\end{center}
\end{table}
The following paragraphs describe the corrections applied to the raw VEL measured at the LPDA LNA output.
\subsubsection{Background Noise}
During the calibration, background noise is also recorded. In a separate measurement, the frequency spectrum of the background has been determined; it is then subtracted from the calibration signal spectrum.
Typically, the background noise is several orders of magnitude below the signal strength, even for the component $|H_{\theta,\mathrm{vert}}|$ with the lowest LPDA sensitivity.
However, for large zenith angles close to $90\rm{^{\circ}}$ and, in the case of the component $|H_{\theta,\mathrm{vert}}|$, also for small zenith angles directly above the radio station, the background noise and
the signal can be of the same order of magnitude. In this case, the calibration signal spectrum constitutes an upper limit of the LPDA sensitivity.
If more than $50\,\rm{\%}$ of the events in a zenith angle bin of $5\rm{^{\circ}}$ are affected, no background is subtracted but half of the
measured total signal is used for calculating the VEL and a $100\%$ systematic uncertainty on the VEL is assigned.
\subsubsection{Cable Attenuation}
To avoid crosstalk in the LPDA read-out system, the read-out system was placed at a distance of about $25\,\rm{m}$ from the LPDA. The RG58 coaxial cable \cite{RG}, used
to connect the LPDA to the read-out system, has a frequency-dependent ohmic resistance that attenuates
the received power by a frequency-dependent factor $\delta$. To obtain the VEL at the LNA output, the cable attenuation is corrected using lab measurements performed with the FSH4.
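A minimal sketch of this correction, with a hypothetical attenuation value: since the VEL scales with the square root of the power (Eq.~\eqref{eq:VEL}), a power correction of $A$ dB raises the VEL by a factor $10^{A/20}$.

```python
def undo_cable_attenuation(p_measured, attenuation_db):
    """Scale the power measured behind the cable back up by the
    frequency-dependent attenuation factor delta (given here in dB)."""
    return p_measured * 10.0 ** (attenuation_db / 10.0)

# Hypothetical: a 3.2 dB cable attenuation roughly corresponds to the
# +44% VEL correction listed in Tab., since 10**(3.2/20) - 1 = 0.445.
p_at_lna_output = undo_cable_attenuation(1.0e-6, 3.2)
```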
\subsubsection{Octocopter Influence}
During the LPDA VEL measurement the transmitting antenna is mounted underneath the octocopter which contains conductive elements and is powered electrically. Therefore, the octocopter itself may change the antenna response
pattern of the transmitting antenna with respect to the zenith angle. To find a compromise between signal reflections at the octocopter and stability during take off, flight and landing, the distance
between transmitting antenna and octocopter has been chosen to be $0.7\,\rm{m}$. The influence has been investigated by simulating the antenna response pattern of the transmitting antenna with and without
mounting underneath an octocopter. It is found that the average gain of the transmission antenna changes by $0.05\,\rm{dB}$ \cite{PHD}. At a zenith angle of $(42.5\pm2.5)\rm{^{\circ}}$ and a frequency of $55\,\rm{MHz}$,
the octocopter influences the transmitting antenna VEL by $0.6\,\rm{\%}$.
\subsubsection{Octocopter Misalignments and Misplacements}
\label{subsec:OctcoMis}
Misalignments and misplacements of the octocopter during the calibration flight have a direct impact on the transmitting antenna position and orientation changing the signal polarization at the position of the AUT.
For this investigation, the transmitting antenna is assumed to behave as a dipole, which holds to a good approximation.
The electric field $\vec{E}_{t}$ emitted from a dipole antenna with orientation $\hat{A}_{t}$ in the direction $\hat{n}$ in the far-field region is
proportional to $\vec{E}_{t} \sim ( \hat{n} \times \hat{A}_{t} ) \times \hat{n}$, and the amplitude is given by $|\vec{E}_{t}| = \sin(\alpha)$. Here, $\alpha$ describes the smallest angle between the transmitting
antenna alignment $\hat{A}_{t}$ and the direction from the transmitting antenna to the AUT denoted as $\hat{n}$ (see lower sketch of Fig.~\ref{fig:Calib}).
The orientation of the transmitting antenna $\hat{A}_{t}$ is
calculated by an intrinsic rotation of the initial orientation of the transmitting antenna rotating first by the yaw angle $G$, then by the pitch angle $P$ and finally, by the roll angle $R$.
The AUT sensitivity $\eta$ to the emitted electric field is then calculated by the absolute value of the scalar product of the electric field and the AUT orientation $\hat{A}_{r}$:
$\eta = |\vec{E}_{t} \cdot \hat{A}_{r}| = \sin(\alpha) \cos(\beta)$ with $\beta$ describing the smallest angle between $\vec{E}_{t}$ and $\hat{A}_{r}$ (see lower sketch of Fig.~\ref{fig:Calib}).
Finally, the correction factor $\epsilon$ of the power measured at the AUT is determined by the square of the quotient of the nominal and the real value of $\eta$.
In the case of the horizontal component $|H_{\phi}|$, the VEL is systematically shifted to larger values for all zenith angles and frequencies due to the octocopter misalignment and misplacement.
The correction factor $\epsilon$ is therefore used to determine the horizontal VEL $|H_{\phi}|$.
In both meridional subcomponents the VEL becomes small at large zenith angles and strongly dependent on the antenna alignments.
Therefore, in the meridional subcomponents $|H_{\theta,\mathrm{hor}}|$ and $|H_{\theta,\mathrm{vert}}|$ the effects of the octocopter misalignment and misplacement are included in the systematic uncertainties.
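The geometry described above can be sketched numerically; the angles used in the example call are hypothetical.

```python
import math

def aut_sensitivity(alpha_deg, beta_deg):
    """eta = sin(alpha) * cos(beta): sensitivity of the AUT to the field
    emitted by the (dipole-like) transmitting antenna."""
    return math.sin(math.radians(alpha_deg)) * math.cos(math.radians(beta_deg))

def power_correction(alpha_nom, beta_nom, alpha_real, beta_real):
    """Correction factor epsilon for the power measured at the AUT:
    square of the ratio of nominal to real sensitivity."""
    return (aut_sensitivity(alpha_nom, beta_nom)
            / aut_sensitivity(alpha_real, beta_real)) ** 2

# Hypothetical 2 degree tilt away from the nominal broadside geometry
# (alpha = 90 deg, beta = 0 deg): the measured power is slightly too low,
# so epsilon > 1 and the measured power is scaled up.
eps = power_correction(90.0, 0.0, 88.0, 2.0)
```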
\subsubsection{Octocopter Flight Height}
\label{par:CopterHeight}
The octocopter flight height is determined by a barometer measuring the change of air pressure $\Delta p$ during the flight. The octocopter software assumes a linear dependence between $\Delta p$ and the octocopter
flight height above ground $h_{raw}$. Two corrections have been applied to the raw flight height. Firstly, it was observed that the flight height differs between take off and landing. Therefore, a linear, time-dependent
correction is applied which constrains the flight height above ground at take off and landing to zero. Secondly, AERA is located at a height of about $1550\,\rm{m}$ above sea level, where the linear relation between
$\Delta p$ and $h_{raw}$ used by the octocopter software is not precise enough. A more realistic calculation, based on an exponential model of the barometric formula \cite{BarometricFormula} as well as the height- and
latitude-dependent gravitational acceleration, is used to determine the more precise octocopter height $h_{octo}$. An inverse quadratic relation between gravitational acceleration and height above sea level, with a sea-level value of
$g(0) = 9.797\,\rm{\frac{m}{s^{2}}}$ at the latitude of AERA, is taken into account. The raw octocopter height as well as the height after all corrections of the $|H_{\phi}|$ example flight are shown on
the left side of Fig.~\ref{fig:CopterHeight} in comparison to the octocopter height determined with the optical method. Both methods agree at the level of $1.1\,\rm{\%}$ in the median.
The quotient of the octocopter height measured by the camera method and by the full corrected barometer method is shown in the histogram on the right side of Fig.~\ref{fig:CopterHeight}.
The optical method is used to correct for the small difference.
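The size of the effect can be illustrated by comparing the linear relation assumed by the octocopter software with an exponential barometric model; the isothermal scale height of $8400\,\rm{m}$ used here is a hypothetical placeholder, not a value from the text.

```python
import math

SCALE_HEIGHT = 8400.0  # m; hypothetical isothermal scale height

def height_exponential(p, p0):
    """Isothermal barometric formula: h = -H * ln(p / p0)."""
    return -SCALE_HEIGHT * math.log(p / p0)

def height_linear(p, p0):
    """Linear pressure-height relation as assumed by the octocopter software."""
    return SCALE_HEIGHT * (p0 - p) / p0

# For a 1% pressure drop the two models already differ by about 0.5%
# of the reconstructed height:
p0 = 101325.0
p = 0.99 * p0
relative_difference = height_exponential(p, p0) / height_linear(p, p0) - 1.0
```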
\begin{figure}
\begin{center}
{\includegraphics[scale=0.3]{Plots/Kapitel5/AERA008-EW-N-horizontal-Height.pdf}}\includegraphics[scale=0.3]{Plots/Kapitel5/AERA008-EW-N-horizontal-zShift.pdf}
\caption{\it \textbf{(left)} Corrections for the measured octocopter height with the raw
data denoted by the green rectangles. The black diamonds
refer to the height after linear correction for the start and end
positions. The blue circular symbols show the corrections for the linear
barometric formula used in the octocopter electronics. The octocopter height determined by the optical method is denoted by the red dots. All measurements
are shown as a function of the flight time. \textbf{(right)} Histogram of the quotient of the full corrected barometer height and measured height from the optical method.}
\label{fig:CopterHeight}
\end{center}
\end{figure}
\subsubsection{Octocopter Position Shift from Optical Method Position
Reconstruction}
While the octocopter position measured by the built-in sensors (air pressure, GPS) is recorded nearly every second, the cameras used in the optical method take pictures of the flight only every $3\,\rm{s}$. Furthermore, it turned out that the
fluctuations of the built-in sensors are smaller than those of the optical method.
Nevertheless, the systematic uncertainties of the octocopter position reconstruction using the optical method are still much smaller. The uncertainties are described in detail in the following
subsection. To combine the advantages of high statistics and small uncertainties, the octocopter position measured by the built-in sensors is taken and then shifted towards the position measured with the optical
method: the octocopter position in the XY-plane is shifted by the median distance, and the octocopter height measured by the barometer is shifted by the median factor between both methods.
For the $|H_{\phi}|$ example flight the octocopter XY-position measured by GPS is shifted by $0.83\,\rm{m}$ to the west and $3.22\,\rm{m}$ to the south.
The full corrected flight height measured by the barometer is shifted by $1.1\,\rm{\%}$.
\subsection{Uncertainties}
In this subsection, the statistical and systematic uncertainties are discussed using the $|H_{\phi}|$ example flight at the middle frequency of $f=55\,\rm{MHz}$ and the zenith angle bin of $\Theta=(42.5\pm2.5)\,\rm^{\circ}$
mentioned above.
This zenith angle is chosen because most events at AERA are reconstructed from this direction. While some systematic uncertainties are stable between flights,
e.g., measurement of the power injected to the transmitting antenna or the transmitting antenna response pattern, others are flight dependent, e.g., the octocopter position and the measurement of the
receiving power at the AUT.
The VEL relative uncertainties are listed in Tab.~\ref{tab:Uncertainties}. These individual uncertainties are described in detail in the following subsections.
The constant systematic uncertainties add quadratically to $6.3\,\rm{\%}$ and the flight dependent systematic
uncertainty is $6.9\,\rm{\%}$.
\newcommand\tw{0.5cm}
\newcommand\tww{0.2cm}
\begin{table}[tbp]
\begin{center}
\begin{tabularx}{\textwidth}{Xrrr}
\hline \hline
\rule{0pt}{3ex}\textbf{source of uncertainty / \%} & systematic & statistical \\
\hline
\rule{0pt}{4ex}\textbf{flight dependent uncertainties} & \textbf{6.9} & \textbf{2.7}\\
\hspace*{\tww} transmitting antenna XY-position & $1.5$ & $1.0$ \\
\hspace*{\tww} transmitting antenna height & $0.1$ & $0.6$ \\
\hspace*{\tww} transmitting antenna tilt & $<0.1$ & $<0.1$ \\
\hspace*{\tww} size of antenna under test & $1.4$ & - \\
\hspace*{\tww} uniformity of ground & $<0.1$ & - \\
\hspace*{\tww} RSG1000 output power & $2.9$ & $2.3$ \\
\hspace*{\tww} influence of octocopter & $<0.1$ & - \\
\hspace*{\tww} electric-field twist & $0.4$ & $0.2$ \\
\hspace*{\tww} LNA temperature drift & $1.0$ & $0.6$ \\
\hspace*{\tww} receiving power & $5.8$ & - \\
\hspace*{\tww} background & $0.4$ & - \\
\rule{0pt}{4ex}\textbf{global uncertainties} & \textbf{6.3} & \textbf{$<$0.1} \\
\hspace*{\tww} injected power & $2.5$ & $<0.1$ \\
\hspace*{\tww} transmitting antenna gain & $5.8$ & - \\
\hspace*{\tww} cable attenuation & $0.5$ & $<0.1$ \\
\hline
\rule{0pt}{3ex}\textbf{all / \%} & \textbf{9.3} & \textbf{4.7} \\
\hline \hline
\end{tabularx}
\caption{Uncertainties of the horizontal VEL $|H_{\phi}|$ of the example flight at $55\,\rm{MHz}$ and $(42.5\pm2.5)\,\rm^{\circ}$.
While the overall systematic uncertainty is the quadratic sum of each single systematic uncertainty,
the overall statistical uncertainty is described by the observed signal fluctuation during the measurement.
The statistical uncertainty of each source of uncertainty describes the expected uncertainty,
e.g., from the manufacturer's information.}
\label{tab:Uncertainties}
\end{center}
\end{table}
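The quadratic combination of the tabulated systematic uncertainties can be reproduced numerically; in this sketch, entries below $0.1\,\rm{\%}$ are neglected, and the grand total agrees with the quoted $9.3\,\rm{\%}$ within rounding.

```python
import math

def quad_sum(values):
    """Combine independent uncertainties in quadrature."""
    return math.sqrt(sum(v * v for v in values))

# Systematic uncertainties in percent from Tab. (entries < 0.1% dropped):
flight_dependent = [1.5, 0.1, 1.4, 2.9, 0.4, 1.0, 5.8, 0.4]
global_unc = [2.5, 5.8, 0.5]

sys_flight = quad_sum(flight_dependent)          # -> 6.9 (rounded)
sys_global = quad_sum(global_unc)                # -> 6.3 (rounded)
sys_total = quad_sum([sys_flight, sys_global])   # close to the quoted 9.3
```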
\subsubsection{Transmitting Antenna Position}
The systematic uncertainty of the position reconstruction of the optical method was determined by comparing the reconstructed octocopter position with the
position measured by a DGPS, which provides an accurate position determination.
The combined mass of the transmitting antenna and the additional DGPS exceeds the maximum payload capacity of the octocopter. Therefore, a separate flight with DGPS but without transmitting antenna
and source generator was performed. The octocopter positions measured with the optical method and the DGPS are compared in Fig.~\ref{fig:Camera-DGPS}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.29]{Plots/Kapitel5/GPS-Pos-N2.pdf}
\includegraphics[scale=0.29]{Plots/Kapitel5/N-KamerasystematicErrorY.pdf}
\includegraphics[scale=0.29]{Plots/Kapitel5/N-KamerasystematicErrorX.pdf}
\includegraphics[scale=0.29]{Plots/Kapitel5/N-KamerasystematicErrorZ.pdf}
\caption{\it Comparison of the octocopter position measured with the optical method and with an additional DGPS mounted at the octocopter during one flight. \textbf{(upper left)} Raw position data measured with
DGPS (lines) and the optical method (dots) as function of the flight time. The distance between the reconstructed octocopter position measured by optical method and DGPS
in X and Y direction are shown in the \textbf{(upper right)} and \textbf{(lower left)} figure. The difference of the octocopter height measured by the barometer and DGPS is shown in the \textbf{(lower right)} figure.
The systematic uncertainty in the XY-plane of the octocopter position is calculated by the quadratic sum of both median values (red dashed lines) in X and Y direction.
Similarly, the median of the height difference of both measurement setups is taken as systematic uncertainty of the octocopter height.}
\label{fig:Camera-DGPS}
\end{center}
\end{figure}
The systematic uncertainty of the octocopter position in the XY-plane is calculated using the quadratic sum of both median values (red dashed lines) in the X and Y direction which is smaller than
$1\,\rm{m}$. Equally, the systematic uncertainty of the octocopter height is $\sigma_{h}=0.06\,\rm{m}$. The influence on the VEL is determined by shifting the reconstructed octocopter position by these
uncertainties and redoing the VEL calculation given in Eq.~\eqref{eq:VEL} of each zenith angle bin separately for the XY-plane and the height. The VEL systematic uncertainty is given by half the difference of the upper
and lower shift of the VEL. The systematic uncertainty on the VEL at a zenith angle of $\Theta = 42.5\rm{^{\circ}} (2.5\rm{^{\circ}}, 72.5\rm{^{\circ}}) \pm 2.5\rm{^{\circ}}$
due to the octocopter's XY-position is $1.5\,\rm{\%}$ ($0.2\,\rm{\%}$, $2.9\,\rm{\%}$) and due to the octocopter's height is $0.1\,\rm{\%}$ ($0.2\,\rm{\%}$, $<0.1\,\rm{\%}$). \\
The statistical uncertainty of the octocopter's built-in sensors is determined in the following way.
The flight height measured by the barometer has to be corrected as described in section \ref{par:CopterHeight},
which causes further uncertainties during the flight. The statistical uncertainty of the octocopter height measured with the barometer is then determined by comparing the measured height
with the height measured by the DGPS (lower right panel of Fig.~\ref{fig:CopterPosition}). The statistical uncertainty is found to be $\sigma = 0.33\,\rm{m}$, which results in a $0.6\,\rm{\%}$ uncertainty in the VEL.
The horizontal position uncertainties are determined in a measurement where the octocopter remains stationary on the ground. The measurement is presented in Fig.~\ref{fig:CopterPosition}.
The diagrams show a statistical uncertainty of $\sigma = \sqrt{0.48^{2}+0.39^{2}}\,\rm{m} = 0.6\,\rm{m}$ in the XY-plane which results in a $1.0\,\rm{\%}$ uncertainty in the VEL.
All these uncertainties are smaller than those
of the optical method described by the widths of the distributions shown in Fig.~\ref{fig:Camera-DGPS} where the octocopter positions measured with DGPS and the camera method are compared.\\
\begin{figure}
\begin{center}
{\includegraphics[scale=0.29]{Plots/Kapitel5/GPS-Pos.pdf}}
{\includegraphics[scale=0.29]{Plots/Kapitel5/GPS-y.pdf}}
{\includegraphics[scale=0.29]{Plots/Kapitel5/GPS-x.pdf}}
{\includegraphics[scale=0.29]{Plots/Kapitel5/N-GPSsystematicErrorZ.pdf}}
\caption{\it The statistical uncertainties of the octocopter position reconstruction using the built-in sensors. The uncertainty of the horizontal position is determined in a measurement while the
octocopter is on ground and does not move. \textbf{(upper left)} Measured octocopter GPS-position with respect to the average position at $(0,0)$. Color coded is the time.
\textbf{(upper right)} Histogram of the distance between measured and average position in Y direction.
\textbf{(lower left)} Histogram of the distance between measured and average position in X direction.
\textbf{(lower right)} The statistical uncertainty of the octocopter height
measured with the barometer is determined by comparing the measured flight height with the height measured using a DGPS. Then, uncertainties arising from the height corrections are taken into account.
The histogram of the octocopter height difference over ground measured with the barometer compared to the DGPS measurement is shown.}
\label{fig:CopterPosition}
\end{center}
\end{figure}
The transmission antenna is mounted at a distance of $s_{Ant}=0.7\,\rm{m}$ beneath the octocopter. Hence, a tilt of the octocopter, described by the pitch and the roll angle,
changes the XY-position of the transmitting antenna as well as its height above ground. In the case of the example flight, the average pitch (roll) angle of the octocopter is $-0.6\rm{^{\circ}}$ ($0.9\rm{^{\circ}}$),
which leads to a systematic uncertainty smaller than $0.1\,\rm{\%}$ at $55\,\rm{MHz}$ and $(42.5\pm2.5)\,\rm^{\circ}$.
\subsubsection{Size of AUT}
The size of the LPDA in the z-direction is $1.7\,\rm{m}$. The interaction point of the signal at each frequency is set to the center of the LPDA. Therefore, there is a systematic uncertainty
in the height interval between transmitting antenna and AUT, which is conservatively estimated to be $0.85\,\rm{m}$. For the example flight, this results in a VEL
systematic uncertainty of $1.4\,\rm{\%}$ at $55\,\rm{MHz}$ and $(42.5\pm2.5)\,\rm^{\circ}$.
\subsubsection{Uniformity of Ground Height}
The ground height above sea level at the octocopter starting position and at the LPDA is measured by DGPS. The ground is not completely flat but varies at the level of a few $\rm{cm}$
over a distance of $5\,\rm{m}$ which is incorporated as additional uncertainty on the height. The resulting influence on the VEL is less than $0.1\,\rm{\%}$.
\subsubsection{Emitted Signal towards the Antenna Under Test}
The uncertainty of the emitted signal contains effects from the power output of the RSG1000, the power injected
into the transmitting antenna, the transmitting antenna response pattern, the influence of the octocopter on the pattern, as well as the misalignment and misplacement of the transmitting antenna, which changes the
power emitted towards the AUT and leads to a twist of the signal polarization at the AUT.\\
The manufacturer of the RSG1000 states a signal stability of $0.2\,\rm{dB}$ measured at a constant temperature of $20\,\rm{^{\circ}C}$, which results in a statistical uncertainty of $2.3\,\rm{\%}$ in the VEL.
The calibration measurements were performed at temperatures between $15\,\rm{^{\circ}C}$ and $25\,\rm{^{\circ}C}$. For this range, the manufacturer states a systematic uncertainty of $0.25\,\rm{dB}$ due to temperature shifts, which
results in $2.9\,\rm{\%}$ in the VEL.\\
The power injected from the RSG1000 into the transmitting antenna is measured twice in the lab: using the FSH4 spectrum analyzer averaged over $100$ samples, and using an Agilent N9030A ESA spectrum analyzer
averaged over $1000$ samples. The systematic uncertainty of the FSH4 measurement is $0.5\,\rm{dB}$ and that of the Agilent N9030A ESA measurement is $0.24\,\rm{dB}$.
Both measurements are combined, yielding a total systematic uncertainty of $0.22\,\rm{dB}$ on the injected power. As there is a quadratic relation between injected power and the VEL (refer to Eq.~\eqref{eq:VEL})
the systematic uncertainty on the VEL is $2.5\,\rm{\%}$. The statistical uncertainties of these measurements are small due to the number of samples
and can be neglected.\\
The antenna manufacturer specifies a systematic uncertainty of the transmitting antenna pattern of $0.5\,\rm{dB}$ which results in a systematic uncertainty on the VEL of $5.8\,\rm{\%}$.
The influence of the octocopter on the transmission antenna pattern investigated with simulations is small \cite{PHD} and, therefore, a systematic uncertainty due to the octocopter influence on the transmission
antenna pattern can be neglected.\\
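One plausible way to translate the quoted dB uncertainties into relative VEL uncertainties is to average the up and down shifts of an amplitude factor $10^{\pm\sigma/20}$; this convention is an assumption, but it reproduces the percentages quoted in this subsection.

```python
def db_to_vel_uncertainty(sigma_db):
    """Relative VEL uncertainty from a symmetric power uncertainty in dB.
    The VEL scales with the square root of the power (hence the factor 20),
    and the asymmetric up/down amplitude shifts are averaged."""
    up = 10.0 ** (sigma_db / 20.0) - 1.0
    down = 1.0 - 10.0 ** (-sigma_db / 20.0)
    return 0.5 * (up + down)

# Reproduces the quoted values: 0.5 dB -> 5.8%, 0.25 dB -> 2.9%,
# 0.2 dB -> 2.3%, 0.22 dB -> 2.5%.
```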
Misalignment and misplacement of the transmitting antenna lead to a twist of the signal polarization and, furthermore, alter the signal strength at the AUT.
The AUT sensitivity to an electric field is given by $\eta = \sin(\alpha) \cos(\beta)$ with the angles $\alpha$ and $\beta$ as described in section \ref{subsec:OctcoMis}.
Both angles, and therefore $\eta$, depend on the octocopter rotation angles as
well as on the octocopter position.
The angle $\beta$ linearly depends on $\alpha$ and on the AUT orientation which is known with a precision of $1\rm{^{\circ}}$.
The uncertainty of all three octocopter rotation angles is estimated to be $1\rm{^{\circ}}$. In the case of the horizontal VEL
the uncertainty of $\alpha$ is described by the quadratic sum of two octocopter rotation angles and the angle which arises from the octocopter position uncertainties as well as the size of the AUT. For
the example flight, the resulting influence on the VEL is $0.4\,\rm{\%}$ at $55\,\rm{MHz}$ and $(42.5\pm2.5)\,\rm^{\circ}$.
In contrast, both meridional subcomponents are not corrected for the octocopter misalignment and misplacement. Here, the octocopter misalignment and misplacement is completely included in the systematic uncertainty.
Therefore, the systematic uncertainty of the VEL due to an octocopter misalignment and misplacement is larger for both meridional subcomponents than in the case of the horizontal component.
The systematic uncertainty on the VEL is calculated in the same way, but using the nominal values of $\alpha$ and $\beta$ in each $5\rm{^{\circ}}$ zenith angle bin instead. As
$\beta$ depends linearly on $\alpha$, only a further uncertainty on $\alpha$, given by the difference between the measured median values and the nominal values of $\alpha$, is needed; it is added quadratically
and then propagated to the systematic uncertainty on the VEL.
For both meridional subcomponents, the angles $\alpha$ and $\beta$ depend on the zenith angle. Hence, this systematic uncertainty is strongly zenith angle dependent for both meridional subcomponents.\\
The uncertainties of the injected power to the transmitting antenna and the transmitting antenna pattern limit the overall calibration accuracy.
In the calibration campaigns at LOFAR and Tunka-Rex, an RSG1000 was also used as the signal source, but with a different transmitting antenna. The individual RSG1000 signal sources
differ only at the percent level. However, the manufacturer of the transmitting antenna used at LOFAR and Tunka-Rex states a systematic uncertainty of the transmitting antenna pattern of $1.25\,\rm{dB}$ \cite{HillerPHD}. Hence, the
AERA calibration has a significantly smaller systematic uncertainty due to the more precise calibration of the transmitting antenna.\\
\subsubsection{Received Signal at the Antenna Under Test}
Within the uncertainty of the received signal, all uncertainty effects on the received power at the AUT are considered, including the full signal chain from the LPDA to the spectrum analyzer as well as the LNA and cables.
In the following, the drift of the LPDA LNA gain due to
temperature fluctuations, the uncertainty of the received power measured with the FSH4, the influence of background noise, and the uncertainty of the cable attenuation measurements are discussed.\\
The LPDA LNA gain depends on the temperature. The gain temperature drift was measured in the laboratory and determined to be $0.01\,\rm{dB/K}$ using the FSH4 in the vector network analyzer mode \cite{PHD}.
The calibration measurements were performed at temperatures between $15\,\rm{^{\circ}C}$ and $25\,\rm{^{\circ}C}$ which results in a systematic uncertainty of $1\,\rm{\%}$ in the VEL due to temperature drifts of the LNA.
The measurements of the LPDA LNA gain due to temperature fluctuations using the FSH4 show fluctuations of the LNA gain at the level of $0.1\,\rm{dB}$ which results in an expected statistical uncertainty of $0.6\,\rm{\%}$ in the VEL.\\
The event power is measured using the FSH4 spectrum analyzer. The manufacturer states a systematic uncertainty of $0.5\,\rm{dB}$. The systematic uncertainty in the VEL
is then $5.8\,\rm{\%}$. Also the background noise is measured using the FSH4 in spectrum analyzer mode.
The systematic uncertainty of the VEL considering event power (P) and background noise (B) is $\sqrt{\frac{P^{2}+B^{2}}{P^{2}-B^{2}}} \frac{0.5}{2}\,\rm{dB}$. If the background noise is of
the same order of magnitude as the measured event power for more than $50\,\rm{\%}$ of events in a $5\rm{^{\circ}}$ zenith angle bin, the systematic uncertainty for
this zenith angle bin is set to $100\,\rm{\%}$. For the example flight, the systematic due to background noise results in an additional VEL
systematic uncertainty of $0.4\,\rm{\%}$ at $55\,\rm{MHz}$ and $(42.5\pm2.5)\,\rm^{\circ}$. A further background influence on the measured signal at the LPDA
due to the communication between the remote control and the octocopter is not expected, as they communicate at $2.4\,\rm{GHz}$ and
the LPDA is sensitive in the frequency range from $30\,\rm{MHz}$ to $80\,\rm{MHz}$.\\
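The background-noise systematic described above can be transcribed directly into a short sketch; the guard for background-dominated bins is a simplification of the $50\,\rm{\%}$-of-events criterion stated in the text.

```python
import math

def background_systematic_db(p_event, p_background):
    """Systematic uncertainty in dB on the VEL when the background noise B
    is subtracted from the measured event power P, following the expression
    sqrt((P^2 + B^2) / (P^2 - B^2)) * 0.5 dB / 2. Returns None for
    background-dominated bins, where a 100% VEL uncertainty is assigned
    instead."""
    p2, b2 = p_event ** 2, p_background ** 2
    if b2 >= p2:
        return None  # background-dominated: 100% VEL uncertainty
    return math.sqrt((p2 + b2) / (p2 - b2)) * 0.5 / 2.0

# Without background the expression reduces to 0.25 dB; the contribution
# grows as B approaches P.
```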
The attenuation of the cable is measured with the FSH4 in network analyzer mode transmitting a signal with a power of $0\,\rm{dBm}$ and averaged over $100$ samples. Therefore, the statistical uncertainty
can be neglected.
The manufacturer states a systematic uncertainty of $0.04\,\rm{dB}$ for transmission measurements with a transmission higher than $-20\,\rm{dB}$ which applies in case of the cables.
This results in a systematic uncertainty of $0.5\,\rm{\%}$ in the VEL.
\subsection{Simulation of the Experimental Setup}
The calibration measurement is simulated using the NEC-2 simulation code. Here, the AUT, the transmission antenna and realistic ground properties are taken into account. At standard ground conditions the
ground conductivity is set to $0.0014\,\rm{S/m}$, as measured at the AERA site. Values of the conductivity of dry sand, the typical ground consistency at AERA,
are reported in \cite{Geo-Auger, Conductivity}. Measurements of the ground permittivity at the AERA site yield values between
$2$ and $10$ depending on the soil wetness \cite{PHD}. The standard ground permittivity in the simulation is set to $5.5$. The distance between both antennas is set to $30.3\,\rm{m}$.
The VEL is calculated using Eq.~\eqref{eq:VEL} modified with Eq.~\eqref{eq:SimVEL}, taking into account the manufacturer information for the response pattern of the transmitting antenna as well as the transfer function from the AUT output through
the system consisting of the transmission line from the LPDA footpoint to the LNA and the LNA itself. To investigate the simulation stability, several simulations with varying antenna separations
and changing ground conditions were performed \cite{PHD}.
Antenna separations ranging from $25\,\rm{m}$ to $50\,\rm{m}$ were simulated and did not change the resulting VEL of the LPDA.
Hence, the simulation confirms that the measurements are performed in the far-field region. Furthermore, the influence of different ground conditions is investigated. Conductivity and permittivity
impact the signal reflections on ground. The LPDA VEL is simulated
using ground conductivities ranging from $0.0005\,\rm{\frac{S}{m}}$ to $0.005\,\rm{\frac{S}{m}}$ and using ground permittivities ranging from $2$ to $10$.
Within the given ranges the conductivity and permittivity independently influence the signal reflection properties of the ground.
In Fig.~\ref{fig:NEC2} the simulations of the horizontal and meridional VEL for these different ground conditions as function of the zenith angle at $55\,\rm{MHz}$ are shown.
\begin{figure}
\begin{center}
{\includegraphics[scale=0.29]{Plots/Kapitel5/Simulation/Conductivity/horizontal/Vergleich55.pdf}}{\includegraphics[scale=0.29]{Plots/Kapitel5/Simulation/Conductivity/vertical/Vergleich55.pdf}}
\vspace{0.1cm}
{\includegraphics[scale=0.29]{Plots/Kapitel5/Simulation/Permittivity/horizontal/Vergleich55.pdf}}{\includegraphics[scale=0.29]{Plots/Kapitel5/Simulation/Permittivity/vertical/Vergleich55.pdf}}
\caption{\it Simulations of the VEL for different ground conditions. A variation in conductivity is shown in the upper diagrams whereas a variation in permittivity is shown in the lower diagrams. In the \textbf{(left)}
diagrams the horizontal VEL $|H_{\phi}|$ and in the \textbf{(right)} diagrams the meridional VEL $|H_{\theta}|$ as function of the zenith angle $\Theta$ at $55\,\rm{MHz}$ are shown.}
\label{fig:NEC2}
\end{center}
\end{figure}
Different ground conductivities do not change the LPDA response pattern. In contrast, the influence of the ground permittivity on the antenna response is slightly larger.
In the case of an applied ground permittivity of $2$ and of $10$, the influence
on the horizontal VEL is at the level of $1\,\rm{\%}$ averaged over all
frequencies and zenith angles with a scatter of less than $6\,\rm{\%}$.
The influence of the ground permittivity on the electric-field reconstruction is discussed in section \ref{subsec:UncerCR}.\\
Simulations of an electronic box beneath the LPDA show influences on the horizontal antenna VEL smaller than $0.3\,\rm{\%}$ which is negligible compared to the influence of the ground permittivity \cite{PHD}.
\section{Measurement of the LPDA Vector Effective Length}
In this section, the reproducibility and the combination of all measurements performed on different days and under different environmental conditions are discussed.
Furthermore, the combined results of the LPDA VEL are compared to the values obtained from the NEC-2 simulation.
\subsection{Horizontal Vector Effective Length}
Here, the results of the measurements of the horizontal VEL $|H_{\phi}|$ are presented. In total, five independent measurements were performed to determine $|H_{\phi}|$ as a function of the zenith angle $\Theta$.
The horizontal VEL $|H_{\phi}|$ in zenith angle intervals of $5\rm{^{\circ}}$ for three different measurements at $35\,\rm{MHz}$, $55\,\rm{MHz}$ and $75\,\rm{MHz}$ is shown on the left side of Fig.~\ref{fig:HorVEL}.
The constant systematic uncertainties of each flight are denoted by the light colored band and the flight dependent systematic uncertainties are indicated by the dark colored band.
Compared to the average $\overline{\mathrm{VEL}}$ of the five measurements, the median value of the ratio $\sigma/\overline{\mathrm{VEL}}$ is $6\,\rm{\%}$, which is well compatible with the estimated uncertainties presented
in Tab.~\ref{tab:Uncertainties}. On the right side of Fig.~\ref{fig:HorVEL} all measurements performed to determine $|H_{\phi}|$ are combined in zenith angle intervals
of $5\rm{^{\circ}}$, weighted by the quadratic sum of the systematic and the statistical uncertainties of each flight. The gray band describes the constant systematic uncertainties whereas the
statistical and flight-dependent systematic uncertainties are combined within the error bars.
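One natural reading of this combination, an inverse-variance weighted mean in which each flight's variance is the quadratic sum of its statistical and flight-dependent systematic uncertainties, can be sketched as follows (illustrative only, not the actual analysis code):

```python
import numpy as np

def combine_flights(vel, sigma_stat, sigma_sys):
    """Inverse-variance combination of per-flight VEL values in one
    zenith-angle bin; each flight's variance is the quadratic sum of its
    statistical and flight-dependent systematic uncertainties."""
    vel = np.asarray(vel, dtype=float)
    var = np.asarray(sigma_stat, dtype=float)**2 + np.asarray(sigma_sys, dtype=float)**2
    w = 1.0 / var                     # inverse-variance weights
    mean = np.sum(w * vel) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))   # uncertainty of the weighted mean
    return mean, err
```

For identical flights the combined uncertainty shrinks with the square root of the number of measurements, consistent with the improvement seen when several flights are combined.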
\begin{figure}
\begin{center}
{\includegraphics[scale=0.29]{Plots/Kapitel6/horizontalVEL/Reproduzierbarkeit/Vergleich35.pdf}}{\includegraphics[scale=0.29]{Plots/Kapitel6/horizontalVEL/Kombination/Vergleich35.pdf}}\\
\vspace{0.1cm}
{\includegraphics[scale=0.29]{Plots/Kapitel6/horizontalVEL/Reproduzierbarkeit/Vergleich55.pdf}}{\includegraphics[scale=0.29]{Plots/Kapitel6/horizontalVEL/Kombination/Vergleich55.pdf}}\\
\vspace{0.1cm}
{\includegraphics[scale=0.29]{Plots/Kapitel6/horizontalVEL/Reproduzierbarkeit/Vergleich75.pdf}}{\includegraphics[scale=0.29]{Plots/Kapitel6/horizontalVEL/Kombination/Vergleich75.pdf}}
\caption{\it \textbf{(left)} Mean horizontal VEL $|H_{\phi}|$ (dots) and standard deviation (error bars) of three different measurements and \textbf{(right)} the overall combinations in comparison
to the simulation (green curve) as a function of the zenith angle in $5\rm{^{\circ}}$ bins at \textbf{(from top to bottom)} $35\,\rm{MHz}$, $55\,\rm{MHz}$ and $75\,\rm{MHz}$.
The colored bands in the left diagrams describe the constant (light color) and flight-dependent (dark color) systematic uncertainties of each flight.
The gray band in the right diagrams describes the constant systematic uncertainties whereas the statistical and flight-dependent systematic uncertainties
are combined within the error bars.}
\label{fig:HorVEL}
\end{center}
\end{figure}
The constant systematic uncertainty of the combined horizontal VEL is $6.3\,\rm{\%}$ and the uncertainties considering flight dependent systematic and statistical uncertainties for the combined horizontal VEL
result in $4.7\,\rm{\%}$ at a zenith angle of $(42.5\pm2.5)\rm{^{\circ}}$ and a frequency of $55\,\rm{MHz}$. The overall uncertainty of the determined LPDA VEL in the horizontal polarization adds quadratically
to $7.9\,\rm{\%}$. The overall uncertainties for all other arrival directions and frequencies are shown on the left side of Fig.~\ref{fig:HorVELError}.
On the right side of Fig.~\ref{fig:HorVELError} a histogram of all overall uncertainties for all frequencies and all zenith angles up to $85\rm{^{\circ}}$ is shown.
For larger zenith angles the LPDA loses sensitivity and the systematic uncertainty exceeds $20\,\rm{\%}$. Therefore, angles beyond $85\rm{^{\circ}}$ are not considered in the following discussion.
Taking all frequency and zenith angle intervals with equal weight, the
median overall uncertainty including statistical and systematic uncertainties is $7.4^{+0.9}_{-0.3}\,\rm{\%}$.
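The quadratic addition of independent uncertainty components used here can be reproduced directly; the checks below mirror the horizontal ($6.3\,\rm{\%}$ and $4.7\,\rm{\%}$ giving $7.9\,\rm{\%}$) and meridional ($6.3\,\rm{\%}$ and $11.2\,\rm{\%}$ giving $12.9\,\rm{\%}$) numbers quoted in the text:

```python
import math

def quadrature(*components):
    """Quadratic sum of independent uncertainty components (in percent)."""
    return math.sqrt(sum(c * c for c in components))

# horizontal VEL: constant systematic 6.3% combined with 4.7%
assert round(quadrature(6.3, 4.7), 1) == 7.9
# meridional VEL: constant systematic 6.3% combined with 11.2%
assert round(quadrature(6.3, 11.2), 1) == 12.9
```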
\begin{figure}
\begin{center}
{\includegraphics[scale=0.3]{Plots/Kapitel6/horizontalVEL/Korrekturfaktor/3D-Uncertainty.pdf}}{\includegraphics[scale=0.3]{Plots/Kapitel6/horizontalVEL/Korrekturfaktor/HistError.pdf}}\\
\caption{\it \textbf{(left)} Overall uncertainty of the horizontal VEL $|H_{\phi}|$ including statistical and systematic uncertainties for all frequencies as a function of the zenith angle $\Theta$ up to
$85\rm{^{\circ}}$ in $5\rm{^{\circ}}$ bins. \textbf{(right)} Histogram of all overall uncertainties for all frequencies and all zenith angle bins previously mentioned. The median (average value $\mu$) is marked
as red dashed line (red line).}
\label{fig:HorVELError}
\end{center}
\end{figure}
The green curve in Fig.~\ref{fig:HorVEL} marks the simulation of $|H_{\phi}|$. The agreement between the combined measurements and the simulation of $|H_{\phi}|$ is illustrated in the diagram of their ratio versus
zenith angle $\Theta$ and frequency $f$ in the upper left panel of Fig.~\ref{fig:HorCorrection}.
In the upper right panel of Fig.~\ref{fig:HorCorrection} all ratios are filled into a histogram with entries weighted by $\sin(\Theta)$
in consideration of the decrease in field of view at small zenith angles. The combined measurement and the simulation agree to within $1\,\rm{\%}$ in the median.
The fluctuation described by the $68\,\rm{\%}$ quantile
is at the level of $12\,\rm{\%}$. The two lower panels of Fig.~\ref{fig:HorCorrection} show the median ratio as a function of the frequency (left) and as a function
of the zenith angle (right). In both cases, the red error bars mark the $68\,\rm{\%}$ quantile of the distributions.
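A $\sin(\Theta)$-weighted median as used for these ratio distributions can be sketched as below (a minimal illustration; the weighting accounts for the smaller solid angle covered by small zenith angles):

```python
import numpy as np

def weighted_median(values, weights):
    """Median of `values` under `weights`; with w = sin(theta) this accounts
    for the decrease in field of view at small zenith angles."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cdf = np.cumsum(w) / np.sum(w)           # normalized cumulative weights
    return v[np.searchsorted(cdf, 0.5)]      # first value past 50% of weight
```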
\begin{figure}
\begin{center}
{\includegraphics[scale=0.29]{Plots/Kapitel6/horizontalVEL/Korrekturfaktor/3D-Faktor.pdf}}
\vspace{0.1cm}
{\includegraphics[scale=0.29]{Plots/Kapitel6/horizontalVEL/Korrekturfaktor/HistRatio-WeightedMeasSim.pdf}}
\vspace{0.1cm}
{\includegraphics[scale=0.29]{Plots/Kapitel6/horizontalVEL/Korrekturfaktor/Faktor_Freq.pdf}}
\vspace{0.1cm}
{\includegraphics[scale=0.29]{Plots/Kapitel6/horizontalVEL/Korrekturfaktor/Faktor_Zenith.pdf}}
\caption{\it Comparison of the combined horizontal VEL $|H_{\phi}|$ with the simulation. \textbf{(top left)} Ratio of the combination of all measurements and simulation for all frequencies as a function of
the zenith angle $\Theta$ up to $84\rm{^{\circ}}$ in $3\rm{^{\circ}}$ bins.
\textbf{(top right)} Histogram of all ratios of the combination of all measurements and simulation for all frequencies and all zenith angle bins previously mentioned weighted with $\sin(\Theta)$.
The median value is marked as the red dashed line.
\textbf{(bottom left)} Median (red dots) and the $68\,\rm{\%}$ quantile (red error bars) of the zenith angle weighted ratio distribution
as a function of the frequency. \textbf{(bottom right)} Median (red dots) and the $68\,\rm{\%}$ quantile (red error bars) of the ratio distribution
as a function of $\Theta$. The gray band indicates the constant systematic uncertainty of the measurement and the red dashed lines mark the overall zenith angle weighted average in both
lower diagrams.}
\label{fig:HorCorrection}
\end{center}
\end{figure}
\subsection{Meridional Vector Effective Length}
In this subsection, the results of the meridional VEL $|H_{\theta}|$ are discussed. For both subcomponents $|H_{\theta,\mathrm{hor}}|$ and $|H_{\theta,\mathrm{vert}}|$ three independent measurements were taken and averaged.
The averaged components are combined to determine $|H_{\theta}|$ as a function
of the zenith angle $\Theta$ using Eq.~\eqref{eq:HTheta}. In Fig.~\ref{fig:VerVEL} all measurements of $|H_{\theta}|$ are combined in zenith angle intervals of $5\rm{^{\circ}}$, weighted by the quadratic sum of the
systematic and the statistical uncertainties of each flight. The gray band describes the constant systematic uncertainties whereas
the statistical and flight-dependent systematic uncertainties are combined within the red error bars.
\begin{figure}
\begin{center}
{\includegraphics[scale=0.29]{Plots/Kapitel6/verticalVEL/Kombination/Vergleich35.pdf}}\\
{\includegraphics[scale=0.29]{Plots/Kapitel6/verticalVEL/Kombination/Vergleich55.pdf}}\\
{\includegraphics[scale=0.29]{Plots/Kapitel6/verticalVEL/Kombination/Vergleich75.pdf}}
\caption{\it Combination of all measurements of the meridional VEL $|H_{\theta}|$ (red dots) as a function of the zenith angle $\Theta$ in comparison to the simulation (green curve)
for three different frequencies \textbf{(from top to bottom)} $35\,\rm{MHz}$, $55\,\rm{MHz}$ and $75\,\rm{MHz}$.
The gray band describes the constant systematic uncertainties whereas the statistical and flight-dependent systematic uncertainties are combined within the error bars.}
\label{fig:VerVEL}
\end{center}
\end{figure}
The constant systematic uncertainty of the combined VEL is $6.3\,\rm{\%}$. The uncertainties considering flight
dependent systematic and statistical uncertainties of the combined VEL result in $11.2\,\rm{\%}$ at a zenith angle of $(42.5\pm2.5)\rm{^{\circ}}$ and a frequency of $55\,\rm{MHz}$.
The overall uncertainty of the determined LPDA VEL in the meridional polarization adds quadratically to $12.9\,\rm{\%}$.
The overall uncertainties for all other arrival directions and frequencies are shown on the left side of Fig.~\ref{fig:VerVELError}. On the right side of Fig.~\ref{fig:VerVELError}, a histogram of
all overall uncertainties for all frequencies and all zenith angles up to $65\rm{^{\circ}}$ is shown.
For larger zenith angles the LPDA loses sensitivity and the systematic uncertainty exceeds $20\,\rm{\%}$. Therefore, these angles are not considered in the following discussion.
Taking all frequency and zenith angle intervals with equal weight, the
median overall uncertainty including statistical and systematic uncertainties is $10.3^{+2.8}_{-1.7}\,\rm{\%}$.
This is larger than the uncertainty of the horizontal component $|H_{\phi}|$ for three reasons. First, the meridional
component $|H_{\theta}|$ is a combination of the two measurements $|H_{\theta,\mathrm{hor}}|$ and $|H_{\theta,\mathrm{vert}}|$, whereas $|H_{\phi}|$ is measured directly. Second, the number of measurements is smaller than in
the case of $|H_{\phi}|$. Third, the horizontal component is corrected for the octocopter misplacement and misalignment, whereas for the meridional subcomponents this effect is included in the systematic
uncertainties.
\begin{figure}
\begin{center}
{\includegraphics[scale=0.3]{Plots/Kapitel6/verticalVEL/Korrekturfaktor/3D-Uncertainty.pdf}}{\includegraphics[scale=0.3]{Plots/Kapitel6/verticalVEL/Korrekturfaktor/HistError.pdf}}\\
\caption{\it \textbf{(left)} Overall uncertainty of the meridional VEL
$|H_{\theta}|$ including statistical and systematic uncertainties for all
frequencies as a function of the zenith angle $\Theta$ up to $65\rm{^{\circ}}$
in $5\rm{^{\circ}}$ bins. \textbf{(right)} Histogram of all overall
uncertainties for all frequencies and all $\Theta$ up to $65\rm{^{\circ}}$. The
median (average value $\mu$) is marked
as red dashed line (red line).}
\label{fig:VerVELError}
\end{center}
\end{figure}
The green curve in Fig.~\ref{fig:VerVEL} indicates the simulation of $|H_{\theta}|$. The agreement between the combination of all measurements and the simulations of $|H_{\theta}|$ is illustrated
by the diagram of their ratio versus zenith angle $\Theta$ and frequency $f$ shown in the upper left panel of Fig.~\ref{fig:VerCorrection}.
In the upper right panel all ratios for all zenith angles and frequencies are filled into a histogram with entries weighted by $\sin(\Theta)$ in consideration of the
decrease in field of view at small zenith angles. The combined measurement and the simulation agree to within $5\,\rm{\%}$ in the median. The fluctuation described by the $68\,\rm{\%}$ quantile
is at the level of $26\,\rm{\%}$. The two lower panels of Fig.~\ref{fig:VerCorrection} show the median ratio as a function of the
frequency (left) and as a function of the zenith angle (right). In both cases, the red error bars mark the $68\,\rm{\%}$ quantile of the distributions.
\begin{figure}
\begin{center}
{\includegraphics[scale=0.29]{Plots/Kapitel6/verticalVEL/Korrekturfaktor/3D-Faktor.pdf}}
\vspace{0.1cm}
{\includegraphics[scale=0.29]{Plots/Kapitel6/verticalVEL/Korrekturfaktor/HistRatio-WeightedMeasSim.pdf}}
\vspace{0.1cm}
{\includegraphics[scale=0.29]{Plots/Kapitel6/verticalVEL/Korrekturfaktor/Faktor_Freq.pdf}}
\vspace{0.1cm}
{\includegraphics[scale=0.29]{Plots/Kapitel6/verticalVEL/Korrekturfaktor/Faktor_Zenith.pdf}}
\caption{\it Comparison of the combined meridional VEL $|H_{\theta}|$ with the simulation. \textbf{(top left)} Ratio of combination of all measurements and simulation
for all frequencies as a function of the zenith angle $\Theta$ up to $63\rm{^{\circ}}$ in $3\rm{^{\circ}}$ bins.
\textbf{(top right)} Histogram of all ratios of the combination of all measurements and simulation for all frequencies and all zenith angle bins previously mentioned weighted with $\sin(\Theta)$.
The median value is marked as the red dashed line.
\textbf{(bottom left)} Median (red dots) and the $68\,\rm{\%}$ quantile (red error bars) of the zenith angle weighted correction factor distribution
as a function of the frequency. \textbf{(bottom right)} Median (red dots) and the $68\,\rm{\%}$ quantile (red error bars) of the ratio distribution
as a function of $\Theta$. The gray band indicates the constant systematic uncertainty of the measurement and the red dashed lines mark the overall zenith angle weighted average in both lower diagrams.}
\label{fig:VerCorrection}
\end{center}
\end{figure}
\subsection{Interpolation to all Arrival Directions and Frequencies}
The horizontal and meridional VEL magnitudes are measured along the LPDA axis with the
highest sensitivity to the respective VEL component (see section
\ref{sec:Calib}) and in frequency bins of $5\,\rm{MHz}$.
To achieve an overall LPDA calibration for all incoming directions and frequencies the measurement is combined with simulations.
The LPDA VEL pattern is simulated using the NEC-2 simulation code. In contrast
to the previous simulations presented in this work, only the LPDA with the
following amplifier stage but without the transmitting antenna is taken into
account. This original simulation of the LPDA pattern is then combined with the
results from the calibration campaign. The calibrated LPDA VEL pattern is
obtained by multiplying the full pattern of the simulated VEL with the ratio of
measured to simulated VEL magnitudes shown in
Figs.~\ref{fig:HorCorrection}~and~\ref{fig:VerCorrection}.
The ratios are linearly interpolated between the measurements at each zenith angle and each frequency bin.
The respective frequency and zenith angle dependent ratios are applied at all azimuth angles. With this method, the magnitude of the VEL
is normalized to the calibration measurements, while the VEL phase comes entirely from the original simulation.
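A minimal sketch of this interpolation scheme, assuming the measured-to-simulated ratios are tabulated on a regular zenith-angle $\times$ frequency grid (all names are illustrative, not the actual analysis code):

```python
import numpy as np

def calibrated_vel(H_sim, ratio_grid, zen_grid, freq_grid, zen, freq):
    """Scale the simulated VEL magnitude by the measured/simulated ratio,
    linearly interpolated in zenith angle and frequency; the same factor is
    applied at every azimuth and the simulated phase is kept unchanged."""
    # pick the zenith bin below the requested angle
    iz = np.searchsorted(zen_grid, zen) - 1
    iz = np.clip(iz, 0, len(zen_grid) - 2)
    # interpolate in frequency at the two neighbouring zenith grid points
    r_lo = np.interp(freq, freq_grid, ratio_grid[iz])
    r_hi = np.interp(freq, freq_grid, ratio_grid[iz + 1])
    # then interpolate linearly in zenith angle
    t = (zen - zen_grid[iz]) / (zen_grid[iz + 1] - zen_grid[iz])
    ratio = (1 - t) * r_lo + t * r_hi
    return H_sim * ratio
```

With all ratios equal to one the simulated pattern is returned unchanged, which is the expected limit when measurement and simulation agree perfectly.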
\section{Influence on Cosmic-Ray Signal Reconstruction}
In this section the influence of the modified LPDA pattern on the cosmic-ray signal reconstruction is discussed.
In the first part of this section, the influence of the differences between simulated and measured VEL on the electric field as well as on the radiation energy
is presented for one event with a specific arrival direction. In the second part, the influence of the uncertainty of both VEL components on the electric field is discussed.
\subsection{Influence of Modified Pattern on one Example Event}
To reconstruct the electric field of a measured air shower induced radio signal the Auger software framework
{\mbox{$\overline{\textrm{Off}}$\hspace{.05em}\protect\raisebox{.4ex}{$\protect\underline{\textrm{line}}$}}} \cite{Offline} is used. To show the influence of the improved VEL,
an air shower measured in $9$ stations at AERA with a zenith angle of $30\rm{^{\circ}}$ and an azimuth angle of $14\rm{^{\circ}}$ south of east is presented as an example. The energy of the primary cosmic ray
is reconstructed to $1.1\times10^{18}\,\rm{eV}$ using information from the surface detector.
In Fig.~\ref{fig:EFieldTrace} the electric field reconstructed at the station with the highest signal-to-noise ratio (SNR) is shown twice: once using the simulated antenna response and once including the corrections obtained from the measurements of the VEL magnitude
in both components. For clarity, only one polarization component of the electric field is shown.
\begin{figure}
\begin{center}
{\includegraphics[scale=0.55]{Plots/Kapitel7/EFieldTrace0.pdf}}
\caption{\it \textbf{(top)} Reconstructed electric-field trace at the station with highest SNR in the east-west polarization of a signal measured at AERA with a zenith angle of $30\rm{^{\circ}}$ and
an azimuth angle of $14\rm{^{\circ}}$
south of east using the simulated LPDA pattern (blue line) and using the modified pattern considering the correction factors between measurement and simulation (green line). The residual between both
reconstructed traces as function of the time is shown in the \textbf{(lower)} diagram. The measured energy fluence in the east-west polarization changes from $100\frac{\rm{eV}}{\rm{m^{2}}}$ to
$112\frac{\rm{eV}}{\rm{m^{2}}}$. }
\label{fig:EFieldTrace}
\end{center}
\end{figure}
The general shape of the electric-field trace is the same for both reconstructions. The trace of the modified LPDA pattern exhibits an up to $7\,\rm{\%}$ larger amplitude.
The measured energy fluence that scales with the amplitude squared in the east-west polarization at this station with highest SNR changes from $100\frac{\rm{eV}}{\rm{m^{2}}}$ to $112\frac{\rm{eV}}{\rm{m^{2}}}$.
The total energy fluence of all polarizations changes from $141\frac{\rm{eV}}{\rm{m^{2}}}$ using the simulated antenna response pattern to $156\frac{\rm{eV}}{\rm{m^{2}}}$ using the modified antenna
response pattern which is an effect at the level of $9\,\rm{\%}$. The reconstructed radiation energy of the full event changes from $7.96\,\rm{MeV}$ to $8.54\,\rm{MeV}$. The ratio of these radiation energies is $0.93$.
\subsection{Uncertainty of the Cosmic-Ray Signal Reconstruction}
\label{subsec:UncerCR}
In this subsection the systematic uncertainty of the cosmic-ray signal reconstruction that results from the overall uncertainty of the antenna VEL magnitude and from the uncertainty due to different ground permittivities is determined.
In the first case, the VEL magnitude is shifted up and down by one standard deviation of the overall uncertainty. The VEL phase remains unchanged.
In the case of the uncertainty due to different ground permittivities the antenna pattern with a ground permittivity of $2$ and of $10$ are used (see Fig.~\ref{fig:NEC2}).
The respective VEL is denoted as $H^\mathrm{down}$ and $H^\mathrm{up}$.
The antenna response is applied to a simulated electric-field pulse using once $H^\mathrm{up}$ and once $H^\mathrm{down}$, to obtain the corresponding
voltage traces $\mathcal{U}^\mathrm{up}$ and $\mathcal{U}^\mathrm{down}$ according to Eq.~\eqref{eq:AntResponse}. Then, the original VEL is used to reconstruct the electric-field pulse once from
$\mathcal{U}^\mathrm{up}$ and once from $\mathcal{U}^\mathrm{down}$. From the difference of the two resulting electric-field pulses, the systematic uncertainty of the amplitude or the energy fluence can be determined.
Both uncertainties resulting from the antenna VEL magnitude uncertainty and resulting from different ground permittivities, are then combined quadratically. \\
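The fold/unfold procedure can be illustrated for a single polarization with scalar frequency-domain quantities. This is a deliberately simplified sketch: the actual reconstruction involves both VEL components via Eq.~\eqref{eq:AntResponse} and the full software framework.

```python
import numpy as np

def fluence_shift(E_f, H_nom, H_shift):
    """Fold a frequency-domain electric-field pulse with a shifted VEL,
    unfold it with the nominal VEL, and return the relative change of the
    energy fluence (proportional to the sum of |E|^2 over frequency bins)."""
    U = H_shift * E_f          # voltage spectrum seen with the shifted VEL
    E_rec = U / H_nom          # reconstruction assumes the nominal VEL
    return np.sum(np.abs(E_rec)**2) / np.sum(np.abs(E_f)**2) - 1.0
```

In this scalar toy a uniform $+10\,\rm{\%}$ shift of the VEL magnitude biases the reconstructed energy fluence by $+21\,\rm{\%}$, reflecting the quadratic scaling of the fluence with the amplitude.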
An additional uncertainty on the electric-field trace can arise due to an
uncertainty on the VEL phase.
An uncertainty in the VEL phase leads to a distortion of the radio pulse,
resulting in an increased signal pulse width and a smaller electric-field amplitude, or vice versa. However, the energy fluence of the radio pulse, which is given by
the integral over the electric-field trace, remains constant.
Hence, a VEL phase uncertainty propagates to an additional uncertainty in the electric-field amplitude, whereas the energy fluence is unaffected.
Therefore, the systematic uncertainty of the energy fluence due to the VEL uncertainty is discussed in the following.\\
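The invariance of the energy fluence under a pure VEL phase error follows from Parseval's theorem: the time integral of $|E|^{2}$ equals the sum over the power spectrum, which a phase-only factor of unit magnitude leaves untouched. A toy numerical check (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
E_f = rng.normal(size=16) + 1j * rng.normal(size=16)        # toy spectrum
dphi = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=16))  # pure phase error
fluence_nom = np.sum(np.abs(E_f)**2)
fluence_dist = np.sum(np.abs(dphi * E_f)**2)                # |e^{i phi}| = 1
assert abs(fluence_dist - fluence_nom) < 1e-9 * fluence_nom
```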
The radio pulse is approximated with a bandpass-limited Dirac pulse and the polarization is adjusted according to the dominant geomagnetic emission process.
As the uncertainty of the VEL and the polarization of the electric-field pulse depend on the incoming signal direction, different directions in bins of $10\rm{^{\circ}}$
in the azimuth angle and in bins of $5\rm{^{\circ}}$ in the zenith angle are simulated. Due to the changing polarization also the relative influences of the $|H_{\phi}|$ and $|H_{\theta}|$ components change with direction.
The resulting systematic uncertainty of the energy fluence is presented in Fig.~\ref{fig:EFieldUncertainty}. The square root of the energy fluence is shown because the energy fluence scales quadratically with the
electric-field amplitude and the cosmic-ray energy. Hence, the systematic uncertainty of the square root of the energy fluence is the relevant uncertainty in most analyses.
\begin{figure}
\begin{center}
{\includegraphics[scale=0.3]{Plots/Kapitel7/uncertainties_energy_fluence.pdf}}{\includegraphics[scale=0.3]{Plots/Kapitel7/uncertainties_energy_fluence_histo.pdf}}
\caption{\it \textbf{(left)} Systematic uncertainty of the square root of the energy fluence for all arrival directions taking into account a signal polarization due to the dominant geomagnetic emission process.
The square root of the energy fluence is shown because the energy fluence scales quadratically with the electric-field amplitude and the cosmic-ray energy. Hence, the uncertainties of the square root
of the energy fluence is the relevant uncertainty in most analyses. \textbf{(right)} Histogram of the systematic uncertainty of the square root of the energy fluence of signals with zenith angles smaller
than $80\rm{^{\circ}}$ (blue) and of signals with zenith angles smaller than
$60\rm{^{\circ}}$ (green).}
\label{fig:EFieldUncertainty}
\end{center}
\end{figure}
For most regions the systematic uncertainty is at the level of $10\,\rm{\%}$. The uncertainty increases only at large zenith angles ($\theta > 60\rm{^{\circ}}$) due to the increased uncertainty of $|H_{\theta}|$.
An azimuthal pattern appears at $90\rm{^{\circ}}$ and $270\rm{^{\circ}}$. At these azimuth angles the uncertainty is smaller because the electric-field pulse
is polarized in the $\vec{e}_{\phi}$ component and only the more precise $|H_{\phi}|$ component contributes.
For incoming signal directions with zenith angles smaller than $60\rm{^{\circ}}$ the systematic uncertainty of the square root of the energy fluence owing to the antenna calibration and
different ground permittivities is at most $14.2\,\rm{\%}$ with a median of $8.8^{+2.1}_{-1.3}\,\rm{\%}$.
\section{Conclusion}
In this work, the results of an absolute antenna calibration performed
on a radio station equipped with a logarithmic periodic dipole
antenna (LPDA) are presented. The station belongs to the AERA array of radio stations
at the site of the Pierre Auger Observatory. The calibrated LPDA is representative
of all the LPDAs, which are mechanically and
electrically identical at the percent level.\\
The radio stations are used to reconstruct the electric field
emitted by air showers induced by cosmic particles, which gives, e.g., a
precise measure of the energy contained in the electromagnetic shower.
The accuracy of the reconstructed shower energy is limited by the
uncertainty in the absolute antenna calibration, such that a reduction of
these uncertainties was highly desirable.\\
The frequency- and direction-dependent sensitivity of the LPDA
has been probed by an octocopter carrying a calibrated radio source emitting
radio signals of well-defined polarization. The measured LPDA
response has been quantified using the formalism of the vector effective
length and decomposed into a horizontal and a meridional component.\\
All experimental components involved in the calibration campaign were
quantified with respect to their uncertainties. Special emphasis was put
on the precision in the position reconstruction of the source which was
supported by a newly developed optical system with two cameras used in
conjunction with on-board measurements of inclination, GPS, and
barometric height. To ensure reproducible results, all calibration
measurements were repeated by several flights on different days under
different environmental conditions.\\
The combination of all measurements gives an overall accuracy for the
horizontal component of the vector effective length of $7.4^{+0.9}_{-0.3}\,\rm{\%}$, and
for the meridional component of $10.3^{+2.8}_{-1.7}\,\rm{\%}$. Note that
for air showers with zenith angles below $60\rm{^{\circ}}$
the horizontal component gives the dominant contribution.
The obtained accuracy is to be compared with a previous balloon-based
measurement probing a smaller phase space of the horizontal component
with a systematic uncertainty of $12.5\,\rm{\%}$.\\
The measurements of the new calibration campaign enable
thorough comparisons with simulations of the calibration setup including
ground condition dependencies using the NEC-2 program. The
measurements were used to correct the simulated pattern at multiple
points in the phase space described by arrival direction, frequency and
polarization of the waves. While the median of all correction factors
is close to unity at standard ground conditions, the corrections of the
simulated vector effective length
vary with an RMS of $12\,\rm{\%}$ for the horizontal component and with an
RMS of $26\,\rm{\%}$ for the meridional component.
The simulations have been further used to confirm that the measurements
have been done in the far-field region. Additionally, the LPDA
sensitivity to different ground conditions has been investigated, showing
that the LPDA is insensitive to different ground conductivities and that the
sensitivity to different permittivities is only at the level of $1\,\rm{\%}$.\\
The effect of the correction factors on the simulated vector
effective length has been demonstrated in the reconstruction of
one example radio event measured with AERA.\\
Finally, the uncertainties of the two VEL components are propagated onto the square root of the energy fluence that is
obtained by unfolding the antenna response from the measured voltage traces.
For incoming directions up to $60\rm{^{\circ}}$, the expected systematic
uncertainty in the square root of the energy fluence due to the LPDA
calibration is $8.8^{+2.1}_{-1.3}\,\rm{\%}$ in the median.
\acknowledgments
\begin{sloppypar}
The successful installation, commissioning, and operation of the Pierre Auger Observatory would not have been possible without the strong commitment and effort from the technical and administrative staff in Malarg\"ue. We are very grateful to the following agencies and organizations for financial support:
\end{sloppypar}
\begin{sloppypar}
Argentina -- Comisi\'on Nacional de Energ\'\i{}a At\'omica; Agencia Nacional de Promoci\'on Cient\'\i{}fica y Tecnol\'ogica (ANPCyT); Consejo Nacional de Investigaciones Cient\'\i{}ficas y T\'ecnicas (CONICET); Gobierno de la Provincia de Mendoza; Municipalidad de Malarg\"ue; NDM Holdings and Valle Las Le\~nas; in gratitude for their continuing cooperation over land access; Australia -- the Australian Research Council; Brazil -- Conselho Nacional de Desenvolvimento Cient\'\i{}fico e Tecnol\'ogico (CNPq); Financiadora de Estudos e Projetos (FINEP); Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de Rio de Janeiro (FAPERJ); S\~ao Paulo Research Foundation (FAPESP) Grants No.\ 2010/07359-6 and No.\ 1999/05404-3; Minist\'erio de Ci\^encia e Tecnologia (MCT); Czech Republic -- Grant No.\ MSMT CR LG15014, LO1305 and LM2015038 and the Czech Science Foundation Grant No.\ 14-17501S; France -- Centre de Calcul IN2P3/CNRS; Centre National de la Recherche Scientifique (CNRS); Conseil R\'egional Ile-de-France; D\'epartement Physique Nucl\'eaire et Corpusculaire (PNC-IN2P3/CNRS); D\'epartement Sciences de l'Univers (SDU-INSU/CNRS); Institut Lagrange de Paris (ILP) Grant No.\ LABEX ANR-10-LABX-63 within the Investissements d'Avenir Programme Grant No.\ ANR-11-IDEX-0004-02; Germany -- Bundesministerium f\"ur Bildung und Forschung (BMBF); Deutsche Forschungsgemeinschaft (DFG); Finanzministerium Baden-W\"urttemberg; Helmholtz Alliance for Astroparticle Physics (HAP); Helmholtz-Gemeinschaft Deutscher Forschungszentren (HGF); Ministerium f\"ur Innovation, Wissenschaft und Forschung des Landes Nordrhein-Westfalen; Ministerium f\"ur Wissenschaft, Forschung und Kunst des Landes Baden-W\"urttemberg; Italy -- Istituto Nazionale di Fisica Nucleare (INFN); Istituto Nazionale di Astrofisica (INAF); Ministero dell'Istruzione, dell'Universit\'a e della Ricerca (MIUR); CETEMPS Center of Excellence; Ministero degli Affari Esteri (MAE); Mexico -- Consejo Nacional de Ciencia y Tecnolog\'\i{}a 
(CONACYT) No.\ 167733; Universidad Nacional Aut\'onoma de M\'exico (UNAM); PAPIIT DGAPA-UNAM; The Netherlands -- Ministerie van Onderwijs, Cultuur en Wetenschap; Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO); Stichting voor Fundamenteel Onderzoek der Materie (FOM); Poland -- National Centre for Research and Development, Grants No.\ ERA-NET-ASPERA/01/11 and No.\ ERA-NET-ASPERA/02/11; National Science Centre, Grants No.\ 2013/08/M/ST9/00322, No.\ 2013/08/M/ST9/00728 and No.\ HARMONIA 5 -- 2013/10/M/ST9/00062; Portugal -- Portuguese national funds and FEDER funds within Programa Operacional Factores de Competitividade through Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (COMPETE); Romania -- Romanian Authority for Scientific Research ANCS; CNDI-UEFISCDI partnership projects Grants No.\ 20/2012 and No.194/2012 and PN 16 42 01 02; Slovenia -- Slovenian Research Agency; Spain -- Comunidad de Madrid; Fondo Europeo de Desarrollo Regional (FEDER) funds; Ministerio de Econom\'\i{}a y Competitividad; Xunta de Galicia; European Community 7th Framework Program Grant No.\ FP7-PEOPLE-2012-IEF-328826; USA -- Department of Energy, Contracts No.\ DE-AC02-07CH11359, No.\ DE-FR02-04ER41300, No.\ DE-FG02-99ER41107 and No.\ DE-SC0011689; National Science Foundation, Grant No.\ 0450696; The Grainger Foundation; Marie Curie-IRSES/EPLANET; European Particle Physics Latin American Network; European Union 7th Framework Program, Grant No.\ PIRSES-2009-GA-246806; European Union's Horizon 2020 research and innovation programme (Grant No.\ 646623); and UNESCO.
\end{sloppypar}
\section{Introduction}
\new{Commercially available microfluidic devices generally consist of closed
channels. These have the advantage of preventing evaporation and of
allowing fluids to be pumped by applying a pressure difference between inflow and
outflow boundaries. However, keeping the channels clean is a serious problem
due to clogging. Open microfluidic systems are an alternative route, where
droplets and rivulets are confined to chemically patterned substrates
containing wetting surface domains on a less wetting
substrate~\cite{DOERFLER,DOERFLER-24,DOERFLER-25,DOERFLER-26,DOERFLER-27,DOERFLER-28}. While flow in chemical channels cannot be induced by pressure differences, capillary forces due to wettability gradients~\cite{DOERFLER-29} or electrowetting~\cite{DOERFLER-33} are a possible alternative. Another practical option is to use shear forces induced by a covering immiscible fluid~\cite{DOERFLER-41}. An additional advantage of this approach is that it prevents evaporation and contamination, e.g., by dirt or dust.}
\new{In microfluidic systems, and in open systems in particular,
the separation and coalescence of micro- and nanodroplets is a common
phenomenon. It occurs not only in microfluidics, however, but also in other
situations such as cloud formation, film formation~\cite{TenNij00}
or inkjet printing~\cite{KarRie13,LiuMeiWan14,ThoTipJue14}. }
While sessile droplets with a contact angle of \SI{90}{\degree} are
comparable to freely suspended droplets, sessile droplets of different
contact angles are omnipresent and can have a significantly altered
behaviour, as we show in this work.
Many different
phenomena are found in this process, like jumping of the coalescing
droplets~\cite{LiuGhiFen14}, a transition from coalescence to
noncoalescence~\cite{KarHanFerRie14,KarRie13}, variations in the meniscus
shape~\cite{BilKin05} and resulting behavior~\cite{HerLubEdd12},
mixing~\cite{SunMac95,ZhaObeSwa15} or the growth rate dependence of the
meniscus on the contact angle~\cite{SuiMagSpe13,GroSteRaa13}. Furthermore, a
large interest in droplets and bubbles submerged in a liquid
exists~\cite{LohZha15,BerPosGun84,BhuWanMaa08,BorDamSch07}. Often experiments
on droplets are performed submerged~\cite{BaiWhi88}, in order to reduce the
influence of gravity, to scale diffusion~\cite{SuNee13,DucNee06} or in order to
study altered friction behavior~\cite{Raa04,SchHar11}.
Of particular interest is the initial coalescence dynamics, just after the drops
are brought into contact. The two drops become connected by a small liquid
bridge that rapidly grows in time (Fig.~\ref{fig:Experiment}). This rapid motion
is due to the large curvature that induces a very large (negative) Laplace
pressure, driving the liquid into the bridge between the two drops. This process
has for example been studied extensively for freely suspended drops
\cite{PauBurNag11,PauBurNag211,EggLisSto99,DucEggJos03,AarLekGuo05,SprShi14}.
Here we concentrate on drops on a substrate, which further complicates the
geometry of the coalescence process
\cite{RisCalRoy06,NarBeyPom08,LeeKanYoo12,HerLubEdd12,EddWinSno13-2,SuiMagSpe13,MitMit15}.
An important observation is that the dominant direction of the flow is oriented
from the center of the drops towards the bridge, such that the relevant scaling
laws can be inferred from quasi-two-dimensional arguments
\cite{RisCalRoy06,NarBeyPom08,HerLubEdd12,EddWinSno13-2,SuiMagSpe13}.
Interestingly, for inertia-dominated coalescence it was shown that droplets of a
\SI{90}{\degree} contact angle behave differently from those of a lower contact
angle. Even small deviations from \SI{90}{\degree} lead to a faster growth of
the bridge height in time, which can be described in terms of scaling laws. In
particular, the inertial coalescence changes from $t^\frac{1}{2}$ for
\SI{90}{\degree}, to a $t^\frac{2}{3}$ power law for smaller angles
\cite{EddWinSno13-2,SuiMagSpe13}. In both cases the shape of the liquid bridge
exhibits a self-similar dynamics, but with different horizontal and vertical
scales when the contact angle reaches \SI{90}{\degree}.
In this paper we focus on the coalescence of sessile drops immersed in another
liquid, as it is relevant in open microfluidics. This has been addressed in the
viscous regime \cite{MitMit15}, for which both the drops and the surrounding
fluid were highly viscous.
Here we perform lattice Boltzmann simulations in the inertial regime and
define the outer fluid to be of the same density as the coalescing
droplets.
We investigate how the bridge dynamics changes as a function of the
contact angle and, to the best of our knowledge, for the first time investigate
whether self-similar behavior can be identified in the velocity field. A
detailed comparison to experiments~\cite{EddWinSno13-2} of drops in air is
provided, pointing out similarities and differences with respect to immersed
droplets. The paper starts with a description of the lattice Boltzmann method,
and we pay particular attention to the initiation of the coalescence
(Sec.~\ref{sec:method}). The central results are presented in
Sec.~\ref{sec:results} and the paper closes with a discussion in
Sec.~\ref{sec:discussion}.
\begin{figure}[h]\centering
\includegraphics{Experiment-figure0}
\caption{Snapshot of the bridge shape during coalescence from the experiment by
Eddi~et~al.~\cite{EddWinSno13-2}. In this example the droplets have a
contact angle of \SI{90}{\degree}. The bridge height $h_b$, initial droplet
radius $r_0$ and the horizontal scale $w$ are marked in the image.
\label{fig:Experiment}} \end{figure}
\section{Simulation Method}\label{sec:method}
\subsection{The lattice Boltzmann method}
The coalescence of droplets is a quasi-2D problem
\cite{RisCalRoy06,HerLubEdd12,EddWinSno13-2}, so in favor of numerical speed
we choose to perform our simulations in 2D.
To investigate the coalescence of droplets on a substrate~\cite{DOERFLER}, we use the lattice
Boltzmann method (LBM)~\cite{Raa04,Suc01,BenSucSau92} in a D2Q9
configuration~\cite{Sri14,QiaYueSuc95}, that can be described by
\begin{multline}
f_{i}^\alpha(\vec{x}+\vec{c}_i \Delta t,t+\Delta t) - f_{i}^\alpha(\vec{x},t)=\\
-\frac{1}{\tau^\alpha} \left(f_i^\alpha(\vec{x},t) -
f_{i,\text{eq}}^\alpha(\vec{x},t)\right),\label{eq:lbm}
\end{multline}
where $f_i^\alpha(\vec{x},t)$ is a probability distribution function of
particles with mass $m$ of component $\alpha$ at position $\vec{x}$ and time
$t$, following the discretized velocity direction $\vec{c}_i$. The left hand
side of \eqref{eq:lbm} is the streaming step, in which the probability
distribution functions of fluid $\alpha$ are advected to the surrounding
lattice sites. The timestep $\Delta t$, the lattice constant $\Delta x$ and the mass
$m$ of this process are chosen as unity for simplicity. On the right hand
side of \eqref{eq:lbm}, the collision step, these distributions relax towards
an equilibrium distribution
\begin{multline}
f_{i,\text{eq}}^\alpha(\vec{x},t)=\\ w_i \rho^\alpha \left[
1+\frac{\vec{c}_i\cdot \vec{u}^\alpha}{c_s^2} +
\frac{(\vec{c}_i\cdot\vec{u}^\alpha)^2}{2c_s^4}
-\frac{(\vec{u}^{\alpha})^2}{2c_s^2}\right]
\end{multline}
on a timescale determined by the relaxation time $\tau^\alpha$. The relaxation
time is related to the kinematic viscosity by
$\nu^\alpha=\frac{2\tau^\alpha-1}{6}$. For simplicity $\tau^\alpha$ is chosen as unity here.
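As an illustration of how \eqref{eq:lbm} and the equilibrium distribution are evaluated in practice, the following self-contained Python sketch performs one streaming and BGK collision step for a single component on a periodic grid. The standard D2Q9 velocity set, weights and $c_s^2 = 1/3$ are assumed; this is an illustrative sketch, not the production multicomponent code used here.

```python
import numpy as np

# Standard D2Q9 velocity set and weights (assumed ordering).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1/3  # lattice speed of sound squared

def equilibrium(rho, u):
    """Second-order equilibrium distribution f_i^eq on a (ny, nx) grid."""
    cu = np.einsum('id,dyx->iyx', c, u)       # c_i . u at every site
    usq = np.einsum('dyx,dyx->yx', u, u)      # |u|^2 at every site
    return w[:, None, None] * rho * (1 + cu/cs2
                                     + cu**2/(2*cs2**2) - usq/(2*cs2))

def lbm_step(f, tau=1.0):
    """One streaming + BGK collision update of the populations f[i, y, x]."""
    for i in range(9):  # streaming: advect each population along c_i
        f[i] = np.roll(f[i], shift=(c[i, 1], c[i, 0]), axis=(0, 1))
    rho = f.sum(axis=0)                          # density
    u = np.einsum('id,iyx->dyx', c, f) / rho     # velocity
    f -= (f - equilibrium(rho, u)) / tau         # relax toward equilibrium
    return f
```

A multicomponent simulation repeats this step per component, with the interaction forces described below added before the collision.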
Forces can
be added by shifting the equilibrium distribution function and thereby
implicitly adding an acceleration~\cite{ShaChe94}. Multiple components may
coexist on every lattice site. Via forces, these can interact with each other.
Here we follow the method described by Shan and Chen~\cite{ShaChe94}
\begin{equation}
\vec{F}^\alpha (\vec{x},t) =
- \rho^\alpha(\vec{x},t)\, g^{\alpha\overline{\alpha}} \sum\limits_{i=1}^{9}
\rho^{\overline{\alpha}}(\vec{x}+\vec{c}_i,t)\,\vec{c}_i.\label{eq:SCForcePsifun1}
\end{equation}
These interaction forces cause the separation of fluids and a surface
tension $\gamma$. Here we restrict ourselves to two fluids and refer to them as ``red''
and ``blue'' fluids. The width of fluid interfaces and the resulting surface
tension are governed by the interaction strength parameter
$g^{\alpha\overline{\alpha}}$, which is chosen as $0.9$ for all shown
simulations. This results in a surface tension of $\gamma=1.18 \frac{\Delta x
m}{ {\Delta t}^{2}}$.
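A minimal evaluation of \eqref{eq:SCForcePsifun1} itself can be sketched as follows (illustrative Python with periodic boundaries assumed; the sum runs over the nine D2Q9 links exactly as written in the equation):

```python
import numpy as np

# D2Q9 link vectors (assumed standard set).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def shan_chen_force(rho_a, rho_b, g=0.9):
    """Shan-Chen force on component a due to component b.

    Implements F^a(x) = -rho^a(x) g sum_i rho^b(x + c_i) c_i
    on (ny, nx) density fields with periodic boundaries.
    """
    F = np.zeros((2,) + rho_a.shape)
    for ci in c:
        # density of the other component at the neighbor site x + c_i
        rho_shifted = np.roll(rho_b, shift=(-ci[1], -ci[0]), axis=(0, 1))
        F += rho_shifted * ci[:, None, None]
    return -g * rho_a * F
```

For uniform densities the link vectors sum to zero and the force vanishes; near an interface the force points away from the other component, separating the fluids.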
Our choice of parameters implies that the viscous length scale
$\frac{\rho^\alpha{\nu^\alpha}^2}{\gamma}$
is comparable to the lattice unit $\Delta x$. The resulting scale for the
coalescence is thus much larger than the viscous length, ensuring we are in the
inertial regime of coalescence~\cite{EggLisSto99,PauBurNag11,EddWinSno13-2}.
In this setup the droplets sit on a horizontal flat substrate. The horizontal
no slip boundary sites $w$ defining the substrate are modified to include a
pseudo-density~\cite{SchHar11} equal to the average of the surrounding fluid
sites. Interactions as described in \eqref{eq:SCForcePsifun1}, with interaction
parameters $g^{\alpha w}$ and $g^{\overline{\alpha} w}$, cause a contact
angle~\cite{HuaHauTho07,SchHar11}. In \eqref{eq:SCForcePsifun1},
$g^{\alpha w}$ and $g^{\overline{\alpha} w}$ act in place of the interaction
strength parameter and scale the absolute interaction force of the wall on the
fluids; the difference $g^{\alpha w} - g^{\overline{\alpha} w}$ determines the
contact angle~\cite{HuaHauTho07}. The parameters are chosen as $g^{\alpha w} =
- g^{\overline{\alpha} w}$ to minimize absolute forces.
By using full slip boundaries to effectively mirror the system at the symmetry
axis, the computational domain, and therefore the computational cost, is halved.
Computational cost matters here for two reasons. First, since we are
interested in obtaining the meniscus height over time with sufficient
accuracy, we need to simulate the entire system at a large resolution.
Second, because of the finite width of the
interface, the interface and the resulting fluid behavior can be
overrepresented. To avoid this, the interface thickness should be small
compared to the droplet radius~\cite{GroSteRaa13}.
The droplet radius at a contact angle of \SI{90}{\degree} is chosen to be $900
\Delta x$, so that the interface thickness amounts to about \SI{0.5}{\percent} of the
droplet radius. It was found empirically that this drop size is sufficient to
achieve reproducible results for the resulting hydrodynamics.
All fluid volumes and system dimensions are kept constant; only the wall
interaction parameters $g^{\alpha w} = - g^{\overline{\alpha} w}$ and a
horizontal shift length, which brings the droplets into contact, are adjusted
in the subsequent simulations of decreasing contact angles.
\subsection{Initialization of coalescence}
Both in experiment and in simulations, it is challenging to initiate coalescence and
to define the time $t=0$ that marks the start of the coalescence.
Here we provide technical details on how the simulations were performed.
A first problem is that the width of the diffuse interface and the fluid
pressures depend on the fluid interaction parameters
$g^{\alpha\overline{\alpha}}$, so that it is not possible to predict beforehand
the correct densities at all positions. Equilibration during initialization is
therefore required; without it, strong artefacts arise, such as an enclosed
bubble for droplets of a \SI{90}{\degree} contact angle. To equilibrate, we
first relax a horizontally centered single droplet at the wetting wall before
a second drop is introduced.
The introduction of a second drop, and thereby the initiation of coalescence,
is a subtle matter in itself. Here we shift the droplet to a system boundary
with a full slip boundary condition. This effectively mirrors the droplet, as
depicted in the schematic Fig.~\ref{fig:simsketch}, which shows a magnified section of the
system at the meniscus after shifting the droplet. Here, the effectively
mirrored part is shown in slightly opaque and density profiles at different
cross sections of the diffuse interface are sketched in Figs.~\ref{fig:gradl}
and \ref{fig:gradtwo}. The mirrored part of the system shown in
Fig.~\ref{fig:simsketch} does not need to be simulated, which reduces the
simulation time. The density profiles of the two fluids shown in
Fig.~\ref{fig:gradl} exemplify the transition from a majority of one fluid to
the other, for instance along the 1D cross section in
Fig.~\ref{fig:simsketch} marked ``Diffuse Interface''. This schematic
representation of a diffuse interface illustrates that the position of a
corresponding sharp interface is not uniquely defined. Accordingly, the shift
to the full slip boundary can be executed in different ways, as shown in
Fig.~\ref{fig:gradtwo}. There the schematic red density profile stays in
place, while the density profile of the other droplet is shifted. The second
profile is drawn multiple times, transitioning from left to right.
\begin{figure}
\subfigure{
\label{fig:simsketch}
\includegraphics[width=\dimexpr0.95\linewidth+4\subfigcapmargin]{simsketch-figure0}
}\\
\subfigure{
\label{fig:gradl}
\includegraphics[width=.45\linewidth]{gradl-figure0}
}
\subfigure{
\label{fig:gradtwo}
\includegraphics[width=.45\linewidth]{gradtwo-figure0}
}
\caption{(Color online)
\subref{fig:simsketch} Schematic drawing of droplets on a substrate, before
coalescence. The opaque half is mirrored through a free slip boundary. The
density line cross section of Fig.~\subref{fig:gradl} is indicated by the dashed line ``Diffuse
Interface'' and the corresponding cross section of \subref{fig:gradtwo} is
indicated with ``Shifting Interfaces''.
\subref{fig:gradl} Schematic of the density of red and blue fluid in a 1D
cross section across the diffuse interface, as can be found along the dashed line ``Diffuse Interface'' in \subref{fig:simsketch}.
The width of the diffuse interface is about six lattice sites.
\subref{fig:gradtwo} The density cross section at the interface after moving the
droplet to initialize coalescence. Only the density of the droplets is shown.
Different possible options to shift the droplet are depicted by a left-right
transition of the right droplet's density field. The symbols correlate to those
in Fig.~\ref{fig:tNot}.
}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth,height=0.8\linewidth]{tNot-figure0}
\caption{(Color online) The meniscus height in time of differently shifted
droplets with a contact angle of \SI{85}{\degree}. This shift was
implemented to trigger the coalescence after proper initialization of the
diffuse interface. Overlapping droplets coalesce with an initial bridge
height $\ge0$. It can be seen that droplets with a small separation coalesce
as well, but with a delay. Further separation of the droplets results in a
system where the droplets do not coalesce for $\ge10000$ timesteps. It can
be seen that the growth rate after initial coalescence, apart from minor
initialization effects, is identical for different droplet shifts. The inset
displays the same data, manually shifted. It can be seen that, apart from a
small startup phase, these curves coincide. The initial timestep of
coalescence therefore depends strongly on two parameters: the choice of
the density value that defines the interface position, and the initial
shift.\label{fig:tNot}}
\end{figure}
To investigate the effect of the precise location of the second drop, we
study the growth dynamics for different shifts, i.e.\ different initial
positions. As an example, a droplet with a contact angle of \SI{85}{\degree} is
moved by the default distance of $2$ lattice sites. The result is shown in
Fig.~\ref{fig:tNot}, where the bridge height $h_b$ in time $t$ for different
values of the shift is recorded. As expected, overlapping droplets coalesce with an
initial non-zero bridge height. By contrast, small separations between the
drops cause a delay of the coalescence process by multiple timesteps. In this
case diffusion occurs across the small separation of droplets, so that it takes
some time before a detectable bridge has formed between the two drops. Further
separation of the droplets causes the coalescence not to occur for several
thousand timesteps. After $\approx 10^3$ timesteps all curves fall on a
$t^\frac{2}{3}$ power law. In the inset of Fig.~\ref{fig:tNot} it is shown that
a collapse of the data can be achieved by manual shift of the time, which is a
robust way to identify an appropriate definition of $t=0$. This procedure will
be followed for all plots in the remainder of the paper.
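The manual time shift used for the collapse in the inset can be sketched as a one-parameter fit: choose the onset $t_0$ that minimizes the scatter around the $t^{2/3}$ law. The following illustrative Python uses synthetic data; function and variable names are ours, not taken from the simulation code.

```python
import numpy as np

def fit_onset(t, hb, t0_grid):
    """Return the onset t0 minimizing the log-space scatter around t^(2/3)."""
    best_sse, best_t0 = np.inf, None
    for t0 in t0_grid:
        x = t - t0
        mask = x > 0
        # with the exponent fixed, the prefactor log A is the residual mean
        resid = np.log(hb[mask]) - (2/3) * np.log(x[mask])
        sse = np.var(resid) * mask.sum()
        if sse < best_sse:
            best_sse, best_t0 = sse, t0
    return best_t0

t = np.arange(50, 500, dtype=float)
hb = 0.3 * (t - 37.0) ** (2/3)        # synthetic data with onset t0 = 37
t0 = fit_onset(t, hb, np.arange(0, 50))
```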
\section{Results}\label{sec:results}
\begin{figure*}[tb]
\noindent
\begin{tabular}{@{}*{4}{p{\dimexpr .20\textwidth-.8ex}@{\hskip
1ex}}p{\dimexpr .20\textwidth-.8ex}}
t=$1000 \Delta t$&t=$2000 \Delta t$&t=$3000 \Delta t$&t=$4000 \Delta t$&t=$5000 \Delta t$\\
{%
\includegraphics[height=\linewidth,width=\linewidth]{73-1-figure0}%
}
&
{%
\includegraphics[height=\linewidth,width=\linewidth]{73-2-figure0}%
}
&
{%
\includegraphics[height=\linewidth,width=\linewidth]{73-3-figure0}%
}
&
{%
\includegraphics[height=\linewidth,width=\linewidth]{73-4-figure0}%
}
&
{%
\includegraphics[height=\linewidth,width=\linewidth]{73-5-figure0}%
}\\
t=$2000 \Delta t$&t=$4000 \Delta t$&t=$6000 \Delta t$&t=$8000 \Delta t$&t=$10000 \Delta t$\\
{%
\includegraphics[height=\linewidth,width=\linewidth]{90-1-figure0}%
}
&
{%
\includegraphics[height=\linewidth,width=\linewidth]{90-2-figure0}%
}
&
{%
\includegraphics[height=\linewidth,width=\linewidth]{90-3-figure0}%
}
&
{%
\includegraphics[height=\linewidth,width=\linewidth]{90-4-figure0}%
}
&
{%
\includegraphics[height=\linewidth,width=\linewidth]{90-5-figure0}%
}\\
\end{tabular}
\caption{Time series of coalescence, zoomed into the meniscus shape. Upper row:
Droplets with a contact angle of \SI{73}{\degree} at every 1000 timesteps. Lower
row: Droplets with a contact angle of \SI{90}{\degree} at every 2000 timesteps.
The number of timesteps is displayed above the
snapshots.\label{fig:timeseries}}
\end{figure*}
In Fig.~\ref{fig:timeseries} we show snapshots of the coalescence process at the
exemplary contact angles of \SI{73}{\degree} and \SI{90}{\degree} (as is the
case in the experiments of~\cite{EddWinSno13-2}). To represent the interface we
use a bilinear interpolation of a threshold density, which allows us to obtain
smooth data. The time series shows that a thin bridge appears between the two
droplets, which grows both in height and width as time evolves. To quantify this
evolution, we track the bridge height $h_b(t)$ for a broad range of contact
angles. This is shown in Fig.~\ref{fig:scalingmatch}, where we show $h_b$
(scaled with the drop radius $r_0$) as a function of time (scaled with the
inertio-capillarity time $\sqrt{\rho r_0^3/\gamma}$). The closed symbols
represent simulations for various contact angles. For contact angles below
\SI{90}{\degree}, the initial dynamics is consistent with a $t^\frac{2}{3}$
power law until the bridge $h_b$ becomes comparable to the drop size $r_0$. This
is perfectly in line with experiments. When the contact angle is
\SI{90}{\degree}, however, the slope of the data is smaller and suggests a
smaller exponent, approaching the experimentally observed $t^\frac{1}{2}$
scaling. For a more detailed comparison, we include in
Fig.~\ref{fig:scalingmatch} the data from~\cite{EddWinSno13-2}, which
corresponds to experiments of water drops coalescing in air. The experimental
data shown here have been shifted upwards by a factor of $2$ for better
visualization.
However, even without this shift the experimental data
lies about a factor $2$ above the numerical data.
This quantitative difference
can possibly be attributed to the fact that the simulations consider droplets
that are immersed into an outer fluid of equal density. The transport inside the
outer fluid does slow down the dynamics with respect to the case of drops in
air, which is consistent with the observations in Fig.~\ref{fig:scalingmatch}.
Therefore a quantitative match is not to be expected, but
in terms of scaling laws the simulations agree with the experiments.
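For reference, the inertio-capillary time used for the rescaling can be evaluated directly from the quoted lattice parameters (Python sketch; the density value is an assumed placeholder of order one in lattice units, as it is not quoted above):

```python
import math

def inertio_capillary_time(rho, r0, gamma):
    """Inertio-capillary time sqrt(rho * r0^3 / gamma)."""
    return math.sqrt(rho * r0**3 / gamma)

# r0 = 900 lattice sites and gamma = 1.18 as quoted in the text;
# rho = 1.0 is an assumed placeholder density in lattice units.
tau_ic = inertio_capillary_time(rho=1.0, r0=900.0, gamma=1.18)
```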
The wettability of the substrate sets the contact angle and thereby
the capillary pressure of the droplets.
Since this capillary pressure drives the coalescence,
the contact angle determines the rate at which the meniscus grows.
The corresponding scaling laws, as well as the behavior of the fluid interface, are discussed below.
\begin{figure}
\includegraphics[width=\linewidth]{scalingmatch-figure0}
\caption{(Color online) The bridge height as a function of time for different
contact angles~$\theta$. The bridge height is scaled with the drop size,
while time is rescaled by the inertio-capillary time. Closed symbols
correspond to simulation results. Open symbols are experimental data for
water drops in air, taken from \cite{EddWinSno13-2}.
\new{For visual clarity, all experimental data have been multiplied by a factor of $2$ to avoid overlap.}
\label{fig:scalingmatch}}
\end{figure}
\subsection{Results for $\theta < \SI{90}{\degree}$}
To make further use of the simulation results, let us briefly revisit the usual
scaling arguments for coalescence. The situation is best understood for contact
angles $\theta < \SI{90}{\degree}$, for which the horizontal scale and vertical
scale are simply proportional to one another: the ratio of the two lengths is
set by the tangent of the contact angle. This can for example be seen from
Fig.~\ref{fig:timeseries}, showing that the width of the meniscus increases as
well as $h_b$ during the growth. Since $h_b$ sets the characteristic scale of
the bridge, the capillary pressure can be estimated as
\begin{equation}
P_{\text{cap}}\propto\frac{\gamma}{h_b}.
\end{equation}
Similarly, the inertial pressure is obtained as
\begin{equation}
P_{\text{iner}}\propto\rho\left(\frac{h_b}{t}\right)^2,
\end{equation}
which then leads to the observed
\begin{equation}
h_b\propto t^\frac{2}{3}.
\end{equation}
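Writing out the balance $P_{\text{cap}} \sim P_{\text{iner}}$ behind this result:
\begin{equation}
\frac{\gamma}{h_b}\propto\rho\left(\frac{h_b}{t}\right)^2
\quad\Rightarrow\quad
h_b^3\propto\frac{\gamma t^2}{\rho}
\quad\Rightarrow\quad
h_b\propto\left(\frac{\gamma}{\rho}\right)^{1/3} t^{2/3}.
\end{equation}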
To further test the idea that the dynamics is governed by the growing length
scale $h_b(t)$, one can attempt a collapse of the bridge profiles during the
growth process. This is shown for the case $\theta=\SI{73}{\degree}$ in
Fig.~\ref{fig:SelfSim73}, where we overlay the meniscus shapes for different
times, after rescaling the horizontal and vertical scales with $h_b(t)$. The
scaled profiles indeed exhibit an excellent collapse. This confirms that the
bridge growth is characterized by a universal spatial profile, and that the
temporal dependence can be effectively absorbed in the growing length scale
$h_b(t)$. The self-similarity only applies during the initial stages of
coalescence, so data are shown only until the bridge height reaches about one
third of the initial drop height.
This restricts the data to the part of the droplet deformed by the coalescence,
where the scaling law is applicable.
Small deviations far from the meniscus can be attributed to this restriction.
Figure~\ref{fig:SelfSim73} also shows the
corresponding experimental plot from~\cite{EddWinSno13-2}, for which
self-similarity was convincingly demonstrated as well. The numerical bridge
shapes (for immersed drops) differ slightly from the experimental profiles, but
the same principle of self-similarity is valid during the initial stages of
coalescence.
\begin{figure}
\includegraphics[width=\linewidth]{SelfSim73-figure0}
\caption{(Color online) \label{fig:SelfSim73}Rescaled bridge shapes for droplets
with a contact angle of \SI{73}{\degree} reveal a self-similar bridge growth.
Closed symbols correspond to simulation results,
open symbols are experimental data for water drops in air \cite{EddWinSno13-2}.}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{vel73-figure0}
\caption{(Color online) Streamlines of the velocities in coalescing droplets
with a contact angle of \SI{73}{\degree}, to the right of the symmetry axis.
The positions of the velocities are rescaled like the interface positions.
The amplitude of the velocities is scaled as $v_x h_b^{\frac{1}{2}}$ and
$v_y h_b^{\frac{1}{2}}$. The amplitude of the field is shown by varying
the width of the streamlines linearly with the amplitude of the underlying vectors.
\label{fig:vel73}}
\end{figure}
Interestingly, the simulations allow one to extract information that is not
easily accessible through experiments, such as the fluid velocities $v_x$ and
$v_y$. Inspired by the idea of self-similarity, we rescale the streamline
patterns by again normalizing $x$ and $y$ by $h_b(t)$. The result is shown in
Fig.~\ref{fig:vel73}, where the streamlines are obtained after scaling the
velocities with $h_b^{\frac{1}{2}}$ (different times are visualized using
different colors). The streamline patterns exhibit an excellent collapse. It can
be seen that the coalescence causes a recirculating flow, with a vortex located
to the right of the symmetry axis, driving the meniscus upwards. The necessary
rescaling of the velocity vectors can be understood by considering $u \propto
h_b/t \propto t^{-1/3}$. This implies that $u h_b^{1/2} = {\rm const.}$, as was
indeed used in preparing Fig.~\ref{fig:vel73}.
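Explicitly, combining $h_b \propto t^{2/3}$ with $u \propto h_b/t$ gives
\begin{equation}
u \propto \frac{h_b}{t} \propto t^{-1/3},
\qquad
h_b^{1/2} \propto t^{1/3},
\qquad\text{so}\qquad
u\,h_b^{1/2} \propto t^{0}.
\end{equation}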
\subsection{Results for $\theta = \SI{90}{\degree}$}
Let us now turn to the case of droplets with $\theta= \SI{90}{\degree}$, for
which the interfaces are tangent when brought into contact. As a consequence of
this geometry, the horizontal and vertical scales are no longer the same. We
therefore introduce the width of the bridge $w$, indicated in
Fig.~\ref{fig:Experiment}, as the horizontal scale that is much smaller than
$h_b$. The geometry is such that
\begin{equation}
w\propto \frac{h_b^2}{r_0},
\end{equation}
and subsequently, the scaling laws need to account for this disparity of
horizontal and vertical scales. The usual argument is that the capillary
pressure reads
\begin{equation}\label{eq:pcap}
P_{\text{cap}}\propto\frac{\gamma}{w} \propto\frac{\gamma r_0}{h_b^2},
\end{equation}
which can be balanced with the inertial pressure
\begin{equation}\label{eq:piner}
P_{\text{iner}}\propto\rho\left(\frac{h_b}{t}\right)^2.
\end{equation}
This leads to
\begin{equation}
h_b\propto t^\frac{1}{2},
\end{equation}
and explains why the meniscus growth differs from the 2/3 law observed for
smaller contact angles (cf.\ Fig.~\ref{fig:scalingmatch}).
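For completeness, both ingredients of this argument can be written out. The width relation follows from the circular drop shape, whose horizontal retreat at height $h_b \ll r_0$ is
\begin{equation}
w \propto r_0-\sqrt{r_0^2-h_b^2}\approx\frac{h_b^2}{2r_0},
\end{equation}
while the balance of Eqs.~(\ref{eq:pcap}) and (\ref{eq:piner}) yields
\begin{equation}
\frac{\gamma r_0}{h_b^2}\propto\rho\left(\frac{h_b}{t}\right)^2
\quad\Rightarrow\quad
h_b^4\propto\frac{\gamma r_0 t^2}{\rho}
\quad\Rightarrow\quad
h_b\propto\left(\frac{\gamma r_0}{\rho}\right)^{1/4} t^{1/2}.
\end{equation}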
Once more, we will test these scaling ideas by searching for self-similar
dynamics, both for the bridge shape and for the velocity profiles. The first of
these tests is provided in Fig.~\ref{fig:SelfSim90}, where the bridge profiles
for $\theta= \SI{90}{\degree}$ are rescaled with $w \sim h_b^2/r_0$ on the
horizontal axis and with $h_b$ in the vertical direction. A collapse is indeed
observed, confirming the necessity of taking different horizontal and vertical
scales. In addition the numerical profiles exhibit a perfect agreement with the
experimental results for the bridge shape~\cite{EddWinSno13-2}.
Intriguingly,
however, we have not been able to obtain a convincing self-similarity for the
velocity fields for $\theta= \SI{90}{\degree}$. Following the logic above, one
would expect for the horizontal velocity $v_x \propto w/t \propto h_b^2/(r_0 t)
\propto t^0$, while for the vertical velocity $v_y \propto h_b/t \propto
t^{-1/2}$. However, the best ``collapse'' was obtained by empirically scaling the
velocities respectively as $v_x h_b^{\frac{1}{3}}$ and $v_y h_b^{\frac{1}{2}}$,
and the result is shown in Fig.~\ref{fig:vel90}. One again observes a
recirculating flow that leads to the bridge growth, but the associated vortex
structure is not perfectly self-similar. In particular, we note that the vortex
appears to become smaller in time, after rescaling, suggesting that $h_b$ and
$w$ are not the correct scales for the velocity field. The velocities of
strongest amplitude lie underneath the meniscus and are nearly purely vertical.
Mass conservation in this case is achieved by enlarging the respective area in
time. This means that the usual scaling arguments of Eqs.~(\ref{eq:pcap})
and (\ref{eq:piner}) might actually be too simplistic. For example, the
self-similar scaling of the meniscus profiles implies that the typical curvature
scales as $h_b/w^2 \sim r_0^2/h_b^3$, and not as $1/w \sim r_0/h_b^2$, as was
assumed in (\ref{eq:pcap}). This would change the coalescence exponent from 1/2
to 2/5, which does not concur with simulations and previous experiments.
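Repeating the pressure balance with this sharper curvature indeed gives
\begin{equation}
\frac{\gamma r_0^2}{h_b^3}\propto\rho\left(\frac{h_b}{t}\right)^2
\quad\Rightarrow\quad
h_b^5\propto\frac{\gamma r_0^2 t^2}{\rho}
\quad\Rightarrow\quad
h_b\propto t^{2/5}.
\end{equation}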
This observation on the velocity field suggests that the inertial-pressure scaling (\ref{eq:piner}) also needs to be revisited:
the flow field is more intricate than the simple scaling argument suggests.
This would be an interesting topic for future work, for which a larger range of numerical data would be required to conclusively infer the relevant scaling laws.
Droplets with a contact angle of \SI{90}{\degree} differ from freely suspended coalescing droplets only by a small amount of surface friction.
Therefore the scaling argument for freely suspended droplets might need to be revisited as well.
\begin{figure}
\includegraphics[width=\linewidth]{SelfSim90-figure0}
\caption{(Color online) Rescaled bridge shapes for droplets
with a contact angle of \SI{90}{\degree} reveal a self-similar bridge growth.
Note that the horizontal and vertical axis are scaled differently.
Closed symbols correspond to simulation results,
open symbols are experimental data for water drops in air \cite{EddWinSno13-2}.}
\label{fig:SelfSim90}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{vel90-figure0}
\caption{(Color online) Streamlines of the velocities in coalescing droplets
with a contact angle of \SI{90}{\degree}, to the right of the symmetry axis.
The positions of the velocities are rescaled like the interface positions.
The amplitude of the velocities is scaled with the empirical values of $v_x
h_b^{\frac{1}{3}}$ and $v_y h_b^{\frac{1}{2}}$. The amplitude of the field
is shown by varying the width of the streamlines linearly with the amplitude
of the underlying vectors. \label{fig:vel90}}
\end{figure}
\section{Conclusion}\label{sec:discussion}
We simulated the coalescence of submerged droplets with different
contact angles and compared our results to experimental data. In both cases we
found similar growth rates of the bridge height in time, with the same
dependence on the contact angle. Despite quantitative differences in the
interface position between the experimental data for droplets in air and the
simulations of submerged droplets, the same rescaling argument revealed a
self-similarity in time.
That a single scaling argument collapses the interface positions of both the
experimental droplets in air and the submerged simulated droplets demonstrates its universality.
We applied this scaling law to the velocity field and, for droplets with a \SI{73}{\degree} contact angle, revealed the underlying velocities that drive the coalescence, together with an explanation for the scaling of their amplitude.
For droplets with a \SI{90}{\degree} contact angle, we showed that the velocities driving the coalescence are more intricate than the scaling laws indicate.
Thus, although these scaling laws appear to work for the interface position, the underlying estimate of the relevant velocity scaling is inconsistent with the internal flow structure. Our simulations clearly show that the droplet internals are more complex than usually assumed.
\new{Our findings have implications for the design of devices in open microfluidics where different fluids are transported on chemically patterned substrates. Understanding the formation, transport, and coalescence of droplets in particular is essential for optimizing these devices and ensuring reliable long-term functionality.}
\section*{Acknowledgment}
This research was carried out under project number M61.2.12454b in the framework
of the Research Program of the Materials innovation institute M2i (www.m2i.nl).
We highly acknowledge Oc\'e-Technologies B.V.\ for financial support and the
J\"ulich Supercomputing Centre for the required computing time.
\section{Introduction}
After receiving paper reviews, authors may optionally submit a rebuttal to address the reviewers' comments, which will be limited to a {\bf one page} PDF file. Please follow the steps and style guidelines outlined below for submitting your author response.
Note that the author rebuttal is optional and, following similar guidelines to previous CVPR conferences, it is meant to provide you with an opportunity to rebut factual errors or to supply additional information requested by the reviewers. It is NOT intended to add new contributions (theorems, algorithms, experiments) that were not included in the original submission. You may optionally add a figure, graph or proof to your rebuttal to better illustrate your answer to the reviewers' comments.
Per a passed 2018 PAMI-TC motion, reviewers should not request additional experiments for the rebuttal, or penalize authors for lack of additional experiments. This includes any experiments that involve running code, e.g., to create tables or figures with new results. \textbf{Authors should not include new experimental results in the rebuttal}, and reviewers should discount any such results when making their final recommendation. Authors may include figures with illustrations or comparison tables of results reported in the submission/supplemental material or in other papers.
The rebuttal must adhere to the same blind-submission policy as the original submission and must comply with this rebuttal-formatted template.
\subsection{Response length}
Author responses must be no longer than 1 page in length including any references and figures. Overlength responses will simply not be reviewed. This includes responses where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Note that this \LaTeX\ guide already sets figure captions and references in a smaller font.
\section{Formatting your Response}
{\bf Make sure to update the paper title and paper ID in the appropriate place in the tex file.}
All text must be in a two-column format. The total allowable width of the text
area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54 cm) high.
Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch
(0.8 cm) space between them. The top margin should begin
1.0 inch (2.54 cm) from the top edge of the page. The bottom margin should be
1-1/8 inches (2.86 cm) from the bottom edge of the page for $8.5 \times
11$-inch paper; for A4 paper, approximately 1-5/8 inches (4.13 cm) from the
bottom edge of the page.
Please number all of your sections and any displayed equations. It is important
for readers to be able to refer to any particular equation.
Wherever Times is specified, Times Roman may also be used. Main text should be
in 10-point Times, single-spaced. Section headings should be in 10 or 12 point
Times. All paragraphs should be indented 1 pica (approx. 1/6 inch or 0.422
cm). Figure and table captions should be 9-point Roman type as in
Figure~\ref{fig:onecol}.
List and number all bibliographical references in 9-point Times, single-spaced,
at the end of your response. When referenced in the text, enclose the citation
number in square brackets, for example~\cite{Authors14}. Where appropriate,
include the name(s) of editors of referenced books.
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{1in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to make is resolvable in a printed copy of the response. Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print. Many readers (and reviewers), even of an electronic copy, will choose to print your response in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\section{Conclusion \label{sec:conc}}
This paper explored the possibility of adding ultrasound audio as a new sensing modality for metric-scale 3D human pose estimation. By estimating an audio pose kernel and encoding it in physical space, we introduced a neural network pipeline that can accurately predict metric-scale 3D human pose. We tested our algorithm in two unseen environments, and the promising results shed light on potential applications in smart homes, AR/VR, human-robot interaction, and beyond. We also proposed a new dataset, PoseKernel, calling the attention of future researchers to this interesting audio-visual 3D pose estimation problem.
\section{Introduction}
Please follow the steps outlined below when submitting your manuscript to
the IEEE Computer Society Press. This style guide now has several
important modifications (for example, you are no longer warned against the
use of sticky tape to attach your artwork to the paper), so all authors
should read this new version.
\subsection{Language}
All manuscripts must be in English.
\subsection{Dual submission}
Please refer to the author guidelines on the 2022~web page for a
discussion of the policy on dual submissions.
\subsection{Paper length}
Papers, excluding the references section,
must be no longer than eight pages in length. The references section
will not be included in the page count, and there is no limit on the
length of the references section. For example, a paper of eight pages
with two pages of references would have a total length of 10 pages.
{\bf There will be no extra page charges for 2022.}
Overlength papers will simply not be reviewed. This includes papers
where the margins and formatting are deemed to have been significantly
altered from those laid down by this style guide. Note that this
\LaTeX\ guide already sets figure captions and references in a smaller font.
The reason such papers will not be reviewed is that there is no provision for
supervised revisions of manuscripts. The reviewing process cannot determine
the suitability of the paper for presentation in eight pages if it is
reviewed in eleven.
\subsection{The ruler}
The \LaTeX\ style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document using a non-\LaTeX\
document preparation system, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler.
(\LaTeX\ users may use options of cvpr.cls to switch between different
versions.)
Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. Just use fractional
references (e.g.\ this line is $095.5$), although in most cases one would
expect that the approximate location will be adequate.
\subsection{Mathematics}
Please number all of your sections and displayed equations. It is
important for readers to be able to refer to any particular equation. Just
because you didn't refer to it in the text doesn't mean some future reader
might not need to refer to it. It is cumbersome to have to use
circumlocutions like ``the equation second from the top of page 3 column
1''. (Note that the ruler will not be present in the final copy, so is not
an alternative to equation numbers). All authors will benefit from reading
Mermin's description of how to write mathematics:
\url{http://www.pamitc.org/documents/mermin.pdf}.
\subsection{Blind review}
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work---in fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
techreports.)
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith; it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an acceptable paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith \etal [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same time,
which covers similar or overlapping material, you may need to refer to that
submission in order to explain the differences, just as you would if you
had previously published related work. In such cases, include the
anonymized parallel submission~\cite{Authors14} as additional material and
cite it as
\begin{quote}
[1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324,
Supplied as additional material {\tt fg324.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more details can be
found elsewhere, and refer them to a technical report. For conference
submissions, the paper must stand on its own, and not {\em require} the
reviewer to go to a techreport for further details. Thus, you may say in
the body of the paper ``further details may be found
in~\cite{Authors14b}''. Then submit the techreport as additional material.
Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the CVPR70 audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus \etal.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus \etal, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B.
\medskip
\noindent
FAQ\medskip\\
{\bf Q:} Are acknowledgements OK?\\
{\bf A:} No. Leave them for the final copy.\medskip\\
{\bf Q:} How do I cite my results reported in open challenges?\\
{\bf A:} To conform with the double blind review policy, you can report results of other challenge participants together with your results in your paper. For your results, however, you should not identify yourself and should not mention your participation in the challenge. Instead present your results referring to the method proposed in your paper and draw conclusions based on the experimental comparison to other results.\medskip\\
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Miscellaneous}
\noindent
Compare the following:\\
\begin{tabular}{ll}
\verb'$conf_a$' & $conf_a$ \\
\verb'$\mathit{conf}_a$' & $\mathit{conf}_a$
\end{tabular}\\
See The \TeX book, p165.
The space after \eg, meaning ``for example'', should not be a
sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided
\verb'\eg' macro takes care of this.
When citing a multi-author paper, you may save space by using ``et alia'',
shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.)
However, use it only when there are three or more authors. Thus, the
following is correct: ``
Frobnication has been trendy lately.
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by
Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.''
This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...''
because reference~\cite{Alpher03} has just two authors. If you use the
\verb'\etal' macro provided, then you need not worry about double periods
when used at the end of a sentence as in Alpher \etal.
For this citation style, keep multiple citations in numerical (not
chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to
\cite{Alpher02,Alpher03,Authors14}.
\begin{figure*}
\begin{center}
\fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}}
\end{center}
\caption{Example of a short caption, which should be centered.}
\label{fig:short}
\end{figure*}
\section{Formatting your paper}
All text must be in a two-column format. The total allowable width of the
text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54
cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a
$\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the
first page) should begin 1.0 inch (2.54 cm) from the top edge of the
page. The second and following pages should begin 1.0 inch (2.54 cm) from
the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86
cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4
paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the
page.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts, must be kept
within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm)
high.
Page numbers should appear in the footer, centered, 0.75 inches from the
bottom of the page, and should start at your assigned page number rather than
the 4321 in the example. To do this, find the line (around line 20)
\begin{verbatim}
\setcounter{page}{4321}
\end{verbatim}
where the number 4321 is your assigned starting page.
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If neither is
available on your word processor, please use the font closest in
appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of
the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and
adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title
and printed in Times 12-point, non-boldface type. This information is to
be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use
double-spacing. All paragraphs should be indented 1 pica (approx. 1/6
inch or 0.422 cm). Make sure your text is fully justified---that is,
flush left and flush right. Please do not place any additional blank
lines between paragraphs.
Figure and table captions should be 9-point Roman type as in
Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centred.
\noindent Callouts should be 9-point Helvetica, non-boldface type.
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors14}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Please refer to the author guidelines on the 2022~web page for a discussion
of the use of color in your document.
\section{Final copy}
You must include your signed IEEE copyright release form when you submit
your finished paper. We MUST have this form before your paper can be
published in the proceedings.
Please direct any questions to the production editor in charge of these
proceedings at the IEEE Computer Society Press:
\url{https://www.computer.org/about/contact}.
\section{PoseKernel Dataset}
We collect a new dataset called the \textit{PoseKernel} dataset. It comprises more than 10,000 frames of synchronized video and audio from six locations, including a living room, an office, a conference room, and a laboratory. For each location, more than six participants were asked to perform as shown in Figure~\ref{fig:environments}. (We plan to release the dataset upon publication.)
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{environments.PNG}
\caption{We collect our PoseKernel dataset in different environments with at least six participants per location, totalling more than 10,000 poses. }
\label{fig:environments}
\end{figure}
The cameras, speakers, and microphones are spatially calibrated using off-the-shelf structure-from-motion software such as COLMAP~\cite{schoenberger2016sfm}: we scan the environments with an additional camera and use the metric depth from the RGB-D cameras to estimate the true scale of the 3D reconstruction. We manually synchronize the videos and speakers with a distinctive audio signal, e.g., clapping, while the speakers and microphones are hardware-synchronized by a field recorder (e.g., a Zoom F8n) at a sample rate of 96 kHz.
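The clap-based alignment can be sketched as a cross-correlation of the two recordings; the function below is an illustrative assumption about the procedure, not the exact tooling used for the dataset.

```python
import numpy as np

def sync_offset(ref: np.ndarray, other: np.ndarray, rate: int) -> float:
    """Delay (in seconds) of `other` relative to `ref`, estimated from
    the cross-correlation peak of a shared transient such as a clap."""
    xcorr = np.correlate(other, ref, mode="full")
    # NumPy's 'full' mode indexes lags from -(len(ref)-1) upward.
    lag = int(np.argmax(xcorr)) - (len(ref) - 1)
    return lag / rate
```

With real recordings one would band-pass around the clap first; for an impulsive transient the raw correlation peak already gives the sample-accurate offset.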
For each scene, video data are captured by two RGB-D Azure Kinect cameras. These calibrated RGB-D cameras are used to estimate the ground truth 3D body pose using state-of-the-art pose estimation methods such as FrankMocap~\cite{rong2021frankmocap}. Multiple RGB-D videos are only used to generate the ground truth pose for training. In the testing phase, only a single RGB video is used.
Four speakers and four microphones are used to generate and record the audio signals. Each speaker emits a chirp sweeping from 19 kHz to 32 kHz. We use this frequency band because it is detectable by consumer-grade microphones while inaudible to humans, so it does not interfere with human-generated audio. To transmit from four speakers simultaneously, we use frequency-division multiplexing within this band. Each chirp lasts 100 ms, resulting in 10 FPS reconstruction. At the beginning of every capture session, we record the empty-room impulse response for each microphone in the absence of humans.
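The chirp generation can be sketched as follows. The sample rate, chirp duration, band, and speaker count come from the text above; the equal split of the band into disjoint sub-bands is our assumption about how the frequency-division multiplexing is realized.

```python
import numpy as np

FS = 96_000                  # field-recorder sample rate (Hz)
DUR = 0.100                  # chirp duration (s) -> 10 Hz update rate
BAND = (19_000.0, 32_000.0)  # near-ultrasonic band from the text
N_SPEAKERS = 4

def make_chirps(fs=FS, dur=DUR, band=BAND, n=N_SPEAKERS):
    """One linear chirp per speaker, each sweeping a disjoint
    sub-band of `band` (frequency-division multiplexing)."""
    t = np.arange(int(fs * dur)) / fs
    edges = np.linspace(band[0], band[1], n + 1)
    chirps = []
    for f0, f1 in zip(edges[:-1], edges[1:]):
        # linear chirp: instantaneous frequency sweeps f0 -> f1 over dur
        phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / dur * t**2)
        chirps.append(np.sin(phase))
    return np.stack(chirps)
```

Because the sub-bands do not overlap, each microphone can isolate every speaker's contribution with a band-pass filter before deconvolution.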
We ask participants to perform a wide range of daily activities, e.g., sitting, standing, walking, and drinking, as well as range-of-motion exercises, in the environments. To evaluate generalization across heights, our test data include three minors (heights between 140 cm and 150 cm), with consent from their guardians. All personally identifiable information, including faces, is removed from the dataset.
\section{Introduction}
Since the projection of the 3D world onto an image loses scale information, 3D reconstruction of a human's pose from a single image is an ill-posed problem.
To address this limitation, human pose priors have been used in existing lifting approaches~\cite{tome,habibie,chang,pavlakos,yushuke,llopart,li21} to reconstruct the plausible 3D pose given the 2D detected pose by predicting relative depths. The resulting reconstruction, nonetheless, still lacks \textit{metric scale}, i.e., the metric scale cannot be recovered without making an additional assumption such as known height or ground plane contact.
This fundamental limitation of 3D pose lifting precludes applying it to real-world downstream tasks, e.g., smart-home facilitation, robotics, and augmented reality, where precise metric measurements of human activities relative to the surrounding physical objects are critical.
In this paper, we study the problem of metric human pose reconstruction from a single view image by incorporating a new sensing modality---audio signals from consumer-grade speakers (Figure~\ref{fig:teaser}). Our insight is that while traversing a 3D environment, the transmitted audio signals undergo a characteristic transformation induced by the geometry of reflective physical objects, including the human body. This transformation is subtle yet highly indicative of the body pose geometry, which can be used to reason about metric scale. For instance, the same music playing in a room sounds different depending on the presence or absence of a person, and more importantly, on how the person moves.
We parametrize this transformation of audio signals using a time-invariant
transfer function called \textit{pose kernel}---an impulse response of audio induced by a body pose, i.e., the received audio signal is a temporal convolution of the transmitted signal with the pose kernel. Three key properties of the pose kernel enable metric 3D pose lifting in a generalizable fashion: (1)~metric property: its impulse response is equivalent to the arrival time of the reflected audio, and therefore it provides the metric distance from the receiver (microphone); (2)~uniqueness: the envelope of the pose kernel is strongly correlated with the location and pose of the target person; (3)~invariance: it is invariant to the geometry of the surrounding environment, which allows us to generalize to unseen environments.
While highly indicative of the pose and location of the person in 3D, the pose kernel is a time-domain signal, so integrating it with the spatial-domain 2D pose detection is non-trivial. Further, generalization to new scenes requires precise 3D reasoning, to which existing audio-visual learning tasks such as image-domain source separation and image representation learning~\cite{tian2021cyclic,ephrat:2018,gao:2019,owens2018audio} are not applicable.
We address this challenge in 3D reasoning of visual and audio signals by learning to fuse the pose kernels from multiple microphones with the 2D pose detected from an image, using a 3D convolutional neural network (3D CNN): (1) we project each point in 3D onto the image to encode the likelihood of landmarks (visual features); and (2) we spatially encode the time-domain pose kernel in 3D to form audio features. Inspired by the convolutional pose machine architecture~\cite{wei2016cpm}, a multi-stage 3D CNN is designed to predict the 3D heatmaps of the joints given the visual and audio features. This multi-stage design increases the effective receptive field with a small convolutional kernel (e.g., $3\times 3\times 3$) while addressing the issue of vanishing gradients.
In addition, we present a new dataset called \textit{PoseKernel} dataset. The dataset includes more than
10,000 poses from six locations with more than six participants per location,
performing diverse daily activities including sitting, drinking, walking, and jumping.
We use this dataset to evaluate the performance of our metric lifting method and show that it significantly outperforms state-of-the-art lifting approaches, including mesh regression (e.g., FrankMocap~\cite{rong2021frankmocap}) and joint depth regression (e.g., Tome et al.~\cite{tome}). Due to the scale ambiguity of state-of-the-art approaches, their accuracy depends on the heights of the target persons. In contrast, our approach reliably recovers 3D poses regardless of height, making it applicable not only to adults but also to minors.
\noindent\textbf{Why Metric Scale?} Smart home technology is poised to enter our daily activities, in particular for monitoring fragile populations including children, patients, and the elderly. This requires not only 3D pose reconstruction but also holistic 3D understanding in the context of metric scenes, which allows AI and autonomous agents to respond in a situation-aware manner. While multiview cameras can provide metric reconstruction, the number of required cameras grows quadratically with the area to be covered. Our multi-modal solution mitigates this challenge by leveraging multi-source audio (often inaudible) generated by consumer-grade speakers (e.g., Alexa).
\noindent\textbf{Contributions} This paper makes a major conceptual contribution that sheds new light on single-view pose estimation by incorporating audio signals. The technical contributions include (1) a new formulation of the pose kernel as a function of the body pose and location, which generalizes to new scene geometries, (2) a spatial encoding of the pose kernel that facilitates fusing visual and audio features, (3) a multi-stage 3D CNN architecture that effectively fuses them, and (4) strong performance of our method, which outperforms state-of-the-art lifting approaches by a meaningful margin.
\section{Summary and Discussion} \label{sec:discussion}
This paper presented a new method to reconstruct 3D human body pose at metric scale from a single image by leveraging audio signals. We hypothesized that audio signals traversing a 3D space are transformed by the human body pose through reflection, which allows us to recover the 3D pose at metric scale. To validate this hypothesis, we used a human impulse response called the pose kernel that can be spatially encoded in 3D. With this spatial encoding, we learned a 3D convolutional neural network that fuses the 2D pose detection from an image with the pose kernels to reconstruct the 3D metric-scale pose. We showed that our method is highly generalizable: it is agnostic to the room geometry, the spatial arrangement of camera and speakers/microphones, and the audio source signals.
The main assumption of the pose kernel is that the room is large enough to minimize its shadow effect: in theory, parts of the room impulse response can be canceled by the pose because the human body can occlude reflection paths behind the person. This shadow effect is a function of room geometry, and therefore depends on the spatial arrangement of camera and speakers. In practice, we use a room or open space larger than 5 m$\times$5 m, where the impact of the shadow is negligible.
\section{Method}
We make use of audio signals as a new modality for metric human pose estimation.
We learn a pose kernel that transforms audio signals, which can be encoded in 3D in conjunction with visual pose prediction as shown in Figure~\ref{fig:pose_kernel}.
\subsection{Pose Kernel Lifting}
We cast the problem of 3D pose lifting as learning a function $g_{\boldsymbol{\theta}}$ that predicts a set of 3D heatmaps $\{\mathbf{P}_i\}_{i=1}^N$ given an input image $\mathbf{I} \in [0,1]^{W\times H \times 3}$ where
$\mathbf{P}_i: \mathds{R}^3\rightarrow [0,1]$ is the likelihood of the $i^{\rm th}$ landmark over a 3D space, $W$ and $H$ are the width and height of the image, respectively, and $N$ is the number of landmarks. In other words,
\begin{align}
\{\mathbf{P}_i\}_{i=1}^N = g_{\boldsymbol{\theta}} (\mathbf{I}), \label{Eq:pose}
\end{align}
where $g_{\boldsymbol{\theta}}$ is a learnable function parametrized by its weights $\boldsymbol{\theta}$ that lifts a 2D image to the 3D pose. Given the predicted 3D heatmaps, the optimal 3D pose is given by $\mathbf{X}^*_i = \underset{\mathbf{X}}{\operatorname{argmax}}~\mathbf{P}_i(\mathbf{X})$, i.e., $\mathbf{X}^*_i$ is the optimal location of the $i^{\rm th}$ landmark. In practice, we use a regular voxel grid to represent $\mathbf{P}$.
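In code, reading off $\mathbf{X}^*_i$ from a voxelized heatmap amounts to a 3D argmax. A minimal numpy sketch (the peak location below is hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical 3D heatmap P_i on a 70 x 70 x 50 voxel grid.
P = np.zeros((70, 70, 50))
P[30, 41, 12] = 0.9                      # likelihood peak at an arbitrary voxel

# X*_i = argmax_X P_i(X): unravel the flat argmax into 3D voxel indices.
X_star = np.unravel_index(np.argmax(P), P.shape)
# X_star == (30, 41, 12)
```

The voxel indices are mapped back to metric coordinates by the grid origin and voxel size.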
We extend Equation~(\ref{Eq:pose}) by leveraging audio signals to reconstruct a metric scale human pose, i.e.,
\begin{align}
\{\mathbf{P}_i\}_{i=1}^N = g_{\boldsymbol{\theta}} (\mathbf{I}, \{k_j(t)\}_{j=1}^M), \label{Eq:time}
\end{align}
where $k_j(t)$ is the \textit{pose kernel} heard from the $j^{\rm th}$ microphone---a time-invariant audio impulse response with respect to the human pose geometry that transforms the transmitted audio signals, as shown in Figure~\ref{fig:pose_kernel}. $M$ denotes the number of received audio signals\footnote{The number of audio sources (speakers) does not need to match the number of received audio signals (microphones).}.
The pose kernel transforms the transmitted waveform as follows:
\begin{align}
r_j(t) = s(t) * (\overline{k}_j(t) + k_j(t)),
\end{align}
where $*$ is the operation of time convolution, $s(t)$ is the transmitted source signal and $r_j(t)$ is the received signal at the location of the $j^{\rm th}$ microphone. $\overline{k}_j(t)$ is the empty room impulse response that accounts for transformation of the source signal due to the static scene geometry, e.g., wall and objects, in the absence of a person. $k_j(t)$ is the pose kernel measured at the $j^{\rm th}$ microphone location that accounts for signal transformation due to human pose.
The pose kernel can be obtained using the inverse Fourier transform, i.e.,
\begin{align}
k_j(t) = \mathcal{F}^{-1} \{K_j(f)\},~~~K_j(f) = \frac{R_j(f)}{S(f)} - \overline{K}_j(f),
\end{align}
where $\mathcal{F}^{-1}$ is the inverse Fourier transform, and $R_j(f)$, $S(f)$, and $\overline{K}_j(f)$ are the frequency responses of $r_j(t)$, $s(t)$, and $\overline{k}_j(t)$, respectively, e.g., $R_j(f) = \mathcal{F}\{r_j(t)\}$.
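This recovery can be sketched numerically. The following is a minimal, idealized simulation (hypothetical noiseless signals, circular convolution, and single-tap room and pose responses), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
s = rng.standard_normal(n)                 # transmitted source signal s(t)

k_room = np.zeros(n); k_room[10] = 0.8     # empty-room response (one tap)
k_pose = np.zeros(n); k_pose[25] = 0.3     # pose kernel: one body reflection

# Received signal r(t) = s(t) * (k_room(t) + k_pose(t))  (circular convolution)
r = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(k_room + k_pose)))

# Recovery: K(f) = R(f)/S(f) - K_room(f), then the inverse Fourier transform.
K = np.fft.fft(r) / np.fft.fft(s) - np.fft.fft(k_room)
k_rec = np.real(np.fft.ifft(K))
# The recovered kernel peaks at sample 25 with amplitude close to 0.3.
```

In practice the division is regularized against near-zero spectral bins of the source signal.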
Since the pose kernel is dominated by direct reflection from the body, it is agnostic to scene geometry%
\footnote{
The residual after subtracting the room response still includes multi-path effects involving the body. However, we observe that such effects are negligible in practice, and the pose kernel is dominated by the direct reflection from the body. Therefore, it is agnostic to scene geometry. See Section~\ref{sec:discussion} for a discussion of the multi-path shadow effect.}. The scene geometry is factored out by the empty room impulse response $\overline{k}_j(t)$, and the source audio $s(t)$ is canceled by the received audio $r_j(t)$, which allows us to generalize the learned $g_{\boldsymbol{\theta}}$ to various scenes.
\subsection{Spatial Encoding of Pose Kernel}
We encode the time-domain pose kernel of the $j^{\rm th}$ microphone, $k_j(t)$, into the 3D spatial domain where audio and visual signals can be fused.
A transmitted audio at the speaker's location $\mathbf{s}_{\rm spk}\in \mathds{R}^3$ is reflected by the body surface at $\mathbf{X}\in \mathds{R}^3$ and arrives at the microphone's location $\mathbf{s}_{\rm mic}\in \mathds{R}^3$. The arrival time is:
\begin{align}
t_{\mathbf{X}} = \frac{\|\mathbf{s}_{\rm spk} - \mathbf{X}\|+\|\mathbf{s}_{\rm mic} - \mathbf{X}\|}{v}, \label{Eq:delay}
\end{align}
where $t_{\mathbf{X}}$ is the arrival time and $v$ is the constant speed of sound (Figure~\ref{Fig:spatial_encoding}).
The pose kernel is a superposition of impulse responses from the reflective points on the body surface, i.e.,
\begin{align}
k_j(t) = \sum_{\mathbf{X}\in \mathcal{X}} A(\mathbf{X}) \delta(t-t_{\mathbf{X}}), \label{Eq:reflector}
\end{align}
where $\delta(t-t_{\mathbf{X}})$ is the Dirac delta function (impulse response) at $t= t_{\mathbf{X}}$. $t_{\mathbf{X}}$ is the arrival time of the audio signal reflected by the point $\mathbf{X}$ on the body surface $\mathcal{X}$. $A(\mathbf{X})$ is the reflection coefficient (gain) at $\mathbf{X}$.
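Equations~(\ref{Eq:delay}) and (\ref{Eq:reflector}) can be sketched numerically as follows; the speaker/microphone locations, reflecting points, gains, and sampling rate are all hypothetical:

```python
import numpy as np

v, fs = 343.0, 48_000                       # speed of sound (m/s), sample rate
s_spk = np.array([0.0, 0.0, 1.0])           # hypothetical speaker location
s_mic = np.array([3.0, 0.0, 1.0])           # hypothetical microphone location

# Hypothetical reflecting points X on the body surface, with gains A(X).
X = np.array([[1.5, 1.0, 1.0],
              [1.5, 1.2, 1.4]])
A = np.array([0.5, 0.3])

# Arrival times t_X = (||s_spk - X|| + ||s_mic - X||) / v
t_X = (np.linalg.norm(X - s_spk, axis=1) +
       np.linalg.norm(X - s_mic, axis=1)) / v

# Discretize the superposition k(t) = sum_X A(X) delta(t - t_X)
k = np.zeros(1024)
for a, t in zip(A, t_X):
    k[int(np.rint(t * fs))] += a
# Two impulses appear, one per reflecting point.
```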
Equations~(\ref{Eq:delay}) and (\ref{Eq:reflector}) imply two important spatial properties of the pose kernel. (i)
Since the locus of points whose sum of distances to the microphone and the speaker is constant forms an ellipsoid, Equation~(\ref{Eq:delay}) implies that the same impulse response can be generated by any point on this ellipsoid.
(ii) Due to the constant speed of sound, the response of the arrival time can be interpreted as that of the spatial distance by evaluating the pose kernel at the corresponding arrival time, $t_{\mathbf{X}}$:
\begin{align}
\mathcal{K}_j(\mathbf{X}) = k_j(t)|_{t = t_{\mathbf{X}}},
\label{Eq:encoding}
\end{align}
where $\mathcal{K}_j(\mathbf{X})$ is the spatial encoding of the pose kernel at $\mathbf{X}\in \mathds{R}^3$.
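A sketch of the spatial encoding in Equation~(\ref{Eq:encoding}) on a coarse voxel grid; the kernel, sensor locations, and grid resolution below are illustrative, not the paper's configuration:

```python
import numpy as np

v, fs = 343.0, 48_000                            # speed of sound, sample rate
s_spk = np.array([0.0, 0.0, 1.0])                # hypothetical speaker
s_mic = np.array([3.0, 0.0, 1.0])                # hypothetical microphone

# A hypothetical time-domain pose kernel with a single reflection peak.
k = np.zeros(2048)
k[600] = 1.0

# Coarse 8x8x8 voxel grid over a 3.5 m x 3.5 m x 2.5 m space.
xs = np.linspace(0, 3.5, 8)
ys = np.linspace(0, 3.5, 8)
zs = np.linspace(0, 2.5, 8)
gx, gy, gz = np.meshgrid(xs, ys, zs, indexing="ij")
G = np.stack([gx, gy, gz], axis=-1)              # (8, 8, 8, 3) coordinates

# K(X) = k(t)|_{t = t_X}: read the kernel at each voxel's arrival time.
t_X = (np.linalg.norm(G - s_spk, axis=-1) +
       np.linalg.norm(G - s_mic, axis=-1)) / v
K = k[np.rint(t_X * fs).astype(int)]             # (8, 8, 8) spatial encoding
```

Voxels whose arrival time matches the kernel peak form the elliptical band discussed below; all other voxels read zero.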
\setlength{\columnsep}{10pt}
\begin{wrapfigure}{r}{0.4\linewidth}
\vspace{-9mm}
\begin{center}
\includegraphics[width=1\linewidth]{geom.pdf}
\end{center}
\vspace{-5mm}
\caption{Pose kernel spatial encoding.}
\label{Fig:spatial_encoding}
\vspace{-5mm}
\end{wrapfigure}
Let us illustrate the spatial encoding of the pose kernel. Consider a point object $\mathbf{X} \in \mathds{R}^2$ that reflects an audio signal from the speaker $\mathbf{s}_{\rm spk}$, which is received by the microphone $\mathbf{s}_{\rm mic}$, as shown in Figure~\ref{Fig:spatial_encoding}. The received audio is delayed by $t_{\mathbf{X}}$, which can be represented as a pose kernel $k(t) = A(\mathbf{X})\delta (t-t_{\mathbf{X}})$. This pose kernel can be spatially encoded as $\mathcal{K}(\mathbf{X})$ because the speed of sound is constant. Note that there exist infinitely many possible locations of $\mathbf{X}$ given the pose kernel, because any point
(e.g., $\widehat{\mathbf{X}}$) on the dotted ellipse has a constant sum of distances from the speaker and microphone.
\begin{figure}[t]
\begin{center}
\subfigure[Empty room response]{\label{fig:empty}\includegraphics[width=0.85\linewidth]{toy1.pdf}}\\\vspace{-3mm}
\subfigure[Object response]{\label{fig:object}\includegraphics[width=0.85\linewidth]{toy2.pdf}}\\\vspace{-3mm}
\subfigure[Rotated object response]{\label{fig:rotate}\includegraphics[width=0.85\linewidth]{toy3.pdf}}\\\vspace{-3mm}
\subfigure[Translated object response]{\label{fig:translate}\includegraphics[width=0.85\linewidth]{toy4.pdf}}
\end{center}
\vspace{-7mm}
\caption{Visualization of the spatial encoding (left column) of the time-domain impulse response (right column) through a sound simulation. Elliptical patterns can be observed in the spatial encoding, with focal points coinciding with the locations of the speaker and microphone. (a) The empty room impulse response. (b) When an object is present, a strong impulse response reflected by the object surface can be observed; we show the full responses that include the pose kernel. (c) Due to the object rotation, the kernel response changes. (d) We observe a delayed pose kernel due to translation. }
\label{fig:encoding_pose_kernel}
\vspace{-5mm}
\end{figure}
Figure \ref{fig:encoding_pose_kernel} illustrates (a) the empty room impulse response and (b--d) the full responses with the pose kernels, varying the location and pose of an object.
The left column shows the pose kernel $k_j(t)$ encoded into the physical space, while the right column shows the actual time-domain signal.
Since the audio signal carries no bearing information, each peak in the pose kernel $k_j(t)$ corresponds to a possible reflector location on the ellipse whose focal points coincide with the locations of the speaker and microphone.
With the spatial encoding of the pose kernel, we reformulate Equation~(\ref{Eq:time}):
\begin{align}
\{\mathbf{P}_i(\mathbf{X})\}_{i=1}^N = g_{\boldsymbol{\theta}} (\phi_v(\mathbf{X};\mathbf{I}),~~ \underset{j}{\operatorname{max}}~ \phi_a(\mathcal{K}_j(\mathbf{X}))), \label{Eq:space}
\end{align}
where $\phi_v$ and $\phi_a$ are the feature extractors for visual and audio signals, respectively.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{network}
\caption{We design a 3D convolutional neural network to encode pose kernels (audio) and 2D pose detection (image) to obtain the 3D metric reconstruction of a pose. We combine audio and visual features using a series of convolutions (audio features from multiple microphones are fused via max-pooling). The audio-visual features are convolved with a series of $3\times 3\times 3$ convolutional kernels to predict the set of 3D heatmaps for the joints. We use multi-stage prediction, inspired by the convolutional pose machine architecture~\cite{wei2016cpm}, which effectively increases the receptive field while avoiding vanishing gradients. }
\label{fig:architecture}
\vspace{-0.05in}
\end{figure*}
Specifically, $\phi_v$ is the visual feature evaluated at the projected location of $\mathbf{X}$ onto the image $\mathbf{I}$, i.e.,
\begin{align}
\phi_v(\mathbf{X};\mathbf{I}) = \{\mathbf{p}_i(\Pi \mathbf{X})\}_{i=1}^N,
\end{align}
where $\mathbf{p}_i \in [0,1]^{W\times H}$ is the likelihood of the $i^{\rm th}$ landmark in the image $\mathbf{I}$. $\Pi$ is the operation of 2D projection, i.e., $\mathbf{p}_i(\Pi \mathbf{X})$ is the likelihood of the $i^{\rm th}$ landmark at 2D projected location $\Pi \mathbf{X}$.
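A minimal sketch of this visual-feature lookup, assuming hypothetical pinhole intrinsics and detector heatmaps (none of these values come from the paper):

```python
import numpy as np

# Hypothetical pinhole intrinsics; the actual camera calibration is not given.
K_int = np.array([[500.0,   0.0, 160.0],
                  [  0.0, 500.0, 120.0],
                  [  0.0,   0.0,   1.0]])
W, H, N = 320, 240, 15                    # image size, number of landmarks
heatmaps = np.zeros((N, H, W))            # 2D likelihoods p_i from a detector
heatmaps[0, 120, 185] = 0.7               # a detected peak for landmark 0

def phi_v(X):
    """Sample every landmark heatmap at the 2D projection of a 3D point X."""
    u, vv, w = K_int @ X                  # perspective projection of X
    px, py = int(round(u / w)), int(round(vv / w))
    return heatmaps[:, py, px]

feat = phi_v(np.array([0.1, 0.0, 2.0]))   # 3D point in camera coordinates
# feat[0] reads the landmark-0 likelihood (0.7) at the projected pixel
```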
$\phi_a(\mathcal{K}_j(\mathbf{X}))$ is the audio feature from the $j^{\rm th}$ pose kernel evaluated at $\mathbf{X}$. We use the max-pooling operation to fuse the multiple received audio signals, which is agnostic to the location and ordering of the audio signals.
This facilitates scene generalization: the learned audio features can be applied to a new scene with a different audio configuration (e.g., number of sources, locations, scene geometry).
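The max-pooling fusion is order-agnostic by construction, which the following numpy sketch illustrates with random, hypothetical per-microphone features:

```python
import numpy as np

rng = np.random.default_rng(1)
M, C, D = 4, 8, 10                      # microphones, feature channels, grid
feats = rng.random((M, C, D, D, D))     # hypothetical audio features per mic

# Order-agnostic fusion: element-wise max over the microphone axis.
fused = feats.max(axis=0)

# Permuting the microphones leaves the fused feature unchanged.
assert np.array_equal(fused, feats[rng.permutation(M)].max(axis=0))
```

Because the fused tensor depends only on the set of features, not their order, the network cannot memorize a particular microphone arrangement.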
We learn $g_{\boldsymbol{\theta}}$ and $\phi_a$ by minimizing the following loss:
\begin{align}
\mathcal{L} = \sum_{\mathbf{I},\mathcal{K}, \widehat{\mathbf{P}}\in \mathcal{D}} \|g_{\boldsymbol{\theta}} (\phi_v,~ \underset{j}{\operatorname{max}}~ \phi_a(\mathcal{K}_j)) - \{\widehat{\mathbf{P}}_i\}_{i=1}^N\|^2,
\end{align}
where $\{\widehat{\mathbf{P}}_i\}_{i=1}^N$ are the ground-truth 3D heatmaps, and $\mathcal{D}$ is the training dataset. Note that this paper focuses on the feasibility of metric lifting using audio signals; we therefore use an off-the-shelf human pose estimator to obtain $\{\mathbf{p}_i\}_{i=1}^N$~\cite{8765346}.
\subsection{Network Design and Implementation Details}
We design a 3D convolutional neural network (3D CNN) to encode the 2D pose detection from an image (using OpenPose~\cite{8765346}) and four audio signals from microphones. Inspired by the design of the convolutional pose machine~\cite{wei2016cpm}, the network is composed of six stages, which increase the receptive field while avoiding the issue of vanishing gradients. The 2D pose detection is represented by a set of heatmaps that are encoded in a $70\times 70\times 50$ voxel grid via inverse projection, forming 3D heatmaps with 16 channels. The pose kernel from each microphone is spatially encoded over a $70\times 70\times 50$ voxel grid and convolved with three 3D convolutional filters, followed by max-pooling across the four audio channels. Each voxel is 5 cm wide, resulting in a 3.5 m$\times$3.5 m$\times$2.5 m space. The audio features are combined with the visual features to form audio-visual features, which are transformed by a set of 3D convolutions to predict the 3D heatmaps for each joint. The prediction, in turn, is combined with the audio-visual features to form the next stage's prediction.
The network architecture is shown in Figure \ref{fig:architecture}.
We implemented the network in PyTorch and trained it on a server with four Tesla V100 GPUs, using the SGD optimizer with a learning rate of 1. The model was trained for 70 epochs (around 36 hours) until convergence.
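As a sanity check on the grid arithmetic, a short sketch verifying the metric extent of the voxel grid and the growth of the effective receptive field with stacked $3\times 3\times 3$ convolutions (the layer count of five is hypothetical, since the per-stage depth is not stated here):

```python
# Voxel-grid extent: 70 x 70 x 50 voxels at 5 cm per voxel.
grid, voxel = (70, 70, 50), 0.05
extent = tuple(round(n * voxel, 2) for n in grid)
# extent == (3.5, 3.5, 2.5) metres

# Effective receptive field of stacked 3x3x3 convolutions (stride 1):
# each additional layer adds (kernel - 1) voxels per axis.
def receptive_field(n_layers, kernel=3):
    return 1 + n_layers * (kernel - 1)

rf = receptive_field(5)       # 11 voxels, i.e. 0.55 m at 5 cm per voxel
```

This is why the multi-stage design matters: repeating small-kernel stages widens the receptive field without resorting to large 3D kernels.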
\section{Old texts}
\section{Dataset}
* We collected synchronized audio and vision data in 6 different environments across 6 different people.
* Data is recording using a Zoom F8n field recorder with commercial speakers and mics.
* Each person is asked to perform daily tasks in the environments while our audio and vision sensors are recording.
* In all environments, audio sensor and camera are registered to the same environment using colmap.
* The result is a dataset consisting of 5000 frames
\section{Implementation}
* We train using a batch size of xxx, and learning rate of xxx.
* Data is being trained on a server with 8xv100 GPUs for xx hours.
\section{Evaluation}
* We perform different kinds of test showing that our algorithm is working across different environments
* We compare with a base line where no audio sensor is used, and conducted a ablation study showing the performance gain with increased number of audio sensors
In this section, we will elaborate on how audio impulse response is related to geometry, and how we eventually encode audio responses into the 3D space for pose landmark likelihood estimation. We will start from a simple example illustrating how audio changes with respect to geometry, and then dive deep into the mathematics behind.
\textbf{Will sound change according to geometry (or pose)?}
Consider a simple example where a cubic object is placed in a room.
We use a speaker to shine a signal towards the object, and use a microphone to capture the signal bouncing off the object and room, as shown in figure \ref{fig:encoding_pose_kernel}.
We rotate the object around its center and plot the \textit{cross correlation} value between the received signal when (1) object is at pose $\theta$ ($R(\theta)$) and (2) object is at initial pose $\theta_0$ ($R(\theta_0)$)
We can see that cross correlation changes with different pose $\theta$.
\begin{figure}[hbt].
\centering
\vspace{-0.05in}
\includegraphics[width=\linewidth]{dummy.pdf}
\vspace{-0.25in}
\caption{(a) Simulation scenario, (b) Signal cross correlation value changes with respect to pose $\theta$}
\label{fig:encoding_pose_kernel}
\vspace{-0.05in}
\end{figure}
This is because, different pose of the object creates different reflection profile (or echos) of the sound, and eventually leading to the variation of sound signal received by the microphone.
\textbf{Inferring geometry from sound:}
Now let us bring more maths to the problem.
We first consider an open space without room echos.
Assume the speaker is placed at location $P_s$, and microphone is placed at location $P_m$.
Denote the transmitted sound from speaker as $s(t)$ and received sound captured from microphone as $r(t)$.
As we talked about in section 3.1, we can estimate an impulse response $k(t)$ caused by sound reflections through deconvolving $r(t)$ with $s(t)$.
As shown in figure \ref{fig:impulse}(a), there are a bunch of peaks in the impulse response $k(t)$. We use a Dirac delta function $\delta(t-t_i)$ to represent each peak at time $t = t_i$
Thus the impulse response can be represented as:
\begin{equation}
k(t) = \sum_{i = 1} ^N A_i \cdot\delta(t - t_i)
\end{equation}
where $A_i$ is the amplitude of the peak, and $N$ is the total number of peaks.
\begin{figure}[hbt].
\centering
\vspace{-0.05in}
\includegraphics[width=\linewidth]{dummy.pdf}
\vspace{-0.25in}
\caption{(a) Impulse response illustration. (b) A peak in impulse response corresponds to a potential reflector on the eclipse.}
\label{fig:impulse}
\vspace{-0.05in}
\end{figure}
For a given peak $A_i \delta(t-t_i)$ at time instance $t = t_i$, it actually corresponds to a reflection path of the audio signal. This is true because after convolution with original source signal $s(t)$, the result is a delayed and attenuated copy of original source signal $s(t)$, or in other words, creating echo $s^i(t)$ where
\begin{equation}
s^i(t) = A_i \delta(t - t_i) * s(t) = A_i s(t - t_i)
\end{equation}
This means, each peak $A_i \delta(t-t_i)$ in the impulse response $k$, corresponds to a signal path with time of flight equals to $t_i$.
Then, basic physics tells us the distance between the microphone speaker pair and the reflector would be
\begin{equation}
\label{equation:distance}
d_{im} + d_{is} = t_i \cdot v
\end{equation}
where $v$ is the speed of sound, and $d_{im,s}$ are the distances between the $i^th$ reflector and the microphone, speaker.
This implies, the reflector is on an ecilipse trajectory with the speaker and mic at the 2 focuses, as shown in figure \ref{fig:impulse}(b).
Then let us consider the case where the reflector is inside a room.
By measuring in the scenario with and without person, we can get $2$ impulse responses $k(t)$ and $k_0(t)$, and a subtraction will give the human related impulse response $k_j(t)$ (measured at the $j^{th}$ speaker and microphone pair) as:
\begin{equation}
k_j(t) = k(t) - k_0(t)
\end{equation}
Finally by applying the physics from equation \ref{equation:distance}, we connect this human related impulse response $k_j$ with human pose geometry.
\textbf{Spatial encoding of audio impulse response:}
We encode audio impulse response to 3D and creates an audio heatmap $W_j\in [0,1]^{D\times D\times D}$ in the physical frame (discretized into $D\times D\times D$), according to the audio-geometry math above.
One important thing to note, however, is that, unlike vision, the speaker and microphone that we use do not offer bearing information. Any point in the 3D space with the same distance to the speaker/mic pair will provide exactly the same time-of-flight, and thus contributes exactly the same to the audio impulse response.
To fully recover this omni-directional characteristic, we use a the following method for data encoding:
Assume the microphone is located at $P_{jm}(x_m,y_m,z_m)$, and speaker is located at $P_{jm}(x_s,y_s,z_s)$ in the global frame.
Then for any given point $P(x,y,z)$, the 3D heatmap value $W_j(x, y, z)$ would be:
\begin{equation}
W_j(x, y, z) = k_j((d_{jm} + d_{js}) / v)
\end{equation}
where $k_j$ is the human related impulse response and $d_{jm,s}$ are the distances between point $P$ and the microphone, speaker location $P_{jm,s}$, i.e.,
\begin{align}
d_{jm} &= ||(x_0,y_0,z_0)-(x_m,y_m,z_m)||_2 \\
d_{js} &= ||(x_0,y_0,z_0)-(x_s,y_s,z_s)||_2
\end{align}
\textbf{Spatial encoding of vision data:}
Vision data is encoded similarly based on propagation model.
The major difference, is, however, vision ray is directional, but does not offer timing information.
We hope a fusion of audio and vision will provide us both bearing, and depth, and thus a 3D human pose.
Recall that for any given image, each pixel on the 2D image corresponds to one ray in the 3D space, obtained from camera intrinsic matrix.
We use OpenPose \cite{openpose} to get 2D joint location for 15 joints on the human body.
These 15 2D joint location are then projected into the 3D space to form 15 signal ray.
Any given point on each of these rays is a possible 3D joint location.
For the purpose of data smoothing, we use a Gaussian representation of the ray to create a 3D heatmap.
Finally, after encoding audio and vision data to the 3D space, we directly regress joint heatmap from this 3D space.
\subsection{Objective function and Implementation}
We used 3D CNN as building blocks to directly regress pose from the $D\times D\times D$ heatmaps for audio and vision.
A multi stage 3D CNN pipeline is proposed, and short cuts are introduced to prevent potential gradient vanishing.
We also adopted a max pooling layer for audio signals after audio feature extracting, so that neural network will not memorize the audio sensor sequence for the sake of best generalizability.
We estimate the joint heatmaps for each of the joints, and a mean square loss is reported at the end of each stage.
The final loss is the sum of all losses for each stage:
\begin{equation}
L = \sum_{i = 1}^N L_i
\end{equation}
\subsection{Overview}
Single view human pose lifting is the problem of estimating 3D human pose from a 2D image, defined by the following equation, where $X$ is the 3D human pose, and $I$ is the 2D image:
\begin{equation}
\label{equ:problem_orig}
X = f(I)
\end{equation}
Due to the lack of depth information, an infinitive number of poses will match with the 2D image. In other words, this is an ill defined problem.
Sound, given its relatively slow speed of propagation, offers a unique opportunity of distance estimation.
Imagine we have a couple of speakers and microphones in the environment.
By emitting sound from speakers and collecting signals reflected off human body at the microphone side, we can estimate the time-of-flight of sound, which can translate to depth measurement (for each body joint).
While theoretically plausible, there are, of course a lot of challenges given the dispersive nature of sound, causing heavy multipaths, especially in indoor environments.
This paper focus on tackling all the challenges and including a new modality, audio $A$, into the pipeline of 3D pose estimation, mathematically denoted as:
\begin{equation}
\label{equ:problem_our}
X = f(I, A)
\end{equation}
More specifically, to extract as much spatial information as we can from audio, we deployed multiple audio sensors (speakers and microphones) around the environment, giving us $k$ different audio measurements $A_1, A_2, ..., A_k$ for each pose:
\begin{equation}
\label{equ:problem_our}
X = f(I, A_1, A_2, ..., A_k)
\end{equation}
A signal processing pipeline $h(\cdot)$ is proposed to effectively cope with the indoor audio multipath, and encode 1D audio data to 3D heatmap space.
Similarly, vision information is also encoded to 3D space through a function $g(\cdot)$:
\begin{equation}
\label{equ:problem_our_specific}
X = f(g(I), h(A_1), h(A_2), ..., h(A_k))
\end{equation}
Finally a 3D CNN is trained on the audio and vision 3D heatmaps to regress human joint locations.
In the rest of this section, we will first give an experimental study showing the feasibility of the idea from signal's perspective.
Then we will elaborate on each component of the pipeline, especially how we design $f$, $g$, and $h$.
Let us first start with feasibility study.
\subsection{Feasibility study: can audio infer geometry?}
\textbf{Acoustic simulation result:}
\textbf{Real world toy example:}
Now let us move to the real world experiment of human pose estimation.
We ask the person to stand still in the same location, while doing different poses.
Similarly, we also record the \textit{cross correlation} value $xcorr$ between two human poses $P_1$ and $P_2$.
We ask the person to do a fixed list of poses, for multiple times.
Table \ref{table:toy} shows the result.
It is evident that if for the same pose, the xcorr value is high, while for different poses, xcorr value is low.
This implies the promise of using audio as a method for pose estimation.
\subsection{How to infer geometry: signal processing background}
However, most of the times, the echos are merged with the original sound signal, making it difficult to interpret.
To extract the reflection profile (or more technically the time domain impulse response $h$), we perform a deconvolution between the receiver signal $R$ and the source signal $S$ \cite{}.
\begin{equation}
h = R *^{-1} S
\end{equation}
In an ideal case where there is no multipath reflection, each object in the environment will correspond to one Dirac delta function in the impulse response.
Assume the speaker and microphone are co-located and synchronized, the location of the peaks exactly represent the time-of-flight.
\textbf{Estimating distance in rich multipath indoor environment}:
Then let us move to a more realistic scenario.
In a typical indoor room environment, where there are a lot of echos, instead of getting a bunch of clear Delta functions in the impulse response, what we get is a complex time domain function with a lot of peaks merged together.
Figure \ref{fig:rir}(a) shows a typical room impulse response.
Each of these peak corresponds to a reflecting surface in the environment, e.g., walls, ceilings, furniture, etc.
Given the massive number of reflecting surfaces and their complex reflecting property, understanding geometry from this complex signal would be prohibitive if not impossible.
However, recall that our goal is to reconstruct human pose, so we are not exactly interested in those room reflections.
What we really need to extract are those human related reflections.
Now, imagine we have 2 time domain impulses, one with person and one without person, as shown in figure \ref{fig:rir}(a) and (b).
If we align them properly and perform a subtraction, the static environmental reflections will be cancelled out, and the result is human related reflections, as shown in figure \ref{fig:rir}(c).
Different peaks in figure \ref{fig:rir}(c) ideally corresponds to reflections from different body parts, and this serves as the physics foundation of the paper.
With the increasing popularity of always-on mics in today's smart voice assistants, it is easy to perform a scan of the room, and get an accurate empty room impulse response.
Building on this, we designed our system.
\begin{figure}[hbt].
\centering
\vspace{-0.05in}
\includegraphics[width=\linewidth]{dummy.pdf}
\vspace{-0.25in}
\caption{}
\label{fig:rir}
\vspace{-0.05in}
\end{figure}
\subsection{Data encoding}
To make our model as general possible,
we adopt a physics based approach to model how signal propagates.
This is because, both audio and vision follows the physics way of propagation.
We try to directly regress the 3D pose from the 3D space based on signal propagation physics.
Let us start with audio data encoding.
\\
\textbf{Audio data encoding:}
Recall that we already have the one-dimensional impulse response (after subtracting the empty-room response); the next task is to connect it with the 3D geometry.
Note that the speaker and microphone we use do not offer bearing information: any point in 3D space at the same distance from the speaker/mic pair produces exactly the same response.
This suggests a 3D encoding based on a polar coordinate system.
Assume the sensor’s location is denoted as $(x_0,y_0,z_0)$ in the global frame.
Then for any given location $P(x,y,z)$, the response equals $RIR(|(x_0,y_0,z_0)-(x,y,z)|)$.\\
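The resulting audio encoding can be sketched as follows (a sketch; the sampling rate, the one-way distance convention, and the names are illustrative assumptions):

```python
import numpy as np

def audio_heatmap(rir, sensor_xyz, grid_pts, fs=48000.0, c=343.0):
    """Fill a 3D grid with RIR values: each grid point P takes the RIR
    sample whose travel time matches the distance |sensor - P|."""
    d = np.linalg.norm(grid_pts - sensor_xyz, axis=-1)  # meters
    idx = np.clip((d / c * fs).astype(int), 0, len(rir) - 1)
    return rir[idx]
```

Evaluating this over a voxel grid yields the audio heatmap channels used as network input.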
\textbf{Vision data encoding:}
Vision data is encoded similarly, based on the propagation model.
The major difference, however, is that a vision ray is directional but offers no timing information.
A fusion of audio and vision can therefore provide both bearing and depth, and thus a 3D human pose.
Recall that for any given image, each pixel on the 2D image corresponds to one ray in 3D space.
We use OpenPose \cite{} to get 2D joint location for 15 joints on the human body.
These 15 2D joint locations are then back-projected into 3D space to form 15 rays.
Any given point on each of these rays is a possible 3D joint location.
To smooth the data, we use a Gaussian representation of each ray to create a 3D heatmap.\\
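The Gaussian ray representation can be sketched as follows (an illustrative sketch; the bandwidth `sigma` is an assumed parameter, not a value from the paper):

```python
import numpy as np

def ray_heatmap(origin, direction, grid_pts, sigma=0.05):
    """Gaussian tube around a back-projected joint ray: each grid point
    is scored by its perpendicular distance to the ray."""
    direction = direction / np.linalg.norm(direction)
    v = grid_pts - origin
    t = v @ direction                      # projection length on the ray
    perp = v - np.outer(t, direction)      # perpendicular component
    d2 = np.sum(perp ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

Evaluating this for each of the 15 rays gives the 15 vision-ray heatmap channels.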
\textbf{Groundtruth encoding:}
Similar to Convolutional Pose Machines \cite{}, we encode each joint label as a 3D Gaussian distribution centered at the groundtruth joint location in 3D space.
Finally, a 3D CNN is trained to regress from the vision-ray and audio-RIR heatmaps to the heatmap of every joint location.
\subsection{Network architecture}
We use 3D convolution layers as the basic building blocks of our network.
The input consists of 4 audio heatmap channels and 15 vision-ray heatmap channels, and the output is 15 channels of joint heatmaps.
Considering the high dimensionality of our data, the network is composed of multiple blocks; the output of each block is compared with the groundtruth to obtain an L2 loss.
We add shortcut connections to avoid potential vanishing gradients, and train the network to minimize the sum of all L2 losses.
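The intermediate-supervision objective described above amounts to summing per-block L2 losses (a minimal sketch with numpy arrays standing in for network outputs; names are illustrative):

```python
import numpy as np

def multi_block_loss(block_outputs, gt_heatmaps):
    """Sum of L2 (mean squared error) losses, one per network block,
    so that every block receives a direct supervision signal."""
    return sum(float(np.mean((out - gt_heatmaps) ** 2))
               for out in block_outputs)
```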
\section{Related work}
This paper is primarily concerned with integrating information from audio signals with single view 3D pose estimation to obtain metric scale. We briefly review the related work in these domains.
\noindent\textbf{Vision based Lifting} While reconstructing 3D pose (a set of body landmarks) from a 2D image is geometrically ill-posed, the spatial relationship between landmarks provides a geometric cue to reconstruct the 3D pose~\cite{cjtaylor}. This relationship can be learned from datasets that include 2D and 3D correspondences such as Human3.6M~\cite{ionescu2013human36m}, MPI-INF-3DHP~\cite{mono-3dhp2017} (multiview), Surreal~\cite{varol17_surreal} (synthetic), and 3DPW~\cite{vonMarcard2018} (external sensors). Given the 3D supervision, the spatial relationship can be directly learned via supervised learning~\cite{tome, sun2018,habibie,chang}. Various representations have been proposed to effectively encode the spatial relationship such as volumetric representation~\cite{pavlakos}, graph structure~\cite{cai19,zhao19,ci19,xu21}, transformer architecture~\cite{yushuke,llopart,li21}, compact designs for realtime reconstruction~\cite{vneck,xneck}, and inverse kinematics~\cite{li_cvpr21}. These supervised learning approaches that rely on the 3D ground truth supervision, however, show limited generalization to images of out-of-distribution scenes and poses due to the domain gap. Weakly supervised, self-supervised, and unsupervised learning have been used to address this challenge. For instance, human poses in videos are expected to move and deform continuously over time, leading to a temporal self-supervision~\cite{hossain}. A dilated convolution that increases temporal receptive fields is used to learn the temporal smoothness~\cite{pavllo,Tripathi20}, a global optimization is used to reconstruct temporally coherent pose and camera poses~\cite{arnab19}, and spatio-temporal graph convolution is used to capture pose and time dependency~\cite{cai19, yu20,liu20}. Multiview images provide a geometric constraint that allows learning view-invariant visual features to reconstruct 3D pose. 
The predicted 3D pose can be projected onto other view images~\cite{rhodin18, rhodin_eccv18,wendt}, stereo images are used to triangulate a 3D pose which can be used for 3D pseudo ground truth for other views~\cite{kocabas,iskakov, iqbal}, and epipolar geometry is used to learn 2D view invariant features for reconstruction~\cite{he20,yao19}. Adversarial learning enables decoupling the 3D poses and 2D images, i.e., 3D reconstruction from a 2D image must follow the distribution of 3D poses, which allows learning from diverse images (not necessarily videos or multiview)~\cite{chen,kudo,wandt19}. A characterization and differentiable augmentation of datasets, further, improves the generalization~\cite{gong, wang20}. With a few exceptions, despite remarkable performance, the reconstructed poses lack the metric scale because of the fundamental ambiguity of 3D pose estimation. Our approach leverages sound generated by consumer-grade speakers to lift the pose in 3D with physical scale.
\noindent\textbf{Multimodal Reconstruction}
Different modalities have been exploited for 3D sensing and reconstruction, including RF-based \cite{rfpose,rfavatar,wipose,jin2018towards,guan2020through}, inertial-based \cite{shen2016smartwatch,yang2020ear}, and acoustic-based \cite{yun2015turning,fan2021aurasense, fan2020acoustic,christensen2020batvision,wilson2021echo,senocak2018learning,chen2020soundspaces} approaches.
Various applications, including self-driving cars \cite{guan2020through}, robot manipulation and grasping \cite{wang2016robot, wang2019multimodal, watkins2019multi, nadon2018multi}, and simultaneous localization and mapping (SLAM) \cite{terblanche2021multimodal, doherty2019multimodal, akilan2020multimodality, singhal2016multi, sengupta2019dnn}, have benefited from multimodal reconstruction.
Audio, given its ambient nature, has attracted unique attention in multimodal machine learning \cite{liu2018towards,rodriguez2018methodology, ngiam2011multimodal,ghaleb2019metric, burnsmulti, mroueh2015deep}.
However, few works \cite{yun2015turning,christensen2020batvision,wilson2021echo} address multimodal geometry understanding with audio as a modality, because heavy audio multipath poses various difficulties for 3D understanding.
Human pose, given its diverse nature, is especially challenging for traditional acoustic sensing and is thus sparsely studied.
While similar signals such as WiFi and FMCW radio have been used for human pose estimation \cite{rfpose,rfavatar,wipose}, audio signals, given their much lower propagation speed, offer more accurate distance measurements than RF-based approaches.
We address the challenge of audio multipath and uncover the potential of audio in accurate metric scale 3D human pose estimation. Specifically, we present the first method that combines audio signals with 2D pose detection to reason about the 3D spatial relationship for metric reconstruction. Our approach is likely to benefit various applications, including smart homes, AR/VR, and robotics.
\section{Results}
We evaluate our method on the PoseKernel dataset by comparing with state-of-the-art and baseline algorithms.
\noindent\textbf{Evaluation Metric} We use the mean per joint position error (MPJPE) and the percentage of correct keypoints (PCK) in 3D as the main evaluation metrics. For PCK, we report PCK@$t$, where $t$ is the error tolerance in cm.
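Both metrics can be computed directly from predicted and groundtruth joint arrays (a sketch; positions are assumed to be in cm):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per joint position error: average Euclidean distance (cm)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def pck(pred, gt, t):
    """PCK@t: fraction of joints within t cm of the groundtruth."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1) <= t))
```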
\noindent\textbf{Baseline Algorithms} Two state-of-the-art baseline algorithms are used. (1) Lifting from the Deep, or \texttt{Vis.:LfD}~\cite{tome} is a vision based algorithm that regresses the 3D pose from a single view image by learning 2D and 3D joint locations
together. To resolve the depth ambiguity, a statistical model is learned to generate a plausible 3D reconstruction. This algorithm predicts the 3D pose directly, and we apply Procrustes analysis to align it with the image projection. (2) FrankMocap (\texttt{Vis.:FrankMocap}~\cite{rong2021frankmocap}) leverages pseudo ground truth 3D poses on in-the-wild images obtained by EFT~\cite{joo2020eft}. Augmenting 3D supervision improves the performance of 3D pose reconstruction. This algorithm predicts the shape and pose using the SMPL parametric mesh model~\cite{SMPL:2015}. None of the existing single view reconstruction approaches, including these baseline methods, produces metric scale reconstruction.
Given their 3D reconstruction, we scale it to a metric scale by using the average human height in our dataset (1.7m).
\noindent\textbf{Our Ablated Algorithms} In addition to the state-of-the-art vision based algorithms, we compare our method by ablating our sensing modalities.
(1) \texttt{Audio$\times$4} uses four audio signals to reconstruct the 3D joint locations to study the impact of the 2D visual information. (2) \texttt{Vis.+Audio$\times$2} uses a single view image and two audio sources to predict the 3D joint location in the 3D voxel space. (3) \texttt{Ours} is equivalent to \texttt{Vis.+Audio$\times$4}.
\subsection{PoseKernelLifter Evaluation}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{qual1.pdf}
\caption{Qualitative results. We test our pose kernel lifting approach in diverse environments including (a) basement, (b) living room, (c) laboratory, etc. The participants are asked to perform daily activities such as sitting, squatting, and range of motion. (d): A failure case of our method: severe occlusion.}
\label{fig:result_vis}
\vspace{-0.05in}
\end{figure*}
\vspace{-0.05in}
Among the six environments in the PoseKernel dataset, we use four environments for training and two for testing. The training data consists of diverse poses performed by six
adults (whose heights range between 155 cm and 180 cm) and two minors (with heights 140 cm and 150 cm). The testing data includes two adults and one minor whose heights range between 140 cm and 180 cm.
\noindent\textbf{Comparison} We measure the reconstruction accuracy using the MPJPE metric, summarized in Table~\ref{mpjpe}. As expected, state-of-the-art vision based lifting approaches (\texttt{Vis.:LfD} and \texttt{Vis.:Frank}) that predict 3D human pose in a scale-free space are sensitive to the heights of the subjects, resulting in 18 $\sim$ 40 cm mean error for adults and 40 $\sim$ 60 cm for minors, i.e., the error is larger for minor participants because their heights are very different from the average height of 1.7 m. \texttt{Vis.:Frank} outperforms \texttt{Vis.:LfD} because it is trained on more data and thus estimates poses more accurately. Nonetheless, our pose kernel, which is designed for metric scale reconstruction, significantly outperforms these approaches, and its performance does not depend on the heights of the participants. In fact, it produces around 20\% smaller error for minor participants than for adult participants because of their smaller scale. Similar observations can be made for PCK, summarized in Table~\ref{pck}.
\noindent\textbf{Ablation Study} We ablate the sensing components of our pose kernel approach. As summarized in Tables~\ref{mpjpe} and \ref{pck}, the 3D metric lifting leverages the strong cue from visual data. Without the visual cue, i.e., $\texttt{Audio$\times$4}$, the reconstruction is highly erroneous, while combining audio as a complementary signal (\texttt{Vis.+Audio$\times$2} and \texttt{Ours}) significantly improves the accuracy.
While providing metric information, reconstructing the 3D human pose from audio signals alone (\texttt{Audio$\times$4}) is very challenging because the signals are (1) non-directional: a received signal is an integration of audio signals over all angles around the microphone, which, unlike visual data, provides no bearing angle; (2) non-identifiable: the reflected audio signals are not associated with any semantic information, e.g., hand, arm, and head, so it is difficult to tell where a specific reflection is coming from; and (3) slow: due to the required linear frequency sweeping (10 Hz), the received signals are blurred in the presence of body motion, equivalent to an extremely blurry image created by a 100 ms exposure with a rolling shutter. Nonetheless, augmenting audio signals improves 3D metric reconstruction regardless of the heights of the participants.
\noindent\textbf{Generalization} We report the results in completely different testing environments, which show the strong generalization ability of our method. For each environment, the spatial arrangement of the camera and audio/speakers is different, depending on the space configuration. Figure \ref{fig:result_vis} visualizes the qualitative results of our 3D pose lifting method, where we successfully recover the metric scale 3D pose in different environments. We also include a failure case in the presence of severe occlusion, as shown in Figure \ref{fig:result_vis}(d).
\section{Introduction} \label{introduction}
The question of whether the Universe is spatially open, flat or
closed, which can be quantitatively addressed by determining the
cosmic curvature parameter (hereafter $\Omega_k$), has been the
subject of intense recent discussion. Nowadays, a large number of
independent observations have provided strong evidence supporting
the spatial flatness of our Universe within the current precision
\citep{Cai2016,Li2016c,Wei2017,Wang2017,Rana2017}, which is well
consistent with the predictions of different inflationary models
\citep{Ichikawa2006,Virey2008}. However, the recent Planck 2018
results, which provided the latest measurements of temperature and
polarization of the cosmic microwave background anisotropy
\citep{Planck Collaboration}, tend to favor a spatially closed
Universe over $2\sigma$ confidence level
($\Omega_k=-0.044^{+0.018}_{-0.015}$). It should be pointed out that
in several recent works \citep{Liu19} the validity of the result
might be as problematic, due to its strong dependency on the assumed
non-flat $\Lambda$CDM model. Such tension becomes one of the major
puzzles in modern cosmology. On the one hand, the constraints on
$\Omega_k$ also have important consequences for properties of dark
energy \citep{Clarkson2007,Gong2007}, considering the correlation
between the cosmic curvature and dark energy model used in fitting
different observational data, such as the Baryon acoustic
oscillation, Hubble parameter, and angular size measurement
\citep{Ryan19}. Others have argued that the derived values of the
cosmic curvature are highly dependent on the validity of the
background FLRW metric, a more fundamental cosmological assumption
which has been investigated in many recent studies with strongly
lensed SNe Ia and gravitational waves \citep{Denissenya2018,Cao19a}.
In any case, in order to better understand the curvature tension and
the nature of dark energy, it is necessary to emphasize the
importance of determining model-independent measurements of the
spatial curvature with different geometrical methods. This could
also be the reason why providing and forecasting constraints on $\Omega_k$ from
current and future astrophysical observations has become an
outstanding issue in modern cosmology
\citep{Cai2016,Li2016c,Wei2017,Wang2017,Rana2017}.
In this paper, we focus on the Distance Sum Rule (DSR) in the
framework of strong gravitational lensing (SGL) by early-type
galaxies \citep{Takada2015,Denissenya2018,Ooba2018}. More
specifically, the ratio of two angular diameter distances,
$D_{ls}/D_s$ (the lens-source to observer-source distance ratio), can be directly
inferred from the observations of Einstein radii, with precise
measurements of central velocity dispersions of the lensing galaxies
\citep{Bolton08,Cao2012,Cao2015b}. Meanwhile, the distances at
redshifts $z_l$ and $z_s$ are always measured from several popular
distance indicators covering these redshifts, such as SNe Ia, Hubble
parameters \citep{Clarkson2008,Clarkson2007,Shafieloo2010,Li2016c}
and intermediate-luminosity radio quasars \citep{Cao17a,Cao17b}
acting as standard candles, cosmic chronometers and standard rulers,
respectively. However, the uncertainty of the latest $\Omega_k$
constraint was quite large due to the limited sample size of
available SGL data \citep{Xia2017,Qi2019}, focusing on 118
galactic-scale strong lensing systems from the Sloan Lens ACS Survey
(SLACS), BOSS emission-line lens survey, Lens Structure and
Dynamics, and Strong Lensing Legacy Survey \citep{Cao2015b}.
Besides, only a small fraction of the lensing data can be utilized,
due to the mismatch of redshifts between the lensing systems
(especially the background sources) and the current SNe Ia sample.
Such a disadvantage of this method will become more severe in the near
future, when the source redshift of galactic-scale strong lensing
systems is expected to reach $z\sim 5$ in the forthcoming Large
Synoptic Survey Telescope (LSST) \citep{Oguri10,Vermai2019}.
Therefore, the direct luminosity distances with high redshifts would
significantly contribute to a robust measurement of the cosmic
curvature, which has been demonstrated in a recent analysis of UV
and X-ray quasars \citep{Risaliti2018}.
On the other hand, in the gravitational wave (GW) domain, one could
use the GW signals from inspiralling and merging compact binaries to
derive luminosity distances \citep{Schutz86}. Such methodology has
been realized by Advanced LIGO and VIRGO detectors, with the
detection of different types of signals including a binary neutron
star system \citep{Abbott16,Abbott17}. Specially, the Hubble diagram
of these so-called standard sirens could be directly constructed and
applied in cosmology, with the redshifts measured from their
electromagnetic (EM) counterparts. Looking ahead, the next
generation detectors like the DECi-Hertz Interferometric
Gravitational Observatory (DECIGO), a future Japanese space GW
antenna, will extend the detection limit of Advanced LIGO and
Einstein Telescope to the earlier stages of the Universe ($z\sim 5$),
yielding detections of $\sim 10^4$ NS-NS binaries per year. In
addition, the detection of these binary systems using the
second-generation technology of space-borne DECIGO takes place in
the inspiral phase a long time ($\sim 5$ years) before they enter the
LIGO frequency range, with a signal-to-noise ratio (SNR) much higher than
that of the current and future ground-based GW detectors. Therefore,
in this study we propose that, combined with the future standard-siren
sample from DECIGO, the largest compilation of SGL data expected from
LSST can be used to infer the cosmic curvature, resulting in more precise
constraints. This paper is organized as follows. In Sec. 2 and 3, we
will briefly introduce the methodology and the simulated data
(DECIGO standard sirens and LSST strong lenses) in this analysis.
The forecasted constraints on the cosmic curvature are presented in
Sec. 4. Finally, we give summaries and discussions in Sec. 5.
Throughout the paper, the flat $\Lambda$CDM is taken as the fiducial
cosmological model in the simulation, with $\Omega_m=0.315$ and
$H_0=67.4$ km/s/Mpc from the latest \textit{Planck} observations
\citep{Planck Collaboration}.
\section{Methodology}
As one of the basic assumptions in cosmology, the cosmological
principle (i.e., the Universe is homogeneous and isotropic at large
scales) has been widely applied in different cosmological studies.
Now the FLRW metric is introduced to describe the space-time of the
Universe (where the speed of light $c=1$)
\begin{equation}\label{eq1}
ds^2=-dt^2+a^2(t)(\frac{1}{1-Kr^2}dr^2+r^2d\Omega^2),
\end{equation}
where $K=+1, 0, -1$ denotes the spatial curvature for closed,
flat and open geometries, respectively. Note that the curvature parameter is directly related to the
constant $K$ as $\Omega_k=-K/(a_0^2 H_0^2)$. Now we respectively denote the dimensionless comoving distances
$d_l\equiv d(0, z_l)$, $d_s\equiv d(0, z_s)$ and $d_{ls}\equiv d(z_l, z_s)$, in the framework of
strong lensing system with a source (at redshift $z_s$) observed on the image plane (at redshift $z_l$).
Note that the dimensionless comoving distances ($d$)
\begin{equation}
d(z_l, z_s)=\frac{1}{\sqrt{|\Omega_k|}}S_K\left(
\sqrt{|\Omega_k|}\int^{z_s}_{z_l}\frac{H_0dz'}{H(z')} \right),
\end{equation}
where
\begin{equation}\label{eq3}
S_K(x)=\left\{
\begin{array}{lll}
\sin(x)\qquad \,\ \Omega_k&<0, \\
x\qquad\qquad \,\ \Omega_k&=0, \\
\sinh(x)\qquad \Omega_k&>0. \\
\end{array}
\right.
\end{equation}
are connected with the angular diameter distance ($D_A$) as $d(z_l, z_s)= (1+z_s)H_0 D_A(z_l,z_s)$. As was originally proposed in \citet{Ratra88}, the distance sum rule in non-flat FLRW models gives
\begin{equation}\label{eq4}
d_{ls}={d_s}\sqrt{1+\Omega_kd_l^2}-{d_l}\sqrt{1+\Omega_kd_s^2}.
\end{equation}
This relation holds for $d'(z)>0$ and a one-to-one correspondence between cosmic
time $t$ and redshift $z$ \citep{Bernstein06}. Such a simple relation,
which was first used to obtain model-independent measurements of the
spatial curvature \citep{Clarkson2008,Qi2019}, has also been
recently discussed to test the validity of the FLRW metric in the
Universe \citep{Qi2019b} based on different types of gravitational
lensing events. Now we could rewrite this fundamental relation so
that the strong lensing observations (from LSST lenses) and
luminosity distances (from DECIGO standard sirens) are encoded
\begin{equation} \label{smr}
\frac{d_{ls}}{d_s}=\sqrt{1+\Omega_kd_l^2}-\frac{d_l}{d_s}\sqrt{1+\Omega_k
d_s^2}.
\end{equation}
The left-hand side can be derived from the source/lens distance ratio
$d_{ls}/d_s=D^{A}_{ls}/D^{A}_s$, based on high-resolution imaging
and spectroscopic observations in SGL systems \citep{Cao2015b}.
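The sum rule can be verified numerically for a toy curved $\Lambda$CDM model (a sanity-check sketch; the parameter values are illustrative and not part of the analysis pipeline):

```python
import numpy as np

def dimensionless_distance(z1, z2, om=0.315, ok=0.1, n=4001):
    """d(z1,z2) of Eq.(2), with E(z)=sqrt(om(1+z)^3+ok(1+z)^2+(1-om-ok)),
    computed via simple trapezoidal integration."""
    z = np.linspace(z1, z2, n)
    f = 1.0 / np.sqrt(om * (1 + z) ** 3 + ok * (1 + z) ** 2 + (1.0 - om - ok))
    chi = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))
    if ok > 0:
        return np.sinh(np.sqrt(ok) * chi) / np.sqrt(ok)
    if ok < 0:
        return np.sin(np.sqrt(-ok) * chi) / np.sqrt(-ok)
    return chi

# distance sum rule, Eq.(4): d_ls = d_s*sqrt(1+ok*d_l^2) - d_l*sqrt(1+ok*d_s^2)
ok = 0.1
dl = dimensionless_distance(0.0, 0.5, ok=ok)
ds = dimensionless_distance(0.0, 2.0, ok=ok)
dls = dimensionless_distance(0.5, 2.0, ok=ok)
residual = dls - (ds * np.sqrt(1 + ok * dl ** 2) - dl * np.sqrt(1 + ok * ds ** 2))
```

The residual vanishes up to integration error, confirming that Eq.~(4) is an identity in any FLRW model.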
For a strong lensing system with an early-type galaxy acting as the
intervening lens, one typical feature is that the Einstein radius
($\theta_E$) depends on the source/lens distance ratio
($d_{ls}/d_s$), the lens velocity dispersion ($\sigma$), and the
density profiles of the lens galaxies. Such methodology was
originally proposed in \citet{Futamase01} and extended to different
SGL samples \citep{Bolton08,Cao2012,Cao2015b,Chen19}, with the aim
of quantitatively studying the redshift evolution of cosmic equation
of state \citep{Li16,Liu19}, measuring the speed of light at
different redshifts \citep{Cao18,Cao20}, and testing the General
Relativity at large scale \citep{Cao17c,Collett18}. In this paper,
three different models will be included in our analysis to describe
the mass distribution of early-type galaxies: Singular Isothermal
Ellipsoid (SIE) lens model, Power-law lens model, and Extended
power-law model \citep{Chen19,Zhou20}. If the lens mass profile can
be approximately described by SIE, the distance ratio is expressed
as \citep{Koopmans06}
\begin{equation} \label{SIE_E}
\frac{d_{ls}}{d_s}=\frac{c^2\theta_E}{4\pi \sigma_{SIE}^2}
=\frac{c^2\theta_E}{4\pi \sigma_{0}^2f_E^2},
\end{equation}
where $\sigma_{SIE}$ and $c$ respectively denote the SIE velocity
dispersion and the speed of light. The
parameter $f_E$ is introduced to quantify different systematics that
could change the observed multiple image separation or generate the
difference between $\sigma_{SIE}$ and the observed velocity
dispersion of stars ($\sigma_{0}$). The relation between the
measurement of $\sigma_{0}$ from spectroscopy and that estimated
from the SIE model has been extensively discussed in
\citet{Ofek2003,Cao2012}. In the second case, we take into account a
spherically symmetric power-law mass distribution ($\rho\sim
r^{-\gamma}$, where $r$ is the spherical radial coordinate from the lens
center) to generalize the simplest Singular Isothermal Sphere lens
model, considering the non-negligible deviation from SIS
($\gamma=2$) based on recent observations of the density profiles
of early-type galaxies \citep{Koopmans06,Humphrey10,Sonnenfeld13a}.
Now the corresponding distance ratio is rewritten as
\citep{Ruff2011,Koopmans06,Bolton2012}
\begin{equation} \label{sigma_gamma}
\frac{d_{ls}}{d_s}=\frac{c^2\theta_E}{4\pi
\sigma_{ap}^2}\left(\frac{\theta_{ap}}{\theta_E}\right)^{2-\gamma}f^{-1}(\gamma),
\end{equation}
where $f(\gamma)$ is a function of the radial mass profile slope
($\gamma$) and $\sigma_{ap}$ denotes the projected, luminosity
weighted average of the velocity dispersion inside the circular
aperture $\theta_{ap}$ (See \citet{Cao2015b} for the derivation of
equivalent $\sigma_{ap}$ within rectangular apertures). One
limitation of such a power-law lens model is its underlying assumption
that the distribution of stellar mass follows the same power law as
that of the total mass, together with vanishing velocity anisotropy
\citep{Koopmans05}. Consequently, we take into account these
uncertainties by introducing a general mass model for the early-type
lens galaxies, with the total (i.e. luminous plus dark-matter) mass
density distribution ($\rho(r)\sim r^{-\alpha}$) and the luminosity
density profile ($\nu(r)\sim r^{-\delta}$). Here we choose to
consider the anisotropy of the stellar velocity dispersion in this
analysis, which is quantified by a new parameter $\beta(r) = 1 -
{\sigma^2_\theta} / {\sigma^2_r}$, where $\sigma_\theta$ and
$\sigma_r$ are the tangential and radial velocity dispersions,
respectively. In the framework of such extended power-law model, the
distance ratio can be computed from the radial Jeans equation in
spherical coordinate system \citep{Koopmans06}, by projecting the
dynamical mass to the lens mass within the Einstein radius
\begin{eqnarray}\label{sigma_alpha_delta}
\nonumber
\frac{d_{\rm ls}}{d_{\rm s}}&=& \left(\frac{c^2}{4\sigma_{ap}^2}\theta_{\rm E}\right)\frac{2(3-\delta)}{\sqrt{\pi}(\xi-2 \beta)(3-\xi)} \left( \frac{\theta_{\rm ap}}{\theta_{\rm E}}\right)^{2-\alpha}\\
&\times&\left[\frac{\lambda(\xi)-\beta\lambda(\xi+2)}{\lambda(\alpha)\lambda(\delta)}\right]~,
\end{eqnarray}
where $\xi=\alpha+\delta-2$,
$\lambda(x)=\Gamma(\frac{x-1}{2})/\Gamma(\frac{x}{2})$. It is
apparent that this extended power-law model will reduce to the
power-law lens model when $\delta=\alpha$, i.e., the distribution of
stellar mass follows the same power law as that of the total mass.
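Equation (8), together with its reduction to the SIS case when $\alpha=\delta=2$ and $\beta=0$, can be sketched as follows (an illustrative implementation; the parameter values in the check are arbitrary):

```python
import math

def lam(x):
    """lambda(x) = Gamma((x-1)/2) / Gamma(x/2)."""
    return math.gamma((x - 1) / 2.0) / math.gamma(x / 2.0)

def dist_ratio_extended_pl(theta_E, theta_ap, sigma_ap, alpha, delta, beta):
    """d_ls/d_s of Eq.(8); theta_E, theta_ap in radians, sigma_ap in km/s."""
    c = 299792.458  # speed of light, km/s
    xi = alpha + delta - 2.0
    pre = c ** 2 / (4.0 * sigma_ap ** 2) * theta_E
    geo = 2.0 * (3.0 - delta) / (math.sqrt(math.pi) * (xi - 2.0 * beta) * (3.0 - xi))
    aperture = (theta_ap / theta_E) ** (2.0 - alpha)
    bracket = (lam(xi) - beta * lam(xi + 2.0)) / (lam(alpha) * lam(delta))
    return pre * geo * aperture * bracket
```

For $\alpha=\delta=2$ and $\beta=0$ this reduces to the SIS expression $c^2\theta_E/(4\pi\sigma^2)$ of Eq.~(6), since $\lambda(2)=\sqrt{\pi}$.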
Combining the above equations with the error propagation formula, we can obtain the uncertainty of the SGL systems
($\sigma_{SGL}$) for different lens models, based on the
observational uncertainties of the Einstein radius and velocity
dispersion. The distance information $d(z)$ on the right-hand side of
Eq.~(\ref{smr}) is determined by luminosity distances from
gravitational wave data. In this paper, we will present an updated
estimation of the cosmic curvature or the FLRW metric from the
largest SGL sample by LSST and future GW observations by DECIGO.
\begin{figure*}
\begin{center}
\includegraphics[width=0.45\linewidth]{fig1_1.eps}
\includegraphics[width=0.45\linewidth]{fig1_2.eps}
\end{center}
\caption{The scatter plot of the simulated LSST lensing systems,
with the gradient color denoting the uncertainties of the Einstein radius
and lens velocity dispersion. }
\end{figure*}
\section{Simulations from DECIGO and LSST}
\subsection{Strong lenses from LSST}
As one of the most important wide-area and deep surveys besides the
Dark Energy Survey (DES) \citep{Frieman2004}, the upcoming Large
Synoptic Survey Telescope (LSST) is expected to monitor $\sim 10^5$
strong gravitational lenses in the most optimistic discovery
scenario, by repeatedly scanning nearly half of the sky for ten
years \citep{Collett15}. Such tremendous increase of known
galaxy-scale lenses by orders of magnitude will produce extensive
cosmological applications in the near future
\citep{Cao17c,Cao18,Ma19,Cao20}. With high-quality imaging and
spectroscopic data, the Einstein radius of multiple images and the
lens velocity dispersion can be measured precisely and accurately.
In order to assess the performance of forthcoming optical imaging
surveys, the simulation of a realistic population of galaxy-galaxy
strong lenses has been performed \citep{Collett15}. The results
showed that although $\sim 10^5$ strong gravitational lenses are
discoverable in LSST, only a fraction of the SGL sample is available
for our curvature estimation, given the expense of the substantial
follow-up efforts and dedicated spectroscopic observations
(spectroscopic velocity dispersion, spectroscopic confirmation of
the lens and source redshift) \citep{Hlozek19}. Therefore, in this
paper we will simulate a particularly well-selected sub-sample of
LSST lenses with the observations of the foreground deflector and
the background source population
\footnote{github.com/tcollett/LensPop}, following the recent
analysis of multi-object and single-object spectroscopy to enhance
Dark Energy Science from LSST \citep{Mandelbaum19}. More
specifically, it is more realistic to focus only on 5000
well-measured systems with intermediate-mass early-type galaxies
acting as strong gravitational lenses, with the velocity dispersion
of 200 km/s $<\sigma_{ap} <$ 300 km/s. Such a criterion is strongly
supported by the recent findings that for systems with velocity
dispersion between 200 km/s and 300 km/s, there is a good consistency
between the measurement of $\sigma_0$ from spectroscopy and those
estimated from the SIS lens model \citep{Treu06,Cao2016}. In our
simulations, we choose the velocity dispersion function (VDF) from
SDSS Data Release 5 \citep{Choi07} to describe the number density of
these lensing galaxies, the mass distributions of which are well
quantified by the singular isothermal sphere (SIS) model. We take
the fractional uncertainty of the Einstein radius and the observed
velocity dispersion following the uncertainty budget proposed in
\citet{Liu20}.
To assess the analysis of Einstein radius extraction from future
LSST survey, some recent attempts have been made to investigate the
effect of line-of-sight contamination \citep{Hilbert09,Collett16},
which found that for monitorable strong lenses such effect could
introduce a 1-3\% uncertainty in the Einstein radius measurements.
Such error strategy has been extensively used in the simulation of
LSST lens sample, with high-quality (sub-arcsecond) imaging data in
general \citep{Cao17c}. Based on the observations of 32 strong
lensing systems from Strong Lensing Legacy Survey (SL2S), with both
Canada-France-Hawaii Telescope (CFHT) near-infrared ground-based
images or Hubble Space Telescope (HST) imaging data
\citep{Sonnenfeld13}, \citet{Liu20} recently investigated the
possible anti-correlation between the fractional uncertainty of the
Einstein radius ($\Delta \theta_E$) and $\theta_E$. The results
showed that different error strategies should be applied to strong
lenses with different Einstein radii ($\theta_E$), due to the fact
that strong lensing systems with smaller $\theta_E$ will be
accompanied with larger statistical uncertainties. Therefore,
different from the previous work which took a constant precision for
each SGL system observed with HST-like image quality \citep{Cao17c},
we take 8\%, 5\% and 3\% as the average Einstein radius precision
for each system, which could be classified as small Einstein radii
lenses ($0.5"<\theta_E<1.0"$), intermediate Einstein radii lenses
($1"\leq\theta_E<1.5"$), and large Einstein radii lenses
($\theta_E\geq1.5"$) with HST+CFHT imaging data.
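The tiered Einstein-radius error strategy above can be encoded as follows (a sketch; the sample is restricted to $\theta_E>0.5''$, so the first branch covers $0.5''<\theta_E<1''$):

```python
def einstein_radius_precision(theta_E_arcsec):
    """Average fractional uncertainty on theta_E, by Einstein-radius tier."""
    if theta_E_arcsec < 1.0:   # small-radius lenses, 0.5" < theta_E < 1.0"
        return 0.08
    if theta_E_arcsec < 1.5:   # intermediate lenses, 1.0" <= theta_E < 1.5"
        return 0.05
    return 0.03                # large-radius lenses, theta_E >= 1.5"
```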
Moreover, \citet{Liu20} recently proposed that other intrinsic
properties of the lensing system (such as the total mass or the
brightness of the lensing galaxy) could significantly change the
observational precision of lens velocity dispersion. The lessons in
the statistical analysis of 70 intermediate-mass lenses (with
average velocity dispersion of $\sigma_{ap}\sim 230$ km/s) observed
by the Sloan Lens ACS survey (SLACS) \citep{Bolton08} showed that the
fractional uncertainty is strongly correlated with the lens surface
brightness in the $i$-band. To incorporate this effect, we consider
the anti-correlation between these two quantities and take the
best-fitted correlation function obtained in \citet{Liu20} to
simulate the velocity dispersion uncertainty for each LSST lens.
Note that such strategy is different from that of the previous work,
which assigned an overall error of 5\% on each SGL system observed
with detailed follow-up spectroscopic information from other
ground-based facilities \citep{Cao2015b,Zhou20}. The Einstein radius
and velocity dispersion distributions of the simulated LSST lenses
are plotted in Fig.~1.
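The tiered error strategy described above can be made concrete with a short sketch (an illustrative example assuming NumPy; the function name and the toy sample of Einstein radii are ours, not part of the survey pipeline):

```python
import numpy as np

# Tiered Einstein-radius error strategy from the text:
# 8% for small lenses (0.5" < theta_E < 1.0"),
# 5% for intermediate lenses (1.0" <= theta_E < 1.5"),
# 3% for large lenses (theta_E >= 1.5"), HST+CFHT image quality.
def einstein_radius_precision(theta_E):
    if theta_E < 1.0:
        return 0.08
    elif theta_E < 1.5:
        return 0.05
    return 0.03

# Attach a fractional uncertainty to a toy sample of radii (arcsec)
rng = np.random.default_rng(1)
theta = rng.uniform(0.5, 2.5, size=1000)
frac_err = np.array([einstein_radius_precision(t) for t in theta])
```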
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{DECIGO.eps}
\end{center}
\caption{The luminosity distance measurements from 10,000 simulated
GW events observable by the space detector DECIGO.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{figdistribution.eps}
\end{center}
\caption{Redshift distributions for the GWs from DECIGO, and strong
lensing systems (including the source and the lens) from LSST.}
\end{figure}
\begin{table*}
\begin{center}
\begin{tabular}{c| c c c c c c c}
\hline
& $\Omega_k$ & $f_E$ &$\gamma_0$&$\gamma_1$ &$\alpha$ & $\delta$ &$\beta$ \\
\hline
SIS & $0.0001^{+0.012}_{-0.013}$ & $1.000^{+0.002}_{-0.002}$ & $\Box$ & $\Box$ & $\Box$ & $\Box$ & $\Box$\\
\hline
Power-law spherical & $-0.016^{+0.035}_{-0.037}$ & $\Box$ &$2.001^{+0.001}_{-0.001}$ & $0.002^{+0.003}_{-0.003}$ & $\Box$ & $\Box$ & $\Box$ \\
\hline
Extended power-law & $-0.007^{+0.050}_{-0.047}$ & $\Box$ & $\Box$ & $\Box$ &$2.003^{+0.011}_{-0.011}$ & $1.968^{+0.527}_{-0.516}$ &$-0.067^{+0.605}_{-0.325}$ \\
\hline
\end{tabular}
\caption{Summary of constraints on the cosmic curvature $\Omega_k$
and lens model parameters for three types of lens models (see the
text for definitions).} \label{SIE_table}
\end{center}
\end{table*}
\subsection{Standard sirens from DECIGO}
It is well known that GWs from inspiraling binaries could provide a
new, independent probe of the cosmic expansion via standard sirens
\citep{Abbott16,Abbott17}. More importantly, the standard siren
method, which focuses on binary neutron star mergers coupled with
electromagnetic (EM) measurements of the redshift, has already been
used to constrain the Hubble constant at low redshift
\citep{Zhang20}, as well as the cosmic opacity and the distance
duality relation at much higher redshifts \citep{Qi2019b,Qi2019c}.
Using laser interferometry, the DECi-Hertz Interferometric
Gravitational Observatory (DECIGO) is a space mission designed to
open the DECi-Hertz frequency range to GW observations
\citep{Seto01,Kawamura11}. Compared with ground-based detectors
and other space-based detectors such as the Laser Interferometer
Space Antenna (LISA), DECIGO is expected to detect different
populations of GW sources in the unique frequency range of 0.1-1 Hz,
including primordial black holes, intermediate-mass black hole
binary coalescences, neutron star binary coalescences, and black
hole-neutron star binaries in their inspiral phase \citep{Kawamura11}.
The loudest objects in this band of the GW sky are expected to be
merging neutron star binaries, long before they enter the LIGO
frequency range. This advantage significantly increases the
precision of inferences made from chirp signals \citep{Ola20} and
yields orders of magnitude more candidate standard sirens in the
earlier period of the Universe. More specifically, following the
recent estimation of \citet{Kawamura19}, DECIGO will deliver yearly
detections of 10,000 NS-NS systems out to redshift $z\sim5$, based
on the frequency of binary coalescences given above. Note that
although GWs may provide some information about the source redshifts
\citep{Messenger12,Messenger14}, observations of the EM counterparts
or host galaxies by ground-based telescopes are still necessary for
these expected GW signals \citep{Cutler09}. This offers the exciting
possibility we explore:
the ability of deep-redshift standard sirens observed by DECIGO to
validate or refute the flat geometry inferred by the newest Planck
observations.
Following the simulation process by \citet{Geng20}, we generate mock
DECIGO NS-NS standard siren observations based on a flat
$\Lambda$CDM cosmology and assume that their redshifts are known.
The NS mass distribution is chosen uniformly in $[1,2]\,M_\odot$. For
each coalescing NS-NS system with physical masses ($m_1$ and $m_2$)
and symmetric mass ratio ($\eta=m_{1}m_{2}/M_{t}^{2}$), one could
derive the Fourier transform of the GW waveform as
\begin{equation}
\widetilde{h}(f)=\frac{A}{D_{L}(z)}M_{z}^{5/6}f^{-7/6}e^{i\Psi(f)},
\end{equation}
where $A=(\sqrt{6}\pi^{2/3})^{-1}$ quantifies the geometrical
average over NS-NS system's inclination angle, while $D_{L}(z)$
denotes the luminosity distance to the source with the redshifted
chirp mass of $M_{z}=(1+z)\eta^{3/5}M_{t}$. As proposed in previous
work \citep{Maggiore08}, the frequency-dependent phase caused by
orbital evolution ($\Psi(f)$) can be derived from 1.5 (or higher)
post-Newtonian (PN) approximation. For the purpose of uncertainty
estimation, different sources of uncertainties are included in our
simulation of luminosity distance. On the one hand, focusing only on
the inspiral phase of the GW signal, the instrumental uncertainty
for a nearly face-on case is given by
$\sigma^{inst}_{D_{L,GW}}=\frac{2D_{L,GW}}{\rho}$, where $\rho$ is
the combined SNR for the network of space-borne detector and the
factor of 2 is included to quantify the maximal effect of the
inclination on the SNR \citep{Zhao11}. On the other hand, weak
lensing by the intervening large-scale structure introduces an
additional uncertainty, which could potentially bias the results
especially at high redshifts \citep{Sathyaprakash2010}. More
specifically, following the
procedure extensively applied in the literature \citep{Zhang20}, it
was recently proposed that such weak lensing uncertainty could be
modeled as $\sigma^{lens}_{D_{L}}/D_{L}=0.044z$ for the space-based
GW detectors \citep{Cutler09}. Therefore, the total uncertainty on
the luminosity distance is given by
\begin{eqnarray}
\sigma_{D_{L,GW}}&=&\sqrt{(\sigma_{D_{L,GW}}^{\rm inst})^2+(\sigma_{D_{L,GW}}^{\rm lens})^2} \nonumber\\
&=&\sqrt{\left(\frac{2D_{L,GW}}{\rho}\right)^2+(0.05z D_{L,GW})^2}.
\label{sigmadl}
\end{eqnarray}
With the luminosity distances from the standard sirens, the
uncertainty of the distance ratio ${\cal D}_{GW}$ can be expressed
as a function of $D_{L,GW}^s$, $D_{L,GW}^l$, $\sigma_{D_{L,GW}}^s$,
and $\sigma_{D_{L,GW}}^l$ through the error propagation formula
[Eq.~(5)].
Now the final key question to be answered is: how should we
describe the redshift distribution of GW events detectable by
DECIGO? Given the analytical fit of the DECIGO noise spectrum,
including the shot noise, the radiation-pressure noise, and the
acceleration noise \citep{Kawamura19,Kawamura06,Nishizawa10,Yagi11},
the simulated luminosity distances of 10,000 standard sirens from
DECIGO are presented in Fig.~2, with the redshift distribution
following the form provided by
\citet{Sathyaprakash2010,Cutler06,Schneider01}. We refer to
\citet{Geng20} for further details of the simulation of DECIGO
standard sirens. For comparison, Fig.~3 illustrates the excellent
redshift coverage of the simulated DECIGO sample relative to the
lensing observations (both sources and lenses) from LSST.
\section{Results and discussion}
In this section, we describe the observational constraint on the
cosmic curvature using the observational data-set summarized in
Sect. 3. In particular, we simultaneously fit the spatial curvature
parameter and lens model parameters to the LSST lens sample and
luminosity distance data from DECIGO, and find the best-fit of
$\Omega_K$ in the DSR. The statistical quantity $\chi^2$ is written
as
\begin{equation}
\chi^2(\textbf{p},\Omega_k)=\sum_{i=1}^{N} \frac{\left({\cal
D}_{GW}({z}_i;\Omega_k)- {\cal
D}_{SGL}({z}_i;\textbf{p})\right)^2}{\sigma_{\cal D}(z_i)^2},
\end{equation}
with the two factors contributing to the uncertainty of distance
ratio from the observables of the strong lensing systems in LSST and
the luminosity distance measurements from standard sirens in DECIGO.
We assume that the uncertainties of the LSST lenses and the DECIGO
standard sirens are uncorrelated, so they add in quadrature:
$\sigma_D^2=\sigma_{SGL}^{2}+\sigma_{GW}^2$. In order to calculate
the posterior distribution of the model parameters, we use the
Python module emcee
\footnote{https://emcee.readthedocs.io/en/stable/}, which is an
Affine Invariant Markov chain Monte Carlo (MCMC) Ensemble sampler
\citep{Foreman13}, to survey the posterior distribution in parameter
space and to maximize the likelihood function ${\cal L} \sim
\exp{(-\chi^2 / 2)}$.
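The likelihood actually sampled can be sketched as follows (a minimal illustration assuming NumPy; the arrays of distance ratios ${\cal D}_{GW}$ and ${\cal D}_{SGL}$ are assumed to have already been computed from a trial $\Omega_k$ and lens parameters $\textbf{p}$):

```python
import numpy as np

# Chi-square of the equation above: the distance ratio from DECIGO
# sirens, D_GW(z_i; Omega_k), is compared with the one predicted by
# the LSST lens model, D_SGL(z_i; p).  The two uncertainty sources
# are uncorrelated and add in quadrature.
def chi2(d_gw, d_sgl, sigma_sgl, sigma_gw):
    sigma2 = sigma_sgl**2 + sigma_gw**2
    return np.sum((d_gw - d_sgl)**2 / sigma2)

def log_likelihood(d_gw, d_sgl, sigma_sgl, sigma_gw):
    # ln L = -chi^2 / 2, the quantity an MCMC sampler maximizes
    return -0.5 * chi2(d_gw, d_sgl, sigma_sgl, sigma_gw)
```

In practice a wrapper of `log_likelihood` over ($\Omega_k$, $\textbf{p}$) would be handed to `emcee.EnsembleSampler` to survey the posterior distribution.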
We assume three kinds of spherically symmetric mass distributions
(SIS, power-law model, and extended power-law model) for the lensing
galaxies in the cosmic curvature analysis. The 1D and 2D
marginalized distributions with 1$\sigma$ and 2$\sigma$ confidence
level contours for $\Omega_K$ and relevant lens parameters
constrained from the combined LSST+DECIGO data are shown in
Figs.~4-6.
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{figsis.eps}
\end{center}
\caption{The 2-D regions and 1-D marginalized distribution with the
1-$\sigma$ and 2-$\sigma$ contours of all parameters, in the
framework of SIS lens models.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{figpower.eps}
\end{center}
\caption{The 2-D regions and 1-D marginalized distribution with the
1-$\sigma$ and 2-$\sigma$ contours of all parameters, in the
framework of power-law lens profile.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{figextend.eps}
\end{center}
\caption{The 2-D regions and 1-D marginalized distribution with the
1-$\sigma$ and 2-$\sigma$ contours of all parameters, in the
framework of extended power-law lens profile.}
\end{figure}
In the framework of the simplest SIS model for the first application
of the method described above, we obtain the best-fit value of
curvature parameter and the corresponding $1\sigma$ (more precisely
68\% confidence level) uncertainties:
$\Omega_K=0.0001^{+0.012}_{-0.013}$, which indicates that the cosmic
curvature can be constrained with very high precision ($\Delta
\Omega_K\sim10^{-2}$), comparable to that derived from the Planck
CMB power spectra \citep{Planck Collaboration}. Compared with the
previous results obtained in \citet{Rasanen2015}, the constraints
on the cosmic curvature are improved by two orders of magnitude,
benefiting from the significant increase of well-measured strong
lensing systems and standard sirens in the future. Such a conclusion
can also be reached by comparison with other works using different
available SGL sub-samples \citep{Xia2017,Qi2019}. In
this analysis, the lens parameter characterizing the mass
distribution profile $f_E$ is also estimated in a global fitting
without taking any prior. To study the correlation between
$\Omega_K$ and $f_E$, we display the two-dimensional (2D)
probability distributions in the ($\Omega_K$, $f_E$) plane, with the
marginalized $1\sigma$ constraint of the parameter
$f_E=1.000^{+0.002}_{-0.002}$. It is interesting to note that our
results, which strongly support a flat universe together with the
validity of SIS model ($\Omega_K=0$, $f_E=1$), reveal significant
degeneracy between the spatial curvature of the universe and the
$f_E$ parameter, which characterizes the mass distribution of the
lensing galaxies. The numerical results are also summarized in Table
1.
Now we focus our attention on the constraints on the parameters in
the framework of power-law mass density profile, in which the mass
density power-law index of massive elliptical galaxies evolves with
redshift ($\gamma=\gamma_0+\gamma_1\times z_l$). Performing fits on
the data comprising the strong lensing systems in LSST and the
luminosity distance measurements from standard sirens in DECIGO, we
obtain the following best-fit values and corresponding $1\sigma$
uncertainties: $\Omega_K=-0.016^{+0.035}_{-0.037}$,
$\gamma_0=2.001^{+0.001}_{-0.001}$ and
$\gamma_1=0.002^{+0.003}_{-0.003}$. The 1D and 2D marginalized
distribution for $\Omega_K$ and the power-lens model parameters are
shown in Fig.~5. It turns out that the constraint on the cosmic
curvature in this case remains strong compared with that found
for the SIS model, which indicates that our findings are quite
robust. Meanwhile, the contour plots in Fig.~5 show that the
degeneracies among the cosmic curvature and lens parameters
($\gamma_0$) in power-law mass density profile are similar to that
of the SIS model ($f_E$). More specifically, it is easy to find from
the $\Omega_K-\gamma_1$ contour that $\Omega_K$ is strongly
correlated with $\gamma_1$, which indicates that a significant
redshift evolution of the mass density power-law index will result
in a larger cosmic curvature. Therefore, compared with the previous
results focusing on a constant power-law lens index parameter
\citep{Zhou20}, our analysis reveals that the estimation of the
spatial curvature is more sensitive to the measurement of its
possible evolution with redshift. Such a conclusion, which was
suggested by \citet{Ruff2011}, is further supported by
\citet{Sonnenfeld13a} in a combined sample of lenses from SLACS,
SL2S, and LSD. Therefore, additional observational information, such
as dedicated follow-up imaging of the lensed images for a sample of
individual lenses, is necessary in this case. Such high-cadence,
high-resolution, and multi-filter imaging could be obtained through
frequent visits with the Hubble Space Telescope or smaller
telescopes on the ground \citep{Collett14,Wong15}.
Let us finally focus on the performance of the extended power-law
lens model, in which the total density slope, luminosity density
slope and the anisotropy of stellar velocity dispersion are taken as
free parameters. The graphical and numerical results from the
combined DECIGO+LSST data set are displayed in Fig.~6 and Table 1.
The values listed in Table 1 show that the extended power-law lens
model generates competitive constraints on the cosmic curvature
($\Omega_K=-0.007^{+0.050}_{-0.047}$) comparable with the power-law
mass density profile. This is inconsistent with the results
presented in \citet{Xia2017}, where a similar lens model was used
(without considering the effect of the anisotropy of stellar
velocity dispersion) but different combinations of available data
in the EM domain were adopted. Compared with the SNe Ia standard
candles, the advantage of DECIGO standard sirens is that a larger
number of GWs could be observed at much higher redshifts, which
motivates us to calibrate the LSST strong lenses and investigate the
cosmic curvature in the early universe. More interestingly, the
extended power-law lens model is better suited to estimate the mass
distribution of baryonic matter and dark matter in early-type
galaxies. Compared with the previous observational constraints on
the total-mass density profile \citep{Cao2016,Xia2017,Chen19}, the
combined LSST+DECIGO data will improve the constraints significantly
($\alpha=2.003^{+0.011}_{-0.011}$) with respect to current results,
showing the high constraining power that can be reached by the
forthcoming surveys. Furthermore, as can be seen in Fig.~6, the
addition of DECIGO data to the LSST lenses does improve
the constraint on the luminosity density slope
($\delta=1.968^{+0.527}_{-0.516}$) and the anisotropy of the stellar
velocity dispersion significantly
($\beta=-0.067^{+0.605}_{-0.325}$). Such steeper luminosity density
profile and nonzero stellar velocity anisotropy parameter are
consistent with \citet{Cao2016,Zhou20} and also with recent results
of Illustris simulations \citep{Xu16}, focusing on early-type
galaxies with spherically symmetric density distributions. In this
case, auxiliary data such as integral field unit (IFU) spectroscopic
data, especially Adaptive optics (AO) IFU spectroscopy on
8-40m-class telescopes or AO imaging with slit spectroscopy
\citep{Hlozek19}, could provide complementary information of
$\delta$ and $\beta$ in the near future \citep{Barnabe13}.
\begin{figure*}
\begin{center}
\includegraphics[width=0.4\linewidth]{DECIGO_ok.eps} \includegraphics[width=0.4\linewidth]{ET_ok.eps}
\end{center}
\caption{Determination of cosmic curvature with different GW
detectors, future ground-based Einstein Telescope (ET) and satellite
GW observatory (DECIGO).}
\end{figure*}
Finally, in order to demonstrate the advantage of the
second-generation technology of space-borne GW detector, in Fig. 7
we compare the results of DECIGO with those obtained using the
third-generation ground-based Einstein Telescope (ET) \footnote{The
Einstein Telescope Project, https://www.et-gw.eu/et/}. See
\citet{Qi2019b} for detailed description of the simulation process
based on ET. It should be noted that combining the information
brought by such a space-based GW detector with the LSST lenses does
improve the $\Omega_K$ constraints significantly. More
specifically, the cosmic curvature is expected to be constrained
with an error smaller than $10^{-2}$, improving the sensitivity of
ET constraints by about a factor of 10 in the framework of three
kinds of spherically symmetric mass distributions (SIS, power-law
model, and extended power-law model) for the lensing galaxies. Such
a conclusion is not surprising: although ET is expected to be ten
times more sensitive than current advanced ground-based detectors,
its neutron star-neutron star (NS-NS) and black hole-neutron star
(BH-NS) mergers could only be detected up to redshifts $z\sim 2$ and
$z\sim 5$, respectively \citep{Cai17}. More importantly, benefiting
from the fundamentally self-calibrating capability of space-based
detectors, the corresponding distance uncertainties for DECIGO may
be reduced to 1 percent accuracy at lower frequencies (in the range
of 0.1-1 Hz) \citep{Cutler09}. This is to be compared with Fig.~1 in
\citet{Qi2019b}, whose forecast for ET (with the specifications
foreseen at the time) assumed luminosity distance measurements
derived from 1000 observed GW events in the frequency range of
$1-10^4$ Hz. In summary, we do expect that the use of our technique,
i.e., using luminosity distance of standard sirens detected by the
second-generation technology of space-borne GW detector, would lead
to a stronger improvement in the direct measurement of the spatial
curvature in the early universe ($z\sim5.0$). However, in order to
investigate this further, the mass density profiles of early-type
galaxies should be properly taken into account \citep{Qi18}.
\section{Conclusions}
The spatial curvature of the Universe has been one of the most
important cosmological parameters in modern cosmology. Its value, or
even its sign would help us rule out the standard cosmological
paradigm or even point to the presence of new physics. In this work,
we have quantified the ability of DECIGO, a future Japanese space
gravitational-wave antenna in combination with galactic-scale strong
lensing systems expected to be detected by LSST, to improve the
current constraints on the cosmic curvature in the redshift range
$z\sim 5$. In the framework of the well-known distance sum rule
\citep{Rasanen2015}, the perfect redshift coverage of the standard
sirens observed by DECIGO, compared with lensing observations
including the source and lens from LSST, makes such a
cosmological-model-independent test more natural and general. While
we exploited the commonly used singular isothermal sphere (SIS)
model to describe the mass distribution of lensing galaxies, we also
used the power-law and extended power-law models to better assess
the effect of the lens model on measuring the cosmic curvature.
In the case of the simplest SIS lens model, due to the significant
increase of well-measured strong lensing systems and standard
sirens, one could expect the most stringent fits on the cosmic
curvature, improved by two orders of magnitude
compared with the previous results obtained in \citet{Rasanen2015}.
Such precision is competitive with that derived from the Planck CMB
power spectra (TT, TE, EE+lowP) data \citep{Planck Collaboration}.
For the second lens model, we have considered the power-law mass
density profile in which the mass density power-law index of massive
elliptical galaxies evolves with redshift. Our findings indicate
that the constraint on the cosmic curvature in this case is still
very strong compared with that found for the SIS model. However, our
analysis reveals the strong degeneracy between the spatial curvature
and the redshift evolution of power-law lens index parameter.
Compared with the previous results focusing on a constant power-law
slope, we show that the estimation of $\Omega_K$ is more sensitive
to the measurement of $\gamma_1$, i.e., a significant redshift
evolution of the mass density power-law index will result in a
larger cosmic curvature. Therefore, additional observational
information, such as dedicated follow-up imaging of the lensed
images for a sample of individual lenses is necessary in this case.
Focusing on the performance of the extended power-law lens model, in
which the total density slope, luminosity density slope and the
anisotropy of stellar velocity dispersion are taken as free
parameters, the combined LSST+DECIGO data will improve the
constraints significantly with respect to current results, showing
the high constraining power that can be reached by the forthcoming
surveys. More interestingly, the extended power-law lens model is
more suitable to estimate the mass distribution of baryonic matter
and dark matter in early-type galaxies. Specifically, the addition
of DECIGO data to the LSST lenses does improve the
constraint on the luminosity density slope and the anisotropy of the
stellar velocity dispersion significantly. In this case, our results
highlight the importance of investigating the luminosity density
slope and the anisotropy of the stellar velocity dispersion through
auxiliary data, especially integral field unit (IFU) spectroscopic
data in view of upcoming surveys.
What we are more concerned about is the advantage of higher quality
data sets from the second-generation technology of space-borne GW
detector, compared with the third-generation ground-based Einstein
Telescope (ET). For this purpose, our analysis demonstrates that the
cosmic curvature is expected to be constrained with an error smaller
than $10^{-2}$, improving the sensitivity of ET constraints by about
a factor of 10 in the framework of three kinds of lens models. In
summary, our paper highlights the benefits of synergies between
DECIGO and LSST in constraining the physical mechanism of cosmic
acceleration or new physics beyond the standard model, which could
manifest itself through accurate determination of the spatial
curvature of the Universe.
\vspace{0.5cm}
This work was supported by National Key R\&D Program of China No.
2017YFA0402600; the Ministry of Science and Technology National
Basic Science Program (Project 973) under Grants No. 2014CB845806;
the National Natural Science Foundation of China under Grants Nos.
12021003, 11690023, 11633001, and 11373014; Beijing Talents Fund of
Organization Department of Beijing Municipal Committee of the CPC;
the Strategic Priority Research Program of the Chinese Academy of
Sciences, Grant No. XDB23000000; and the Interdiscipline Research
Funds of Beijing Normal University. M.B. was supported by Foreign
Talent Introducing Project and Special Fund Support of Foreign
Knowledge Introducing Project in China. He is also grateful for
support from Polish Ministry of Science and Higher Education through
the grant DIR/WK/2018/12.
\section{Introduction}
Recently, two decompositions of an arbitrary $n \times n$ unitary matrix~$U$
into a matrix product $DXZ$ of three unitary matrices have been proposed:
\begin{itemize}
\item For arbitrary $n$, Idel and Wolf \cite{idel}
present a decomposition
where $D$~and~$Z$ are diagonal matrices,
whereas $X$ is a matrix with all linesums equal to unity.
\item For arbitrary even $n$, F\"uhr and Rzeszotnik \cite{fuhr}
present a decomposition
where $D$ and $Z$ are block-diagonal matrices,
whereas $X$ is a matrix with all block-linesums equal to
the $n/2 \times n/2$ unit matrix.
\end{itemize}
These matrix decompositions have been applied in
quantum optics \cite{idel},
quantum computing \cite{freiberg}
\cite{devosdebaerdemacker} \cite{vanrentergem}, and
quantum memory \cite{simnacher}.
The two matrix decompositions have been proved in a very different way.
Whereas
the proof of the Idel--Wolf decomposition
(based on symplectic topology) is not constructive,
the proof of the F\"uhr--Rzeszotnik decomposition
(based on linear algebra) is constructive.
In the present paper, we conjecture that nevertheless
the two decompositions belong to the same family of similar decompositions.
We conjecture that there exist as many such decompositions
as there are divisors of the number~$n$.
We present no proof, as
neither the Idel--Wolf proof
nor the F\"uhr--Rzeszotnik proof can be easily extrapolated.
\section{Conjecture}
We introduce the following three positive integers:
\begin{itemize}
\item $n$, an arbitrary integer greater than~1,
\item $m$, a divisor of~$n$,
distinct\footnote{The restriction $m \neq n$ is merely
introduced for convenience.
The reader may easily investigate
the case $m=n$. E.g., if $m=n$,
then Conjecture~1 is trivially true:
suffice it to choose $D$ equal to $U$
and both $X$ and $Z$ equal to
the $n \times n$ unit matrix.}
from~$n$, and
\item $q$, equal to~$n-m$.
\end{itemize}
We write $n=rm$ and $q=(r-1)m$.
Hence, both $m$ and $r$ are divisors of~$n$.
They satisfy $1 \le m < n$ and $1 < r \le n$.
For convenience,
$n \times n$ matrices will be called `great matrices',
$m \times m$ matrices will be called `small matrices', and
$q \times q$ matrices will be called `intermediate matrices'.
\begin{vermoeden}
Every great unitary matrix~$U$ can be decomposed into
three great unitary matrices:
\[
U = DXZ \ ,
\]
where
\begin{itemize}
\item $D$ consists of $r$ small matrices on its diagonal:
\[
D = \mbox{diag}\, (D_{11}, D_{22}, D_{33}, ..., D_{rr}) \ ,
\]
\item $Z$ consists of $r$ small matrices on its diagonal,
the upper-left small matrix being equal to
the $m \times m$ unit matrix~$I$:
\[
Z = \mbox{diag}\, (I, Z_{22}, Z_{33}, ..., Z_{rr}) \ ,
\] and
\item $X$ consists of $r^2$ small matrices $X_{jk}$, such that
all row sums $\sum_{k=1}^rX_{jk}$ and
all column sums $\sum_{j=1}^rX_{jk}$ are equal to
the small matrix~$I$.
\end{itemize}
\end{vermoeden}
Because $D$ is unitary,
automatically all its blocks $D_{jj}$ are unitary;
because $Z$ is unitary,
automatically all its blocks $Z_{jj}$ are unitary.
In contrast, the blocks $X_{jk}$ are not necessarily unitary.
We define the $n \times n$ transformation matrix
\[
T = F_r \otimes I \ ,
\]
where the matrix $F_r$ is
the $r \times r$ discrete Fourier transform.
We can easily demonstrate that the product
$T^{-1}XT$ is of the form
\[
\left( \begin{array}{cc} I & \\ & G \end{array} \right) \ .
\]
We thus have the following property:
\[
X = T \left( \begin{array}{cc} I & \\ & G \end{array} \right) T^{-1} \ .
\]
Because both $X$ and $T$ are unitary,
automatically $G$ is a unitary intermediate matrix.
We summarize that the decomposition of $U$
corresponds to finding
the appropriate $2r-1$ small unitary matrices and
a single appropriate intermediate unitary matrix.
This corresponds to finding the appropriate
\[
(2r-1)m^2 + q^2 = (2r-1)m^2 + [(r-1)m]^2 = r^2m^2
\]
real parameters, a number which exactly matches $n^2$,
i.e.\ the number of degrees of freedom of the given matrix~$U$.
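The block-linesum property of $X$ is easy to verify numerically. The sketch below (assuming NumPy and the unitary, normalized convention for the discrete Fourier transform $F_r$) builds $X = T\,\mathrm{diag}(I,G)\,T^{-1}$ for a random unitary intermediate matrix $G$ and checks that all $m \times m$ block row sums and block column sums equal $I$:

```python
import numpy as np

r, m = 3, 2                      # example sizes: n = 6, q = 4
n, q = r * m, (r - 1) * m

# Normalized r x r discrete Fourier transform F_r (unitary)
F = np.array([[np.exp(2j * np.pi * j * k / r) for k in range(r)]
              for j in range(r)]) / np.sqrt(r)
T = np.kron(F, np.eye(m))        # transformation matrix T = F_r (x) I

# Random unitary intermediate matrix G via a QR decomposition
rng = np.random.default_rng(0)
G, _ = np.linalg.qr(rng.normal(size=(q, q)) + 1j * rng.normal(size=(q, q)))

# X = T (I (+) G) T^{-1}
B = np.block([[np.eye(m), np.zeros((m, q))],
              [np.zeros((q, m)), G]])
X = T @ B @ T.conj().T

# Check the block linesums: blocks[j, :, k, :] is the block X_{jk}
blocks = X.reshape(r, m, r, m)
row_sums = blocks.sum(axis=2)                  # sum_k X_{jk}, one per j
col_sums = blocks.sum(axis=0).swapaxes(0, 1)   # sum_j X_{jk}, one per k
assert np.allclose(row_sums, np.eye(m))
assert np.allclose(col_sums, np.eye(m))
```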
\section{Three special cases}
If $m=1$ (and thus $q=n-1$), then all small matrices are,
in fact, just complex numbers.
Both $D$ and $Z$ are diagonal unitary matrices
and $X$ is a unit linesum unitary matrix.
The transformation matrix~$T$ equals the great Fourier matrix $F_n$.
In this particular case, the above conjecture
has been proposed by De Vos and De Baerdemacker \cite{devos}
and subsequently proved by Idel and Wolf \cite{idel}.
The proof is by symplectic topology.
Unfortunately, the proof is not constructive
and therefore only provides the guarantee
that the numbers $D_{11}, D_{22}, ..., D_{nn}$ and
$Z_{22}, Z_{33}, ..., Z_{nn}$ exist, without providing their values.
De Vos and De Baerdemacker \cite{devos} give a
Sinkhorn algorithm that yields
numerical approximations of these numbers.
Finally, we note that examples of the case $m=1$ demonstrate that the $DXZ$
decomposition is not always unique.
If $n$ is even and $m$ equals $n/2$ (and thus $q=n/2$),
then intermediate matrices are,
in fact, small matrices.
The transformation matrix~$T$ equals $H \otimes I$,
where $H=F_2$ is the $2 \times 2$ Hadamard matrix.
In this particular case, the above conjecture
has been proved by F\"uhr and Rzeszotnik \cite{fuhr}.
The proof is constructive and thus gives
explicit values for the small matrices $D_{11}$, $D_{22}$, $Z_{22}$, and $G$.
Also in this special case decomposition
is not unique \cite{devosdebaerdemacker} \cite{rzeszotnik}.
Finally, if both $m=1$ and $m=n/2$, i.e. if $n=r=2$ and $m=q=1$,
then the decomposition is well-known.
An arbitrary matrix from U(2) looks like
\begin{equation}
U = \left( \begin{array}{rl} \cos(\varphi)e^{i(\theta+\psi)} &
\sin(\varphi)e^{i(\theta+\chi)} \\
-\sin(\varphi)e^{i(\theta-\chi)} &
\cos(\varphi)e^{i(\theta-\psi)} \end{array} \right) \ .
\label{u2}
\end{equation}
One possible decomposition is
\begin{equation}
U = \left( \begin{array}{cc} e^{i(\theta+\varphi+\psi)} & \\ & ie^{i(\theta+\varphi-\chi)} \end{array} \right)
\ \frac{1}{2}\
\left( \begin{array}{cc} 1 + e^{-2i\varphi} & 1 - e^{-2i\varphi} \\
1 - e^{-2i\varphi} & 1 + e^{-2i\varphi} \end{array} \right)
\left( \begin{array}{cc} 1 & \\ & -ie^{i(-\psi+\chi)}\end{array} \right) \ .
\label{xu2}
\end{equation}
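Decomposition (\ref{xu2}) can be checked numerically for arbitrary angles (a sketch assuming NumPy):

```python
import numpy as np

theta, phi, psi, chi = 0.3, 0.7, 1.1, -0.4   # arbitrary angles

# The U(2) matrix of Eq. (1)
U = np.array([[np.cos(phi) * np.exp(1j * (theta + psi)),
               np.sin(phi) * np.exp(1j * (theta + chi))],
              [-np.sin(phi) * np.exp(1j * (theta - chi)),
               np.cos(phi) * np.exp(1j * (theta - psi))]])

# The three factors of Eq. (2)
D = np.diag([np.exp(1j * (theta + phi + psi)),
             1j * np.exp(1j * (theta + phi - chi))])
X = 0.5 * np.array([[1 + np.exp(-2j * phi), 1 - np.exp(-2j * phi)],
                    [1 - np.exp(-2j * phi), 1 + np.exp(-2j * phi)]])
Z = np.diag([1, -1j * np.exp(1j * (-psi + chi))])

assert np.allclose(D @ X @ Z, U)          # the decomposition holds
assert np.allclose(X.sum(axis=0), [1, 1])  # unit column sums
assert np.allclose(X.sum(axis=1), [1, 1])  # unit row sums
```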
\section{Group hierarchy}
The matrices $D$ form a group isomorphic to U($m$)$^r$,
of dimension $rm^2 = nm$.
The matrices $Z$ form a group isomorphic to U($m$)$^{r-1}$,
of dimension $(r-1)m^2 = (n-m)m$. Finally,
the matrices $X$ form a group isomorphic to U($q$),
of dimension $q^2 = (n-m)^2$.
We denote these three matrix groups by
DU($n,m$), ZU($n,m$), and XU($n,m$), respectively.
In particular, the groups XU($n, 1$) and ZU($n, 1$)
are the groups XU($n$) and ZU($n$),
extensively studied in the past \cite{vanrentergem} \cite{debaerdemacker}.
According to the conjecture, for any~$m$,
the closure of the groups DU($n,m$), ZU($n,m$), and XU($n,m$)
is the unitary group of $U$~matrices.
Of course, because ZU($n,m$) is a subgroup of DU($n,m$),
the closure of DU($n,m$) and XU($n,m$) is also U($n$). In fact,
the closure of merely ZU($n,m$) and XU($n,m$)
already equals U($n$).
Indeed, any DU($n,m$) matrix can be decomposed into
two ZU($n,m$) matrices and
two XU($n,m$) matrices:
\begin{eqnarray*}
&&
\left( \begin{array}{ccccc} D_{11} & & & & \\
& D_{22} & & & \\
& & D_{33} & & \\
& & & \ddots & \\
& & & & D_{rr} \end{array} \right) =
\\ && \hspace{-20mm}
\left( \begin{array}{ccccc} & I & & & \\
I & & & & \\
& & I & & \\
& & & \ddots & \\
& & & & I \end{array} \right)\left( \begin{array}{ccccc} I & & & & \\
& D_{11} & & & \\
& & I & & \\
& & & \ddots & \\
& & & & I \end{array} \right)\left( \begin{array}{ccccc} & I & & & \\
I & & & & \\
& & I & & \\
& & & \ddots & \\
& & & & I \end{array} \right)\left( \begin{array}{ccccc} I & & & & \\
& D_{22} & & & \\
& & D_{33} & & \\
& & & \ddots & \\
& & & & D_{rr} \end{array} \right) \ .
\end{eqnarray*}
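This four-factor identity is easy to check numerically. The following sketch is our own (the helper names and the explicit swap matrix are ours); it verifies the decomposition for randomly chosen unitary blocks, the swap matrix playing the role of the two XU($n,m$) factors:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(m):
    # Random m x m unitary block, via QR of a complex Gaussian matrix.
    q, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))
    return q

def block_diag(blocks):
    # Assemble a great block-diagonal matrix from m x m blocks.
    m = blocks[0].shape[0]
    n = len(blocks) * m
    M = np.zeros((n, n), dtype=complex)
    for i, b in enumerate(blocks):
        M[i*m:(i+1)*m, i*m:(i+1)*m] = b
    return M

m, r = 2, 4
D_blocks = [random_unitary(m) for _ in range(r)]
I = np.eye(m)

# Swap of the first two blocks: block row and column sums all equal I,
# so this matrix belongs to XU(n, m).
S = block_diag([I] * r)
S[0:m, 0:m] = S[m:2*m, m:2*m] = 0
S[0:m, m:2*m] = S[m:2*m, 0:m] = I

lhs = block_diag(D_blocks)                      # the DU(n, m) matrix
rhs = S @ block_diag([I, D_blocks[0]] + [I] * (r - 2)) \
        @ S @ block_diag([I] + D_blocks[1:])    # two XU and two ZU factors
assert np.allclose(lhs, rhs)
```

Both ZU factors indeed have leading block $I$, and the swap matrix is its own inverse.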
If $m>1$, then we have the following group hierarchy:
\begin{eqnarray*}
\mbox{U}(n) \supset & \hspace*{-3mm} \mbox{DU}(n,m) \supset \mbox{DU}(n,1) = \mbox{DU}(n) \hspace*{-2mm} & \\
& \hspace*{-3mm} \mbox{XU}(n,m) \subset \mbox{XU}(n,1) = \mbox{XU}(n) \hspace*{-2mm} & \subset \mbox{U}(n) \ .
\end{eqnarray*}
In fact, we have the following isomorphisms:
\begin{eqnarray*}
\mbox{DU}(n,m) & \cong & \mbox{U}(m)^{n/m} \\
\mbox{ZU}(n,m) & \cong & \mbox{U}(m)^{n/m-1} \\
\mbox{XU}(n,m) & \cong & \mbox{U}(n-m) \ .
\end{eqnarray*}
If $n$ is a power of a prime, say $p^w$,
then $m$ necessarily is also a prime power, say~$p^u$ (with $0 \le u < w$).
The XU($n,m$) groups with all possible values of~$m$ (i.e.\ $1, p, p^2, ..., p^{w-1}$)
form an elegant subgroup chain according to
\[
\mbox{XU}(p^w) =
\mbox{XU}(p^w, 1) \supset \mbox{XU}(p^w, p) \supset \mbox{XU}(p^w, p^2)
\supset ... \supset \mbox{XU}(p^w, p^{w-1}) \ ,
\]
with successive dimensions
\[
(p^w-1)^2 > (p^w-p)^2 > (p^w-p^2)^2 > ... > (p^w-p^{w-1})^2 \ .
\]
\section{Conjugate conjecture}
If Conjecture 1 is true,
then automatically a second conjecture is also true:
\begin{vermoeden}
Every great unitary matrix~$U$ can be decomposed into
three great unitary matrices:
\[
U = C \left( \begin{array}{cc} I & \\ & A \end{array} \right) Y \ ,
\]
where
\begin{itemize}
\item $C$ is a circulant $n \times n$ matrix,
i.e.\ a unitary matrix consisting of $m \times m$ small blocks,
such that two blocks $C_{jk}$ are identical whenever their index differences $j-k$ are equal,
\item $A$ is a $q \times q$ unitary matrix, and
\item $Y$ is an $n \times n$ circulant matrix,
whose upper row sum\footnote{As the matrix is circulant,
all row sums and column sums are equal.
Hence $Y$ is a member of XU($n, m$).}
equals the $m \times m$ unit matrix~$I$.
\end{itemize}
\end{vermoeden}
Indeed, if we apply Conjecture~1, not to the given matrix~$U$,
but instead to its conjugate
\[
u = T^{-1} U T \ ,
\]
then we obtain the decomposition
\[
u = dxz \ .
\]
This leads to
\[
U = T\, d\, T^{-1}T\, x\, T^{-1}T\, z T^{-1} \ .
\]
One can easily verify that
\begin{itemize}
\item $T\, d\, T^{-1}$ is a circulant great matrix,
\item $T\, x\, T^{-1}$ is of the form $\left( \begin{array}{cc} I & \\ & A \end{array} \right)$, and
\item $T\, z\, T^{-1}$ is a circulant XU($n, m$) matrix.
\end{itemize}
Such conjugate decomposition was already noticed before,
in both the $m=1$ case and the $m=n/2$ case
\cite{idel} \cite{vanrentergem} \cite{debaerdemacker}.
\section{Unitary and biunitary vectors}
For $m=1$, the DXZ decomposition involves
unit-modulus numbers~$d_{jj}$ and~$z_{jj}$:
\[
U = \left( \begin{array}{rrrr} d_{11} & & & \\
& d_{22} & & \\
& & \ddots & \\
& & & d_{nn}
\end{array} \right) X
\left( \begin{array}{rrrr} 1 & & & \\
& z_{22} & & \\
& & \ddots & \\
& & & z_{nn}
\end{array} \right) \ .
\]
If we multiply both sides of the equation
by the $n \times 1$ matrix (i.e.\ column vector)
$v = (1, z_{22}^{-1}, z_{33}^{-1}, ..., z_{nn}^{-1})^T$,
then we obtain
\[
Uv = \left( \begin{array}{rrrr} d_{11} & & & \\
& d_{22} & & \\
& & \ddots & \\
& & & d_{nn}
\end{array} \right) X
\left( \begin{array}{r} 1 \\ 1 \\ \vdots \\ 1 \end{array} \right) \ .
\]
Taking into account that all row sums of $X$ equal unity,
we find
\[
Uv = w \ ,
\]
where $w = (d_{11}, d_{22}, d_{33}, ..., d_{nn})^T$.
Both $v$ and $w$ are vectors with all entries having unit modulus.
Therefore, they are called unimodular vectors.
The unimodular vector $v$ is
called biunimodular for the matrix~$U$,
as $Uv$ is unimodular as well \cite{idel} \cite{fuhr}.
Thus the Idel--Wolf DXZ decomposition implies
that any unitary matrix
has at least one biunimodular vector.
Moreover, it possesses a biunimodular vector with leading entry~1.
As an example, decomposition (\ref{xu2}) of the matrix (\ref{u2})
corresponds with the following biunimodular vector:
\[
U\ \left( \begin{array}{c} 1 \\ i\, e^{i(\psi - \chi)}
\end{array} \right) =
\left( \begin{array}{r} e^{i(\varphi + \theta + \psi)} \\
i\, e^{i(\varphi + \theta - \chi)}
\end{array} \right) \ .
\]
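This claim can be checked numerically. The sketch below (with arbitrarily chosen angle values of our own) builds $U$ from the three factors of decomposition (\ref{xu2}) and verifies the display above:

```python
import numpy as np

theta, phi, psi, chi = 0.3, 0.7, 1.1, 0.4  # arbitrary test angles

# The three factors of decomposition (xu2).
A = np.diag([np.exp(1j*(theta+phi+psi)), 1j*np.exp(1j*(theta+phi-chi))])
B = 0.5 * np.array([[1 + np.exp(-2j*phi), 1 - np.exp(-2j*phi)],
                    [1 - np.exp(-2j*phi), 1 + np.exp(-2j*phi)]])
C = np.diag([1, -1j*np.exp(1j*(-psi+chi))])
U = A @ B @ C

v = np.array([1, 1j*np.exp(1j*(psi-chi))])        # biunimodular vector
w = np.array([np.exp(1j*(phi+theta+psi)),
              1j*np.exp(1j*(phi+theta-chi))])     # claimed image U v

assert np.allclose(U @ v, w)                      # U v = w as displayed
assert np.allclose(np.abs(v), 1)                  # v is unimodular
assert np.allclose(np.abs(U @ v), 1)              # and so is U v
assert np.allclose(U.conj().T @ U, np.eye(2))     # U is indeed unitary
```

The middle factor maps $(1, 1)^T$ to itself, which is why the phases of $v$ and $w$ come entirely from the two diagonal factors.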
If Conjecture~1 is true for $m>1$,
then we can draw a similar conclusion $UV=W$,
however with $V$ and $W$ matrices of size $n \times m$.
These matrices consist of $r$ blocks, each a unitary $m \times m$ matrix.
Because such blocks have no modulus,
we cannot call $V$ and $W$ unimodular vectors.
We will instead call them unitary vectors
and $V$ a biunitary vector.
These unitary vectors reside in an $nm$-dimensional vector space
$\mathbb{C}^{n}\otimes\mathbb{C}^{m}$, isomorphic to
$\mathbb{R}^{2nm}$.
A~basis for this space consists e.g.\ of the $nm$~following basis vectors:
$a_i \otimes b_j^T$, where
the $a_i$ are the $n$~standard basis vectors of $\mathbb{C}^n$ and
the $b_j$ are the $m$~standard basis vectors of $\mathbb{C}^m$.
We note that a unitary vector~$V$ has the property $V^{\dagger}V=rI$,
with $I$ once again the $m \times m$ unit matrix.
If Conjecture~1 is true, then also the following conjecture is true:
\begin{vermoeden}
Every great unitary matrix~$U$ has at least one biunitary vector~$V$:
\[
UV = W ,
\]
where
\begin{itemize}
\item both $V$ and $W$ consist of $n/m$ unitary $m \times m$ entries and
\item $V$ has leading entry equal to the small unit matrix~$I$.
\end{itemize}
\end{vermoeden}
It suffices to repeat the above $m=1$ reasoning for $m>1$,
with the vector $E = (I, I, I, ..., I)^T$ taking over the role of
the vector $e = (1, 1, 1, ..., 1)^T$ above.
It is important that not only is
Conjecture~3 a consequence of Conjecture~1, but
Conjecture~1 is equally a consequence of Conjecture~3.
Indeed, if $UV = W$, with both $V$ and $W$ being unitary vectors
and $V$ having the unit matrix~$I$ as leading entry,
then the matrix
\[
A = \mbox{diag}\, (W_1^{-1}, W_2^{-1}, ..., W_r^{-1}) \ U \
\mbox{diag}\, ( I, V_2^{-1}, ..., V_r^{-1})
\]
belongs to XU($n$, $m$).
Proof of this fact consists of two parts:
\begin{itemize}
\item Taking into account that $UV = W$, we find that $AE$ equals $E$,
such that $A$ has unit row sums.
\item Because of $E=AE$, we have
$E = \overline{E} = \overline{AE} = \overline{A}E$.
Taking into account that $A$ is unitary,
we have $A^T\,\overline{A}$ equal to the $n \times n$ unit matrix.
Hence $E=A^T\,\overline{A}E =A^TE$.
Because $A^TE$ thus turns out to equal~$E$,
we conclude that $A$ has unit column sums.
\end{itemize}
As~$A$ belongs to XU($n$, $m$) and $U$ has decomposition
\[
\mbox{diag}\, (W_1, W_2, ..., W_r) \ A \
\mbox{diag}\, ( I, V_2, ..., V_r) \ ,
\]
Conjecture~1 is fulfilled.
We finally note that Conjecture~2 leads to the same Conjecture~3,
according to a similar proof, where however
the vector $ (I, 0, 0, ..., 0)^T$ takes over the role of
the vector $E=(I, I, I, ..., I)^T$ above.
We conclude:
\begin{teorema}
The Conjectures~1, 2, and 3 are equivalent:
if one is proved, then all three are proved.
\end{teorema}
\section{Group topology}
In order to prove the three conjectures,
it suffices to prove Conjecture~3.
For that purpose, we first give a lemma:
\begin{lemma}
If an $n \times n$ unitary matrix $U$ possesses
a biunitary $n \times m$ vector,
then it possesses a biunitary $n \times m$ vector
with leading entry equal to the $m \times m$ unit matrix~$I$.
\end{lemma}
Indeed, let us suppose that
\[
U \left( \begin{array}{c} V_1 \\ V_2 \\ V_3 \\ \vdots \\ V_r
\end{array} \right) =
\left( \begin{array}{c} W_1 \\ W_2 \\ W_3 \\ \vdots \\ W_r
\end{array} \right) \ ,
\]
with all $V_j$ and $W_j$ unitary blocks.
We multiply to the right with the small matrix $V_1^{-1}$
and thus obtain
\[
U \left( \begin{array}{c} I \\ V_2V_1^{-1} \\ V_3V_1^{-1} \\
\vdots \\ V_rV_1^{-1}
\end{array} \right) =
\left( \begin{array}{c} W_1V_1^{-1} \\ W_2V_1^{-1} \\ W_3V_1^{-1} \\
\vdots \\ W_rV_1^{-1}
\end{array} \right) \ ,
\]
a result which proves the lemma.
We consider the vector space ${\bf M}$ of vectors $(M_1, M_2, ..., M_r)^T$,
where each $M_j$ is a complex $m \times m$ matrix.
Let ${\bf S}$ be the following submanifold of~${\bf M}$:
\[
{\bf S} = \{ (V_1, V_2, ..., V_r)^T\ |\ V_j \in {\mbox U}(m) \} \ .
\]
The Lie group U($m$)$^r$ behaves as if it were
the following topological product of
odd-dimensional spheres~\cite{pontrjagin} \cite{samelson}:
\[
\left( S^1 \times S^3 \times S^5 \times ... \times S^{2m-1}\right)^r \ ,
\]
where $S^k$ denotes the $k$-sphere.
In fact, the Poincar\'e polynomial of the manifold $\bf S$ is
\[
P(x) = [\ (1+x)(1+x^3)(1+x^5)...(1+x^{2m-1})\ ]^r \ .
\]
Therefore, the sum of its Betti numbers is
\[
P(1) = (2^m)^r = 2^n \ ,
\]
where $n=mr$.
It is clear that, if
\begin{equation}
{\bf S} \cap\, U{\bf S} \neq \emptyset \ ,
\label{empty}
\end{equation}
then there exists at least one unitary vector in ${\bf S}$
which is a biunitary vector and which, because of Lemma~1,
has a unit leading entry.
One promising approach is to reduce the problem to the Arnold conjecture \cite{arnold},
as has been done in the $m=1$ case \cite{idel}.
If ${\bf S}$ were a Lagrangian submanifold for a symplectic form on $\mathbb{C}^n$
such that $U$ were still a Hamiltonian symplectomorphism,
then eqn (\ref{empty}) would be true, provided the Arnold conjecture
is true for this particular manifold.
Direct computation suggests that ${\bf S}$ is not a Lagrangian submanifold of $\mathbb{C}^n$
with the standard symplectic form.
There are two possible roads to still prove a relation to the Arnold conjecture:
\begin{itemize}
\item show that ${\bf S}$ is a Lagrangian submanifold for some other symplectic structure
on $\mathbb{C}^n$ and that $U$ is a Hamiltonian symplectomorphism
for that particular structure, too
\item find a Lagrangian embedding of ${\bf S}$ into some other manifold
such that the mapping of $U$ results in a Hamiltonian symplectomorphism
for this other manifold.
\end{itemize}
Let us start with the first idea. Since ${\bf S}$ is a Cartesian product
of odd-dimensional spheres, and since the Cartesian product of Lagrangian manifolds
is a Lagrangian manifold,
it might be possible to consider each sphere separately.
However, this does not work: according to \cite{gromov}, no simply-connected manifold
can be embedded into $\mathbb{C}^n$ as a Lagrangian submanifold,
so in particular no sphere $S^n$ with $n>1$ can.
Since ${\bf S}$ itself is not simply connected, as it contains a product factor $S^1$,
this does not yet rule out the possibility of finding a symplectic structure
such that it is a Lagrangian submanifold, but there is no argument we know of.
This leaves us with the second idea. Indeed, following \cite{audin},
who attributes this idea to \cite{polterovich},
we can find a Lagrangian embedding of every odd-dimensional sphere via:
\begin{eqnarray*}
S^{2n+1} & \to & {\bf P}^n(\mathbb{C})\times {\bf P}^n(\mathbb{C}) \\
z & \mapsto & (\, [z],[\overline{z}]\, )
\end{eqnarray*}
where ${\bf P}^n(\mathbb{C})$ denotes the complex projective space
and $[z]$ the canonical projection.
Since $S^1$ is a Lagrangian submanifold of $\mathbb{C}$,
and products of Lagrangian embeddings are Lagrangian embeddings in the product manifold,
we can embed ${\bf S}$ as a Lagrangian submanifold.
To be of help, we also would need $U$ to be mapped to a symplectomorphism.
To do that, we note that, if we decompose
$z\in {\bf S}$ as $(z_1^1,z_3^1,z_5^1,\ldots,z_{2m+1}^r)$,
then $U$ acts on any factor $z_i^r$ as $U(0,\ldots,0,z_i^r,0,\ldots,0)$,
which explains how it must act on $([z],[\overline{z}])$.
But this implies that $U$ will mix factors of ${\bf P}^n(\mathbb{C})$
in our product manifold,
which, by direct computation, results in $U$ not being a symplectomorphism.
This does not rule out the second idea either, but shows where the difficulties lie.
It is still unclear whether the applicability of symplectic topology
to the original problem of a Sinkhorn-like decomposition was a mere coincidence
or whether there is a deeper link to unitary decompositions,
so it seems worthwhile to consider this problem.
We summarize:
if the Arnold conjecture is applicable,
then the above Conjectures~1, 2, and~3 are true.
\section{Numerical approximation}
We note that the above three conjectures are not constructive.
Only in the case $m=n/2$ do we have explicit expressions
for the matrices $D$, $X$, $Z$, $C$, $A$, and $Y$ and
for the vectors $V$ and $W$.
For other cases, we can only find numerical approximations.
Therefore, in the present section,
we give a numerical procedure to find,
given the matrix~$U$,
an approximation of the matrices $D$, $X$, and~$Z$.
It is similar to the Sinkhorn-like method presented earlier for the
$m=1$ case \cite{devos}.
The successive approximations $X_t$ of $X$ are given by
\[
X_0 = U
\]
and
\[
X_t = L_tX_{t-1}R_t \ .
\]
The diagonal of the left great matrix $L_t$
consists of $r$ small matrices $(L_t)_{jj}$,
equal to $\Phi_j^{-1}$, i.e.\ the inverse of the unitary factor
in the polar decomposition $\Phi_jP_j$
of the row sum $r_j=\sum_{k=1}^r(X_{t-1})_{jk}$.
The right great matrix $R_t$ consists of $r$ small matrices $(R_t)_{kk}$,
equal to $\Upsilon_k^{-1}\Upsilon_1$,
with $\Upsilon_k^{-1}$ equal to the inverse of the unitary factor
in the polar decomposition $Q_k\Upsilon_k$
of the column sum $c_k=\sum_{j=1}^r(L_tX_{t-1})_{jk}$.
The extra factor $\Upsilon_1$ in the expression of $(R_t)_{kk}$ guarantees
that $(R_t)_{11}$ equals~$I$.
After a sufficient number (say, $\tau$) of iterations,
the product $L = L_{\tau}L_{\tau-1} \cdots L_1$ and
the product $R = R_1R_2 \cdots R_{\tau}$ yield the desired great matrix~$X$:
\[
X \approx X_{\tau} = LUR \ .
\]
The fact that all $(R_t)_{11}=I$ guarantees that $R_{11}=I$ and thus that
$R$ belongs to ZU($n,m$) instead of merely to DU($n,m$).
We have
\begin{eqnarray*}
D & \approx & L^{-1} \\
Z & \approx & R^{-1} \ .
\end{eqnarray*}
Exceptionally, a particular row sum $r_j$ might be singular.
Then its polar decomposition is not unique,
such that the corresponding matrix $\Phi_j$ is not determined.
In that case, we choose $(L_t)_{jj}$ equal to the unit matrix~$I$.
Analogously we choose $(R_t)_{kk} = I$ whenever a particular column sum $c_k$ is singular.
The progress of the iteration process can be monitored
by the following property of a great matrix~$M$:
\[
\Psi(M) = n^2 - |\mbox{Btr}(M)|^2
\]
where we call $\mbox{Btr}(M)$ the `block trace' of~$M$:
\[
\mbox{Btr}(M) = \sum_{j=1}^r \sum_{k=1}^r \,\mbox{Tr} (M_{jk}) \ .
\]
Indeed, the quantity $\Psi(M)$ is zero iff $M$ belongs to the group $e^{i\alpha}\,$XU($n, m$) for some real~$\alpha$.
During the iteration process, $\Psi(X_t)$ becomes smaller and smaller,
approaching zero in the limit.
See~Appendix for details.
We note that, if $m=1$,
then $\mbox{Btr}$ is simply the sum of all $n^2$~matrix entries \cite{devos}.
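The procedure can be sketched in a few lines of NumPy. The sketch below is our own (it computes the unitary polar factors via the SVD rather than by Heron-style iterations) and merely illustrates the method described above:

```python
import numpy as np

def unitary_polar_factor(a):
    # Unitary factor of the polar decomposition a = Phi P (or a = Q Upsilon):
    # from the SVD a = u s v^H, the factor is u v^H in both cases.
    u, _, vh = np.linalg.svd(a)
    return u @ vh

def block_trace(M, m):
    # Btr(M): sum of the traces of all m x m blocks of M.
    r = M.shape[0] // m
    return sum(np.trace(M[j*m:(j+1)*m, k*m:(k+1)*m])
               for j in range(r) for k in range(r))

def psi(M, m):
    # Progress parameter Psi(M) = n^2 - |Btr(M)|^2, as defined above.
    n = M.shape[0]
    return n**2 - abs(block_trace(M, m))**2

def sinkhorn_like(U, m, tau=50):
    # Iterate X_t = L_t X_{t-1} R_t; returns accumulated L, R and X = L U R.
    n = U.shape[0]
    r = n // m
    X, L, R = U.copy(), np.eye(n, dtype=complex), np.eye(n, dtype=complex)
    for _ in range(tau):
        Lt = np.zeros((n, n), dtype=complex)
        for j in range(r):  # left factor: inverses of row-sum polar factors
            rj = sum(X[j*m:(j+1)*m, k*m:(k+1)*m] for k in range(r))
            Lt[j*m:(j+1)*m, j*m:(j+1)*m] = unitary_polar_factor(rj).conj().T
        X = Lt @ X
        Ups = []
        for k in range(r):  # column-sum polar factors of L_t X_{t-1}
            ck = sum(X[j*m:(j+1)*m, k*m:(k+1)*m] for j in range(r))
            Ups.append(unitary_polar_factor(ck))
        Rt = np.zeros((n, n), dtype=complex)
        for k in range(r):  # the extra factor Ups[0] makes (R_t)_{11} = I
            Rt[k*m:(k+1)*m, k*m:(k+1)*m] = Ups[k].conj().T @ Ups[0]
        X = X @ Rt
        L, R = Lt @ L, R @ Rt
    return L, X, R
```

By construction $X = LUR$ exactly, so $D \approx L^{-1}$ and $Z \approx R^{-1}$ reproduce~$U$, and the progress parameter $\Psi(X_t)$ defined above can be monitored with `psi`.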
\section{Example}
As an example, we choose the following U(6)~matrix:
\[
U = \frac{1}{12}\ \left( \begin{array}{cccccc}
- 5 & 6 + 2i & - 5 - 5i & - 4 + 2i & 2 & - 2 - i \\
2 + 2i & - 2 - 4i & - 4i & - 3 - i & 5 + 5i & 2 + 6i \\
- 6 - 3i & - 2 - 2i & 1 + 3i & - 6 & - 4 - 2i & 3 + 4i \\
- 2 - 4i & - 1 - 7i & 2 - 6i & 4 + 3i & - 1 - 2i & - 2i \\
3 - i & 4 & - 4 - 2i & 2 - 2i & - 6 + 2i & 7 + i \\
- 6i & - 1 + 3i & - 2 + 2i & 3 + 6i & 5i & - 2 + 4i
\end{array} \right) \ .
\]
Hence $n=6$. For $m$, we investigate all different possibilities,
i.e.\ $m=1$, $m=2$, and $m=3$.
During the numerical procedure, the progress parameter~$\Psi$
diminishes according to Table~\ref{tabel1}.
We see that, after 36~iterations, $\Psi$~already approaches~0.
Therefore, below we give results for $\tau = 36$.
\begin{table}
\caption{Progress parameter $\Psi$ as a function of
the number~$t$ of iteration steps.}
\begin{center}
\begin{tabular}{|r|rrr|}
\hline
& & & \\[-2mm]
& $m=1$ & $m=2$ & $m=3$ \\[2mm]
\hline
& & & \\[-2.5mm]
$t$ & \multicolumn{3}{c|}{$\Psi_t$} \\[1.5mm]
\hline
& & & \\[-2mm]
0 & 34.889 & 32.000 & 33.743 \\
1 & 4.407 & 9.517 & 6.643 \\
2 & 2.573 & 4.332 & 2.533 \\
3 & 1.381 & 2.680 & 1.023 \\
4 & 0.586 & 1.627 & 0.513 \\
5 & 0.213 & 0.868 & 0.375 \\
6 & 0.084 & 0.577 & 0.318 \\
7 & 0.042 & 0.492 & 0.277 \\
8 & 0.027 & 0.461 & 0.240 \\
9 & 0.020 & 0.442 & 0.206 \\
10 & 0.016 & 0.423 & 0.174 \\
11 & 0.014 & 0.400 & 0.147 \\
12 & 0.012 & 0.372 & 0.122 \\
13 & 0.010 & 0.339 & 0.101 \\
14 & 0.009 & 0.303 & 0.083 \\
15 & 0.008 & 0.264 & 0.067 \\
... & & & \\
36 & 0.001 & 0.001 & 0.001 \\
& & & \\[1.5mm]
\hline
\end{tabular}
\end{center}
\label{tabel1}
\end{table}
We thus find, after
36~iterations\footnote{Each iteration, in turn,
needs $2r$~polar decompositions.
These are performed by Hero's iterative method
(a.k.a. Heron's method).
For each, we applied only ten iterations.}:
\begin{itemize}
\item for $m=1$:
\[ \hspace*{-35mm}
X = \left( \begin{array}{rrrrrr}
0.27 - 0.31i & -0.27 + 0.45i & 0.58 + 0.13i & 0.28 - 0.24i & 0.07 + 0.15i & 0.07 - 0.17i \\
0.04 + 0.23i & -0.12 - 0.35i & 0.26 - 0.21i & -0.01 - 0.26i & 0.57 + 0.13i & 0.25 + 0.46i \\
0.51 + 0.22i & 0.23 + 0.06i & -0.02 - 0.26i & 0.42 + 0.27i & 0.27 - 0.25i & -0.42 - 0.03i \\
0.37 - 0.03i & 0.57 - 0.14i & 0.28 + 0.45i & -0.42 - 0.04i & 0.06 - 0.18i & 0.16 - 0.06i \\
0.26 + 0.01i & 0.33 - 0.04i & -0.16 - 0.33i & 0.23 + 0.05i & -0.23 + 0.47i & 0.57 - 0.16i \\
-0.49 - 0.12i & 0.26 + 0.02i & 0.06 + 0.23i & 0.51 + 0.22i & 0.26 - 0.33i & 0.37 - 0.03i \end{array} \right) \ ,
\]
with the following row sums and column sums:
\begin{eqnarray*}
r_1 & = & 1.002 + 0.000i \\
r_2 & = & 0.998 - 0.001i \\
r_3 & = & 1.007 + 0.001i \\
r_4 & = & 1.007 + 0.002i \\
r_5 & = & 0.998 - 0.000i \\
r_6 & = & 0.988 - 0.002i \\
c_1 & = & 0.989 - 0.002i \\
c_2 & = & 0.998 - 0.001i \\
c_3 & = & 0.997 - 0.000i \\
c_4 & = & 1.006 + 0.001i \\
c_5 & = & 1.002 + 0.001i \\
c_6 & = & 1.008 + 0.001i \ ;
\end{eqnarray*}
\item for $m=2$:
\[ \hspace*{-35mm}
X = \left( \begin{array}{rrrrrr}
0.33 - 0.05i &
-0.39 - 0.07i &
0.61 + 0.34i &
0.32 + 0.19i &
0.06 - 0.29i &
0.08 - 0.12i
\\
-0.30 - 0.16i &
0.35 + 0.36i &
0.07 + 0.16i &
0.08 + 0.09i &
0.23 - 0.01i &
0.57 - 0.45i
\\
0.41 - 0.38i &
0.51 - 0.02i &
0.34 + 0.14i &
-0.30 - 0.12i &
0.26 + 0.24i &
-0.21 + 0.13i
\\
0.36 - 0.08i &
0.26 - 0.27i &
-0.17 - 0.26i &
0.56 + 0.34i &
-0.19 + 0.34i &
0.17 - 0.07i
\\
0.26 + 0.43i &
-0.12 + 0.08i &
0.06 - 0.48i &
-0.02 - 0.06i &
0.68 + 0.04i &
0.13 - 0.02i
\\
-0.06 + 0.24i &
0.39 - 0.09i &
0.10 + 0.09i &
0.35 - 0.43i &
-0.04 - 0.34i &
0.26 + 0.52i
\end{array} \right) \ ,
\]
with row sums and column sums
\begin{eqnarray*}
r_1 & = & \left( \begin{array}{rr} 0.994 - 0.001i & 0.003 - 0.002i \\
0.001 + 0.000i & 0.997 - 0.000i \end{array} \right) \\
r_2 & = & \left( \begin{array}{rr} 1.003 + 0.002i & -0.001 - 0.005i \\
-0.003 - 0.000i & 1.003 + 0.002i \end{array} \right) \\
r_3 & = & \left( \begin{array}{rr} 1.002 + 0.002i & -0.001 + 0.001i \\
0.001 - 0.005i & 1.000 + 0.001i \end{array} \right) \\
c_1 & = & \left( \begin{array}{rr} 1.002 + 0.001i & -0.004 - 0.002i \\
-0.004 - 0.001i & 1.001 + 0.001i \end{array} \right) \\
c_2 & = & \left( \begin{array}{rr} 1.002 + 0.001i & 0.001 + 0.002i \\
0.002 - 0.001i & 0.998 - 0.001i \end{array} \right) \\
c_3 & = & \left( \begin{array}{rr} 0.995 - 0.002i & 0.002 - 0.000i \\
0.002 + 0.002i & 1.001 + 0.000i \end{array} \right) \ ;
\end{eqnarray*}
\item for $m=3$:
\[ \hspace*{-30mm}
X = \left( \begin{array}{rrrrrr}
0.54 + 0.37i &
-0.10 - 0.25i &
0.16 - 0.13i &
0.45 - 0.36i &
0.10 + 0.24i &
-0.16 + 0.13i
\\
-0.23 - 0.13i &
0.47 - 0.06i &
-0.07 - 0.41i &
0.23 + 0.13i &
0.53 + 0.06i &
0.07 + 0.41i
\\
-0.13 - 0.16i &
-0.37 - 0.19i &
0.53 + 0.17i &
0.13 + 0.16i &
0.37 + 0.19i &
0.47 - 0.17i
\\
0.46 - 0.37i &
0.10 + 0.24i &
-0.16 + 0.13i &
0.54 + 0.36i &
-0.10 - 0.24i &
0.16 - 0.13i
\\
0.23 + 0.13i &
0.53 + 0.06i &
0.07 + 0.41i &
-0.23 - 0.14i &
0.47 - 0.06i &
-0.07 - 0.41i
\\
0.13 + 0.16i &
0.37 + 0.19i &
0.47 - 0.17i &
-0.13 - 0.16i &
-0.37 - 0.19i &
0.52 + 0.17i
\end{array} \right) \ ,
\]
with row sums and column sums
\begin{eqnarray*}
r_1 & = & \left( \begin{array}{rrr} 0.997 + 0.001i & -0.004 - 0.001i & -0.002 - 0.002i \\
-0.001 + 0.002i & 0.999 + 0.000i & 0.001 - 0.001i \\
0.001 + 0.001i & 0.002 - 0.001i & 1.003 - 0.001i \end{array} \right) \\
r_2 & = & \left( \begin{array}{rrr} 1.002 - 0.001i & 0.002 + 0.000i & -0.000 + 0.001i \\
0.003 - 0.003i & 1.000 - 0.000i & -0.003 + 0.000i \\
0.001 - 0.002i & -0.001 + 0.001i & 0.997 + 0.001i\end{array} \right) \\
c_1 & = & \left( \begin{array}{rrr} 1.000 + 0.000i & -0.000 - 0.003i & 0.001 - 0.003i \\
0.002 + 0.002i & 1.000 + 0.000i & 0.000 - 0.002i \\
0.003 + 0.001i & 0.001 + 0.001i & 1.000 - 0.000i\end{array} \right) \\
c_2 & = & \left( \begin{array}{rrr} 1.000 - 0.000i & 0.000 + 0.003i & -0.001 + 0.003i \\
-0.001 - 0.002i & 1.000 - 0.000i & -0.000 + 0.002i \\
-0.003 - 0.001i & -0.002 - 0.001i & 1.000 + 0.000i\end{array} \right) \ .
\end{eqnarray*}
\end{itemize}
For $m=2$, we also give the corresponding biunitary vector:
\[
U \left( \begin{array}{rr} 1.00 - 0.00i & 0.00 + 0.00i \\
0.00 - 0.00i & 1.00 + 0.00i \\[1mm]
0.81 - 0.31i & 0.23 + 0.43i \\
-0.20 + 0.44i & 0.84 + 0.25i \\[1mm]
-0.34 + 0.77i & -0.37 + 0.38i \\
-0.29 - 0.45i & 0.19 + 0.83i \end{array} \right)
=
\left( \begin{array}{rr} -0.95 - 0.16i & 0.24 - 0.14i \\
-0.14 - 0.24i & -0.91 - 0.31i \\[1mm]
0.06 - 0.70i & -0.71 + 0.01i \\
-0.28 - 0.65i & 0.62 - 0.34i \\[1mm]
-0.12 - 0.73i & 0.67 - 0.03i \\
-0.48 - 0.46i & -0.57 + 0.47i \end{array} \right) \ .
\]
\section{Permutation matrices}
Although we lack a proof of Conjecture~1
in the case of an arbitrary unitary matrix~$U$,
we can say that Conjecture~1 is certainly true for the case where $U$
is an arbitrary $n \times n$ permutation matrix.
Indeed, any permutation matrix of size $n \times n = mr \times mr$
can be decomposed as a product of
three permutation matrices $D$, $X$, and $Z$,
the matrix~$D$ belonging to the group DU($n, m$),
the matrix~$X$ belonging to the group XU($n, m$), and
the matrix~$Z$ belonging to the group ZU($n, m$).
In fact,
$D$~belongs to a finite subgroup of DU($n, m$), of order $(m!)^r$ and
isomorphic to the product {\bf S}$_m^r$ of symmetric groups,
$X$~belongs to a finite subgroup of XU($n, m$), of order $(r!)^m$ and
isomorphic to the product {\bf S}$_r^m$ of symmetric groups, and
$Z$~belongs to a finite subgroup of ZU($n, m$), of order $(m!)^{r-1}$ and
isomorphic to the product {\bf S}$_m^{r-1}$ of symmetric groups.
The fact that such a decomposition is always possible
\cite{devosvanrentergem1} \cite{devosvanrentergem2},
is a consequence of Birkhoff's theorem \cite{birkhoff} on doubly stochastic matrices
(with rational entries).
The decomposition has been applied both
in Clos networks of telephone switching systems \cite{clos} \cite{hwang} and
in reversible computing \cite{revc}.
As an example, we choose the following $6 \times 6$ permutation matrix:
\[
U = \left( \begin{array}{rrrrrr} 0 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 0 \end{array} \right) \ .
\]
For $m$, we investigate all different
non-trivial\footnote{We note that, for permutation matrices,
not only the case $m=n$ is trivial,
but also the case $m=1$: suffice it
to choose both $D$ and $Z$ equal to the $n \times n$ unit matrix and
to choose $X$ equal to $U$.}
possibilities: $m=2$ and $m=3$.
We have:
\begin{itemize}
\item for $m=2$:
\[
U = \left( \begin{array}{rrrrrr} 0 & 1 & & & & \\
1 & 0 & & & & \\
& & 0 & 1 & & \\
& & 1 & 0 & & \\
& & & & 1 & 0 \\
& & & & 0 & 1 \end{array} \right)
\left( \begin{array}{rrrrrr} 1 & 0 & \ 0 & 0 & \ 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\[1mm]
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\[1mm]
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \end{array} \right)
\left( \begin{array}{rrrrrr} 1 & 0 & & & & \\
0 & 1 & & & & \\
& & 0 & 1 & & \\
& & 1 & 0 & & \\
& & & & 0 & 1 \\
& & & & 1 & 0 \end{array} \right) \ ,
\]
where indeed the middle matrix has six unit line sums
$r_1=r_2=r_3=c_1=c_2=c_3={\tiny \left( \begin{array}{rr} 1 & \\ & 1 \end{array} \right)}$;
\item for $m=3$:
\[
U = \left( \begin{array}{rrrrrr} 0 & 0 & 1 & & & \\
1 & 0 & 0 & & & \\
0 & 1 & 0 & & & \\
& & & 1 & 0 & 0 \\
& & & 0 & 1 & 0 \\
& & & 0 & 0 & 1 \end{array} \right)
\left( \begin{array}{rrrrrr} 1 & 0 & 0 & \ 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\[1mm]
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \end{array} \right)
\left( \begin{array}{rrrrrr} 1 & 0 & 0 & & & \\
0 & 1 & 0 & & & \\
0 & 0 & 1 & & & \\
& & & 1 & 0 & 0 \\
& & & 0 & 0 & 1 \\
& & & 0 & 1 & 0 \end{array} \right) \ ,
\]
where indeed the middle matrix has four unit line sums
$r_1=r_2=c_1=c_2={\tiny \left( \begin{array}{rrr} 1 & & \\ & 1 & \\ & & 1 \end{array} \right)}$.
\end{itemize}
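The $m=2$ factorization above can be verified mechanically. The following sketch (ours) copies the three factors from the display and checks both the product and the unit block line sums of the middle matrix:

```python
import numpy as np

U = np.array([[0,0,0,0,1,0],
              [1,0,0,0,0,0],
              [0,1,0,0,0,0],
              [0,0,0,1,0,0],
              [0,0,0,0,0,1],
              [0,0,1,0,0,0]])
D = np.array([[0,1,0,0,0,0],   # DU(6,2): block-diagonal permutation
              [1,0,0,0,0,0],
              [0,0,0,1,0,0],
              [0,0,1,0,0,0],
              [0,0,0,0,1,0],
              [0,0,0,0,0,1]])
X = np.array([[1,0,0,0,0,0],   # XU(6,2): unit block line sums
              [0,0,0,0,0,1],
              [0,0,1,0,0,0],
              [0,1,0,0,0,0],
              [0,0,0,0,1,0],
              [0,0,0,1,0,0]])
Z = np.array([[1,0,0,0,0,0],   # ZU(6,2): leading block equal to I
              [0,1,0,0,0,0],
              [0,0,0,1,0,0],
              [0,0,1,0,0,0],
              [0,0,0,0,0,1],
              [0,0,0,0,1,0]])

assert (D @ X @ Z == U).all()
m, r = 2, 3
for j in range(r):  # all six block line sums of X equal the unit matrix
    assert (sum(X[j*m:(j+1)*m, k*m:(k+1)*m] for k in range(r)) == np.eye(m)).all()
    assert (sum(X[k*m:(k+1)*m, j*m:(j+1)*m] for k in range(r)) == np.eye(m)).all()
```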
Because Conjecture~1
is true for any $n \times n$ permutation matrix, it also
is true for any $n \times n$ complex permutation matrix
(i.e.\ unitary matrix with only one non-zero entry in every row and column).
Such matrices form an $n$-dimensional non-connected subgroup of
the $n^2$-dimensional group U($n$)
(consisting of $n!$ components, each $n$-dimensional).
We can indeed decompose such a matrix as $D'P$,
where $D'$ is a diagonal unitary matrix and $P$ is a permutation matrix.
We decompose $P$ as $D''XZ$, leading to the decomposition $D'D''XZ$
of the complex permutation matrix.
Introducing $D=D'D''$, we obtain a desired decomposition $DXZ$.
\section{Conclusion}
Every $n \times n$ unitary matrix has an Idel--Wolf decomposition.
If $n$ is even, then it also has a F\"uhr--Rzeszotnik decomposition.
We conjecture that, if $n$ is a composite integer,
it has as many similar decompositions as $n$ has divisors.
We offer no proof, as generalization of either
the Idel--Wolf proof (based on symplectic topology) or
the F\"uhr--Rzeszotnik proof (based on linear algebra) is not straightforward.
We provide an iterative algorithm for finding
a numerical approximation of each of the conjectured decompositions.
Finally, we demonstrate that the conjecture is true
for $n \times n$ (complex) permutation matrices.
\section*{Appendix}
In Section 8, multiplying $X_{t-1}$ to the left with $L_t$ increases its block trace:
\[
|\mbox{Btr}(L_tX_{t-1})| =
\left|\ \sum_{j=1}^r \sum_{k=1}^r \, \mbox{Tr}((L_tX_{t-1})_{jk})\ \right| =
\left|\ \sum_{j=1}^r \sum_{k=1}^r \, \mbox{Tr}(\Phi_j^{-1}\, (X_{t-1})_{jk})\ \right|
\]
\[
=
\left|\ \sum_{j=1}^r \, \mbox{Tr}(P_j)\ \right| \ge
\left|\ \sum_{j=1}^r \, \mbox{Tr}(\Phi_jP_j)\ \right| =
\left|\ \sum_{j=1}^r \sum_{k=1}^r \, \mbox{Tr}((X_{t-1})_{jk})\ \right| =
|\mbox{Btr}(X_{t-1})| \ .
\]
Analogously,
multiplying $L_tX_{t-1}$ to the right with $R_t$ increases its block trace.
Hence, we have
\[
|\mbox{Btr}(X_t)| = |\mbox{Btr}(L_tX_{t-1}R_t)| \ge |\mbox{Btr}(L_tX_{t-1})|
\ge |\mbox{Btr}(X_{t-1})|\ .
\]
The non-decreasing value of $|\mbox{Btr}(X_t)|$ is bounded above by the value~$n$.
An $n \times n$ unitary matrix $A$ has $|\mbox{Btr}(A)|=n$
iff it is a member of the group $e^{i\alpha}$~XU($n,m$).
These two facts
are proved by reasoning as in Appendix~A of De Vos and De Baerdemacker \cite{devos},
by considering the following property of the row sums~$r_a$ and column sums~$c_b$ of~$A$:
\[
\sum_{a=1}^r \sum_{j=1}^m \sum_{k=1}^m \ \left|(r_a)_{jk}\right|^2 =
\sum_{b=1}^r \sum_{j=1}^m \sum_{k=1}^m \ \left|(c_b)_{jk}\right|^2 = n \ ,
\]
a fact which, in turn,
is proved by reasoning as in Appendix~A of De Vos, Van Laer, and Vandenbrande \cite{vanlaer}.
\noindent
{\bf Acknowledgements}.
SDB acknowledges
the Canada Research Chair program and
the New Brunswick Innovation Foundation.
\section{Approach}
\label{sec:Approach}
\label{sec:Approach:Overview}
In this section, we present the different steps of our framework: (1)~topic clustering, (2)~argument identification, and (3)~argument clustering according to topical aspects (see Figure~\ref{fig:Approach:Pipeline}).
\subsection{Topic Clustering}
\label{sec:Approach:DS}
First, documents are grouped into topics. Such documents can be individual texts or collections of texts under a common title, such as posts on Wikipedia or debate platforms. We compute the topic clusters using unsupervised clustering algorithms
and study the results of \textit{k-means} and \textit{HDBSCAN}~\cite{Campello2013} in detail.
We also take the $argmax$ of the tf-idf vectors and LSA~\cite{Deerwester1990} vectors directly into consideration
to evaluate how well topics are represented by single terms.
Overall, we consider the following models:
\begin{itemize}
\item $ARGMAX_{none}^{tfidf}$: We restrict the vocabulary size and the maximal document frequency to obtain a vocabulary representing topics with single terms. Thus, clusters are labeled with exactly one term by choosing the $argmax$ of these tf-idf document vectors.
\item $ARGMAX_{lsa}^{tfidf}$: We perform a dimensionality reduction with LSA on the tf-idf vectors. Therefore, each cluster is represented by a collection of terms.
\item $KMEANS_{none}^{tfidf}$: We apply the k-means clustering algorithm directly to tf-idf vectors and compare the results obtained by varying the parameter $k$.
\item $HDBSCAN_{umap}^{tfidf}$: We apply UMAP \cite{McInnes2018} dimensionality reduction on the tf-idf vectors. We then compute clusters using the HDBSCAN algorithm based on the resulting vectors.
\item $HDBSCAN_{lsa+umap}^{tfidf}$: Using the best parameter setting from the previous model, we apply UMAP dimensionality reduction on LSA vectors. Then, we evaluate the clustering results obtained with HDBSCAN while the number of dimensions of the LSA vectors is varied.
\end{itemize}
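To make the simplest model, $ARGMAX_{none}^{tfidf}$, concrete, here is a minimal toy implementation (our own sketch, not the framework's code): the vocabulary is restricted via a maximal document frequency, and each document's argmax tf-idf term serves as its cluster label.

```python
import numpy as np
from collections import Counter

def tfidf_matrix(docs, max_df=0.8):
    # docs: list of token lists. Terms above the maximal document
    # frequency are dropped, so topic-specific terms dominate.
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    vocab = sorted(t for t, c in df.items() if c / n <= max_df)
    index = {t: i for i, t in enumerate(vocab)}
    X = np.zeros((n, len(vocab)))
    for i, d in enumerate(docs):
        for t, c in Counter(d).items():
            if t in index:
                X[i, index[t]] = c * np.log(n / df[t])
    return X, vocab

def topic_labels(docs):
    # Label each document with its argmax tf-idf term; documents
    # sharing a top term fall into the same topic cluster.
    X, vocab = tfidf_matrix(docs)
    return [vocab[int(j)] for j in X.argmax(axis=1)]
```

In the full pipeline these labels play the role of precomputed topic clusters against which a keyword query is matched; the other models replace the argmax step with k-means or HDBSCAN on the (optionally LSA/UMAP-reduced) vectors.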
\subsection{Argument Identification}
\label{sec:Approach:SEG}
For the second step of our argument search framework, we propose the segmentation of sentences into argumentative units.
Related works define arguments either on document-level~\cite{Wachsmuth2017} or sentence-level~\cite{Levy2018, Stab2018}, while, in this paper, we define an argumentative unit as a sequence of multiple sentences. This yields two advantages:
(1)~We can capture the context of arguments over multiple sentences (e.g., claim and its premises);
(2)~Argument identification becomes applicable to a wide range of texts (e.g., user-generated texts).
Thus, we train a sequence-labeling model to predict for each sentence whether it
starts an argument, continues an argument, or is outside of an argument (i.e., \textit{BIO-tagging}). Based on the findings of \citet{Ajjour2017}, \citet{Eger2017}, and \citet{Petasis2019}, we use a BiLSTM over more complex architectures like BiLSTM-CRFs. The BiLSTM is better suited than an LSTM as the bi-directionality takes both preceding and succeeding sentences into account for predicting the label of the current sentence. We evaluate the sequence labeling results against a feedforward neural network as a baseline classification model that predicts the label of each sentence independently of the context.
We consider two ways to compute embeddings over sentences with BERT~\cite{Devlin2018}:
\begin{itemize}
\item \emph{bert-cls}, denoted as $MODEL_{cls}^{bert}$, uses the $[CLS]$ token corresponding output of BERT after processing a sentence.
\item \emph{bert-avg}, denoted as $MODEL_{avg}^{bert}$, uses the average of the word embeddings calculated with BERT as a sentence embedding.
\end{itemize}
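Independently of the chosen sentence encoder, the per-sentence BIO predictions must be decoded into multi-sentence argumentative units. A minimal sketch of such a decoder (our own, including a lenient repair for an \textit{I} tag that follows no \textit{B}):

```python
def bio_to_units(tags):
    # tags: per-sentence predictions from {"B", "I", "O"}.
    # Returns argumentative units as (start, end) sentence ranges, end exclusive.
    units, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":                 # a new argument starts here
            if start is not None:
                units.append((start, i))
            start = i
        elif tag == "O":               # outside of any argument
            if start is not None:
                units.append((start, i))
            start = None
        elif start is None:            # lenient repair: "I" without a "B"
            start = i
    if start is not None:
        units.append((start, len(tags)))
    return units
```

For example, the tag sequence O B I I O B O yields the two units (1, 4) and (5, 6), i.e.\ a three-sentence argument followed by a single-sentence one.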
\subsection{Argument Clustering}
\label{sec:Approach:AS}
In the argument clustering task, we apply the same methods (k-means, HDBSCAN) as in the topic clustering step to group the arguments within a specific topic by topical aspects. Specifically, we compute clusters of arguments for each topic and compare the performance of k-means and HDBSCAN with tf-idf, as well as \emph{bert-avg} and \emph{bert-cls} embeddings. Furthermore, we investigate whether calculating tf-idf within each topic separately is superior to computing tf-idf over
all arguments in the document corpus (i.e., across topics).
\section{Conclusion} %
\label{sec:Conclusion}
In this paper, we proposed an argument search framework that combines keyword search with precomputed topic clusters for argument-query matching, applies a novel approach to argument identification based on sentence-level sequence labeling, and aggregates arguments via argument clustering.
Our evaluation with real-world data
showed that our framework can be used
to mine and search for arguments from unstructured text
on any given topic.
It
became clear that a full-fledged argument search requires a deep understanding of text and that the individual steps can still be improved.
We suggest future research on developing argument search approaches that are sensitive to different aspects of argument similarity and argument quality. %
\section{Evaluation}
\label{sec:Evaluation}
\subsection{Evaluation Data Sets}
\label{sec:Evaluation:Datasets}
In total, we use four data sets for evaluating the different steps of our argument retrieval framework.
\begin{enumerate}
\item \textbf{Debatepedia} is a debate platform that lists the arguments for a topic on a single page, with subtitles structuring the arguments into different aspects.\footnote{We use the data
available at \url{https://webis.de/}.\label{dataset:webis}}
\item \textbf{Debate.org} is a debate platform organized in rounds in which each of two opponents submits posts arguing for their side. Accordingly, the posts may also include non-argumentative parts that answer the opponent's main points before introducing new arguments.\footref{dataset:webis}
\item \textbf{Student Essay} \cite{Stab2017} is widely used in research on argument segmentation \cite{Eger2017, Ajjour2017, Petasis2019}. Labeled on the token level, each document contains one major claim as well as several claims and premises. We use this data set for evaluating argument identification.
\item \textbf{Our Dataset} is based on a debate.org crawl.\footref{dataset:webis} It is restricted to a subset of four of the total 23 categories -- \textit{politics}, \textit{society}, \textit{economics}, and \textit{science} -- and contains additional annotations.
Three human annotators with a background in linguistics segmented these documents
and rated each as being of \emph{medium} or \emph{low} quality, so that low-quality documents could be excluded.
The annotators were then asked to mark the beginning of each new argument, to label argumentative sentences that summarize the aspects of a post as \emph{conclusion}, and to label non-argumentative sentences as \emph{outside of argumentation}. In this way, we obtained a ground truth of labeled arguments on the sentence level (Krippendorff's $\alpha=0.24$ based on 20 documents and three annotators).
A
description of the data set is provided online.\footnote{See
\url{https://github.com/michaelfaerber/arg-search-framework}.}
\end{enumerate}
\textbf{Train-Validation-Test Split.} We use the splits provided by \citet{Stab2017} for the student essay data set. The other data sets are divided into train, validation and test splits based on topics (15\% of the topics were used for testing).
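A topic-based split of this kind can be sketched as follows (function and variable names are illustrative). The point is that all documents of a topic land in the same split, so the test topics are entirely unseen during training:

```python
import random

def split_by_topic(docs, test_frac=0.15, seed=0):
    """Split (topic, document) pairs so that each topic ends up in exactly one split.

    A fraction test_frac of the *topics* (not the documents) goes to the test split.
    """
    topics = sorted({topic for topic, _ in docs})
    rng = random.Random(seed)
    rng.shuffle(topics)
    n_test = max(1, round(test_frac * len(topics)))
    test_topics = set(topics[:n_test])
    train = [d for d in docs if d[0] not in test_topics]
    test = [d for d in docs if d[0] in test_topics]
    return train, test

# Toy corpus: 30 documents spread over 10 topics.
docs = [("t%d" % (i % 10), "doc%d" % i) for i in range(30)]
train, test = split_by_topic(docs)
print(len(train), len(test))
```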
\begin{table*}[tb]
\centering
\caption{Results of unsupervised topic clustering on the \emph{debatepedia} data set $-$ \%$n$:~noise examples (HDBSCAN)
}
\label{tab:Results:ClusteringUnsupervised}
\begin{small}
\resizebox{0.99\textwidth}{!}{
\begin{tabular}{@{}l rrrrr r rrrrr@{}}
\toprule
& \multicolumn{5}{c}{With noise}& &\multicolumn{5}{c}{Without noise}\\
\cline{2-6} \cline{8-12}
& \#$Clust.$ & $ARI$ &$Ho$ & $Co$ & $BCubed~F_1$ & \%$n$ & \#$Clust.$ & $ARI$ &$Ho$ & $Co$ & $BCubed~F_1$ \\
\midrule
$ARGMAX_{none}^{tfidf}$
&253 & 0.470 &0.849 & 0.829 & 0.591 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$\\
$ARGMAX_{lsa}^{tfidf}$
&157 & 0.368 & 0.776& 0.866 & 0.561& $-$ & $-$ & $-$ & $-$ & $-$ & $-$\\
$KMEANS_{none}^{tfidf}$
&170 & \textbf{0.703} & 0.916 & 0.922 & 0.774 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\
$HDBSCAN_{none}^{tfidf}$
& 206 & 0.141 &0.790 & 0.870 & 0.677 & 21.1 &205 & 0.815 &0.955& 0.937 & 0.839\\
$HDBSCAN_{umap}^{tfidf}$
& 155 & 0.673 &0.900 & 0.931 & 0.786 & 4.3 & 154 & 0.779 &0.927& 0.952& 0.827\\
$HDBSCAN_{lsa + umap}^{tfidf}$
&162 & 0.694 & 0.912& 0.935& \textbf{0.799} & 3.6 &161 & 0.775 &0.932 &0.950 & 0.831\\
\bottomrule
\end{tabular}
}
\end{small}
\end{table*}
\begin{table*}[tb]
\centering
\caption{Results of unsupervised topic clustering on the \emph{debate.org} data set
$-$ \%$n$:~noise examples (HDBSCAN)}
\label{tab:Results:ClusteringUnsupervisedORG}
\begin{small}
\resizebox{0.99\textwidth}{!}{
\begin{tabular}{@{}l rrrrr r rrrrr@{}}
\toprule
& \multicolumn{5}{c}{With noise}& &\multicolumn{5}{c}{Without noise}\\
\cline{2-6} \cline{8-12}
& \#$Clust.$ & $ARI$ & $Ho$ &$Co$& $BCubed~F_1$ & \%$n$ &\#$Clust.$ & $ARI$ & $Ho$ &$Co$ & $BCubed~F_1$ \\
\midrule
$KMEANS_{none}^{tfidf}$ & 50 & \textbf{0.436} & 0.822 & 0.796 & \textbf{0.644} &$-$&$-$&$-$&$-$&$-$&$-$\\
$HDBSCAN_{umap}^{tfidf}$
& 20 & 0.354 &0.633& 0.791 & 0.479 & 7.1 & 19 & 0.401 & 0.648 & 0.831 & 0.502\\
$HDBSCAN_{lsa + umap}^{tfidf}$
&26 & 0.330 & 0.689 &0.777 & 0.520 & 5.8 & 25 & 0.355 & 0.701 & 0.790 &0.542\\
\bottomrule
\end{tabular}
}
\end{small}
\end{table*}
\subsection{Evaluation Settings}
\label{sec:Evaluation:Methods}
We report the results using the evaluation metrics precision, recall, and $F_1$-measure for the classification tasks, and adjusted Rand index (ARI), homogeneity (Ho), completeness (Co), and $BCubed~F_1$ score for the clustering tasks.
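Since the BCubed $F_1$ score is less standard than the other metrics, we sketch its per-item definition below (a straightforward reference implementation assuming flat clusterings given as label lists, not the exact code used in our experiments):

```python
def bcubed_f1(pred, gold):
    """BCubed F1 for two flat clusterings given as equal-length label lists.

    For each item, precision is the fraction of items sharing its predicted
    cluster that also share its gold cluster; recall is the symmetric quantity.
    Both are averaged over all items and combined into an F1 score.
    """
    n = len(pred)
    precisions, recalls = [], []
    for i in range(n):
        same_pred = {j for j in range(n) if pred[j] == pred[i]}
        same_gold = {j for j in range(n) if gold[j] == gold[i]}
        correct = len(same_pred & same_gold)
        precisions.append(correct / len(same_pred))
        recalls.append(correct / len(same_gold))
    p = sum(precisions) / n
    r = sum(recalls) / n
    return 2 * p * r / (p + r)

# A perfect clustering scores 1.0 regardless of the label names used.
print(bcubed_f1([0, 0, 1, 1], ["a", "a", "b", "b"]))  # 1.0
```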
\textbf{Topic Clustering.} We use HDBSCAN with a minimum cluster size of 2.
For k-means, we vary the parameter $k$, which determines the number of clusters, and report only the results of the best setting.
For the model $ARGMAX_{none}^{tfidf}$ we restrict the vocabulary size and the maximal document frequency of the tf-idf to obtain a vocabulary that best represents the topics by single terms. %
\textbf{Argument Identification.}
We use a BiLSTM implementation with 200 hidden units and apply \textit{SigmoidFocalCrossEntropy} as the loss function.
Furthermore, we use the \emph{Adagrad} optimizer~\cite{Duchi2011} and train the model for 600 epochs, shuffling the data in each epoch and keeping only the model with the best validation loss. The baseline feedforward neural network contains a single hidden dense layer of size 200 and is trained with the same hyperparameters.
As BERT implementation, we use DistilBERT~\cite{Sanh2019}. %
\begin{figure}[tb]
\centering
\includegraphics[width=0.78\linewidth]{images/subtopic_argument_correlation2}
\caption{Linear regression between the number of arguments and the number of topical aspects per topic in the \emph{debatepedia} data set.}%
\label{fig:Results:Correlation}
\end{figure}
\textbf{Argument Clustering.}
We estimate the parameter $k$ for the k-means algorithm for each topic using a linear regression based on the number of clusters relative to the number of arguments in this topic. As shown in Figure \ref{fig:Results:Correlation}, we observe a linear relationship between the number of topical aspects (i.e., subtopics) and the argument count per topic in the \emph{debatepedia} data set.
We apply HDBSCAN with the same parameters as in the topic clustering task (\emph{min\_cluster\_size}~=~2).
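The estimation of $k$ from the argument count can be sketched as a one-variable least-squares fit (the training pairs below are invented for illustration, not taken from the \emph{debatepedia} data):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b for the cluster-count regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def estimate_k(n_arguments, a, b):
    """Predicted number of argument clusters for a topic, clipped to at least 1."""
    return max(1, round(a * n_arguments + b))

# Hypothetical training topics: (number of arguments, number of aspect clusters).
history = [(10, 3), (20, 5), (30, 7), (40, 9)]
a, b = fit_line([h[0] for h in history], [h[1] for h in history])
print(estimate_k(25, a, b))  # 6
```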
\section{Introduction}
\label{sec:Introduction}
Arguments are an integral part of
debates and discourse between people. For instance, journalists, scientists, lawyers, and managers often need to pool arguments and contrast pros and cons \cite{DBLP:conf/icail/PalauM09a}.
In light of this, argument search has been proposed to retrieve relevant arguments for a given query (e.g., \emph{gay marriage}).
Several argument search approaches have been proposed in the literature~\cite{Habernal2015,Peldszus2013}.
However, major challenges of argument retrieval remain, such as identifying and clustering arguments concerning controversial topics
and extracting arguments from a wide range of texts on a fine-grained level.
For instance, the argument search system args.me \cite{Wachsmuth2017} uses a keyword search to match relevant documents and lacks the extraction of individual arguments. In contrast, ArgumenText \cite{Stab2018} applies keyword matching to single sentences to identify relevant arguments and neglects their context,
yielding rather shallow arguments.
Furthermore, IBM Debater \cite{Levy2018} extracts arguments from Wikipedia articles with rules that exploit prevalent structures to identify sentence-level arguments.
Overall, for a fully equipped framework, the existing approaches lack (a)~semantic argument-query matching,
(b)~a segmentation of documents into argumentative units of arbitrary size, and (c)~a clustering of arguments
w.r.t.\ subtopics.
In this paper, we propose a novel argument search framework
that addresses these aspects
(see Figure~\ref{fig:Approach:Pipeline}):
During the \textit{topic clustering} step, we group argumentative documents by topics and, thus, identify the set of %
relevant documents for a given search query
(e.g., \textit{gay marriage}). To overcome the limitations of keyword search approaches, we rely on semantic representations via embeddings
in combination with established clustering algorithms.
Based on the relevant documents, the \textit{argument segmentation} step aims at identifying and separating arguments. We hereby understand arguments to consist of one or multiple sentences and propose a BiLSTM-based sequence labeling method. %
In the final \textit{argument clustering} step, we identify the different aspects of the topic at hand that are
covered
by the arguments identified in the previous step.
We evaluate all three steps of our framework based on
real-world data sets from several domains.
By
using the output of one framework's component as input for the subsequent component,
our evaluation is
particularly
challenging but
realistic. In the evaluation, we show that by using embeddings, text labeling, and clustering, we can extract and aggregate arguments from unstructured text to a considerable extent.
Overall, our framework
provides the basis for advanced
argument search in real-world scenarios with little training data.
\begin{figure*}[tb]
\centering
\includegraphics[width=0.99\textwidth]{images/new_approach.png}
\caption{Overview of our framework for argument search. %
}
\label{fig:Approach:Pipeline}
\end{figure*}
In total, we make the following contributions:
\begin{itemize}
\setlength\itemsep{0em}
\item We propose a novel argument search framework for fine-grained argument retrieval
based on topic clustering, argument identification, and argument clustering.\footnote{The source is available online at \url{https://github.com/michaelfaerber/arg-search-framework}.}
\item We provide a new evaluation data set for sequence labeling of arguments.
\item We evaluate all steps of our framework extensively based on four data sets.
\end{itemize}
In the following,
after discussing related work in Section~\ref{sec:RelatedWork},
we propose our argument mining framework in Section~\ref{sec:Approach}.
Our evaluation is presented in Section~\ref{sec:Evaluation}.
Section~\ref{sec:Conclusion} concludes the paper.
\section{Related Work}
\label{sec:RelatedWork}
\textbf{Topic Clustering. }
Various approaches for modeling the topics of documents have been proposed, such as Latent Dirichlet Allocation (LDA) and Latent Semantic Indexing (LSI).
Topic detection and tracking~\cite{Wayne00} and topic segmentation~\cite{Ji2003} have been pursued in detail in the IR community.
\citet{Sun2007} introduce an unsupervised method for
topic detection and topic segmentation of multiple similar documents.
Among others, \citet{Barrow2020}, \citet{Arnold2019}, and \citet{Mota2019} propose models for segmenting documents and assigning topic labels to these segments, but ignore arguments.
\textbf{Argument Identification. }
\label{sec:RelatedWork:ArgumentRecognition}
Argument identification can be approached on the \textit{sentence level} by deciding for each sentence whether it constitutes an argument. For instance, IBM Debater \cite{Levy2018} relies on a combination of rules and weak supervision for classifying sentences as arguments.
In contrast, ArgumenText \cite{Stab2018} does not limit its argument identification to sentences.
\citet{Reimers2019} show that contextualized word embeddings can improve the identification of sentence-level arguments.
Argument identification has been approached on the level of \textit{argument units}, too. Argument units are defined as different parts of an argument.
\citet{Ajjour2017} compare machine learning techniques
for argument segmentation on several corpora. %
The authors observe that
BiLSTMs mostly achieve the best results. %
Moreover, \citet{Eger2017} and \citet{Petasis2019} show
that using more advanced models, such as combining a BiLSTM with CRFs and CNNs, hardly improves the BIO tagging results. Hence, we also create a BiLSTM model for argument identification.
\textbf{Argument Clustering.}
\label{sec:RelatedWork:ArgumentClustering}
\citet{Ajjour2019a} approach argument aggregation by identifying non-overlapping \emph{frames}, defined as a set of arguments from one or multiple topics that focus on the same aspect.
\citet{Bar-Haim2020} propose an argument aggregation approach by mapping similar arguments to common \emph{key points}, i.e., high-level arguments.
They observe that models with BERT embeddings perform the best for this task.
\citet{Reimers2019} propose the clustering of arguments based on the similarity of two sentential arguments with respect to their topics.
Also here, a fine-tuned
BERT model is most successful for assessing the argument similarity automatically. %
While our framework is also based on BERT for argument clustering, our arguments can consist of several sentences, which makes the framework more flexible on the one hand but the argument clustering more challenging on the other.
\textbf{Argument Search Demonstration Systems. } %
\label{sec:RelatedWork:Frameworks}
\citet{Wachsmuth2017} propose the argument search framework \textit{args.me}
using online debate platforms. %
The arguments are
considered
on document level.
\citet{Stab2018} propose the framework \textit{ArgumenText} for argument search. %
The retrieval of topic-related web documents
is based on keyword matching, while the argument identification is based on a binary sentence classification.
\citet{Levy2018} propose
\textit{IBM Debater} based on Wikipedia articles. Arguments are defined as single claim sentences that explicitly discuss a \emph{main concept} in Wikipedia
and that are identified via rules.
\textbf{Argument Quality Determination.}
Several approaches and data sets have been published on determining the quality of arguments \cite{DBLP:conf/acl/GienappSHP20,DBLP:conf/cikm/DumaniS20,DBLP:conf/aaai/GretzFCTLAS20}, which is beyond the scope of this paper.
\subsection{Evaluation Results}
\label{sec:Results}
In the following, we present the evaluation results for the tasks topic clustering, argument identification, and argument clustering.
\begin{table*}[h]
\centering
\caption{Results of the clustering of arguments of the \emph{debatepedia} data set by topical aspects. \emph{across topics}:~tf-idf scores are computed across topics, \emph{without noise}:~{HDBSCAN} is only evaluated on those examples which are not classified as noise.}
\label{tab:Results:ASUnsupervised}
\begin{small}
\resizebox{0.85\textwidth}{!}{
\begin{tabular}{@{}l l l cccc l@{}}
\toprule
Embedding & Algorithm & Dim. Reduction & $ARI$ & $Ho$ &$Co$ & $BCubed~F_1$ & Remark\\
\midrule
tf-idf & HDBSCAN & UMAP & 0.076 & 0.343 &0.366& 0.390 & \\
tf-idf & HDBSCAN & UMAP & 0.015 & 0.285 &0.300& 0.341 &\emph{across topics} \\
tf-idf & HDBSCAN & $-$ & \textbf{0.085} & \textbf{0.371} &\textbf{0.409}& \textbf{0.407} & \\
tf-idf & k-means & $-$ & 0.058 & 0.335 &0.362& 0.397 & \\
tf-idf & k-means & $-$ & 0.049 & 0.314 &0.352& 0.402 &\emph{across topics} \\
\midrule
bert-cls & HDBSCAN & UMAP & 0.030 & 0.280 &0.298& 0.357 & \\
bert-cls & HDBSCAN & $-$ & 0.016 & 0.201 &0.324& 0.378 & \\
bert-cls & k-means & $-$ & 0.044 & 0.332 &0.326& 0.369 & \\
\midrule
bert-avg & HDBSCAN & UMAP & 0.069 & 0.321 &0.352& 0.389 & \\
bert-avg & HDBSCAN & $-$ & 0.018 & 0.170 &0.325& 0.381 & \\
bert-avg & k-means & $-$ & 0.065 & 0.337 &0.349& 0.399 & \\
\midrule
tf-idf & HDBSCAN & $-$ & 0.140 & 0.429 &0.451& 0.439 &\emph{without noise}\\
\bottomrule
\end{tabular}
}
\end{small}
\end{table*}
\begin{figure*}[h]
\centering
\includegraphics[width = 0.42\textwidth]{key_results/AS/hdbscan_scatterplot2}
\includegraphics[width = 0.42\textwidth]{key_results/AS/kmeans-bert_scatterplot2}
\caption{Argument clustering results (measured by $ARI$, $BCubed F_1$, and $homogeneity$) for HDBSCAN on tf-idf embeddings and k-means on \emph{bert-avg} embeddings.}
\label{fig:Results:Scatterplot}
\end{figure*}
\textbf{Topic Clustering.}
We evaluate unsupervised topic clustering based on the 170 topics from the \textit{debatepedia} data set. Given Table~\ref{tab:Results:ClusteringUnsupervised}, we see that density-based clustering algorithms, such as {HDBSCAN}, applied to tf-idf document embeddings are particularly suitable for this task and clearly outperform alternative clustering approaches. We find that their ability to handle unclear examples as well as clusters of varying shapes, sizes, and densities is crucial to their performance.
{HDBSCAN} in combination with a preceding dimensionality reduction step achieves an $ARI$ score of 0.779. However, these quantitative results must be considered from the standpoint that topics in the \emph{debatepedia} data set are overlapping and, thus, the reported scores are lower bound estimates.
When evaluating the clustering results for the \emph{debatepedia} data set qualitatively,
we find that many predicted clustering decisions are reasonable but evaluated as erroneous in the quantitative assessment. For instance,
we see that
documents on \textit{gay parenting}
appearing in the debate about \textit{gay marriage} can be assigned to a cluster with documents on \textit{gay adoption}.
Furthermore, we investigate the impact on the recall of relevant documents and observe a clear improvement of topic clusters over keyword matching for argument-query matching.
For instance, given the topic \emph{gay adoption}, many documents from debatepedia.org use related terms like \emph{homosexual} and \emph{parenting} instead of explicitly mentioning \emph{`gay'} and \emph{`adoption'} and, thus, cannot be found by a keyword search.
We additionally evaluate the inductive capability of our topic clustering approach by applying it to debates from debate.org (see Table~\ref{tab:Results:ClusteringUnsupervisedORG}). We observe that the unsupervised clustering on tf-idf embeddings achieves a moderate $ARI$ score of 0.436 on debates from debate.org.
\textbf{Argument Identification.}
To evaluate the identification of argumentative units based on sentence-level sequence-labeling, we apply our approach to the \emph{Student Essay} data set (see Table~\ref{tab:Results:Segmentation}) and achieve a macro $F_1$ score of 0.705 with a BiLSTM-based sequence-learning model on sentence embeddings computed with BERT. Furthermore, we observe a strong influence of the data sources on the results for argument identification. For instance, in the case of the \textit{Student Essay} data set, information about the current sentence as well as the surrounding sentences is available yielding accurate segmentation results ($F_{1,macro}=0.71$, $F_{1,macro}^{BI}=0.96$).
\textbf{Argument Clustering.}
Finally, we evaluate our approach for clustering arguments consisting of one or several sentences by topical aspects. We evaluate the clustering
based on tf-idf and BERT embeddings of the arguments (see Table~\ref{tab:Results:ASUnsupervised}). Overall, the performance of the
investigated argument clustering methods is rather low. We find that this is due to information on topical aspects being scarce and often underweighted when arguments span multiple sentences.
Given Figure~\ref{fig:Results:Scatterplot}, we can observe that the performance of the clustering does not depend on the number of arguments.
\begin{table}
\centering
\caption{Confusion matrices of $BILSTM_{cls}^{bert}$: Rows represent ground-truth labels and columns represent predictions.\\Left: \emph{Student Essay} data set, Right: \emph{debate.org} data set.}
\label{tab:Results:CM}
\begin{small}
\begin{tabular}{c| ccc}
& B & I & O\\
\midrule
B&15 & 33 & 0\\
I&24 & 226 & 0 \\
O&39 & 94 & 2\\
\bottomrule
\end{tabular}
\hspace{1cm}
\begin{tabular}{c| ccc}
& B & I & O\\
\midrule
B&1 & 24 & 23\\
I&0 & 135 & 115 \\
O&0 & 50 & 85\\
\bottomrule
\end{tabular}
\end{small}
\end{table}
\begin{table*}[tb]
\centering
\caption{Overview of the topics with the highest $ARI$ scores for HDBSCAN and k-means.}
\label{tab:Results:BestTopics}
\begin{small}
\begin{tabular}{@{}l p{7cm} rr cccc @{}}
\toprule
&Topic & \#$Clust._{true}$ & \#$Clust._{pred}$ & $ARI$ & $Ho$ &$Co$ & $BCubed~F_1$ \\
\toprule
\multicolumn{8}{c}{\textbf{HDBSCAN based on tf-idf embeddings}}\\
\midrule
1 &Rehabilitation vs retribution
&2 &3 & 0.442 &1.000 &0.544 &0.793 \\
2 &Manned mission to Mars
&5 &6 &0.330 & 0.461&0.444 & 0.515\\
3 & New START Treaty
&5 &4 &0.265 &0.380 &0.483 &0.518 \\
4 &Obama executive order to raise the debt ceiling
&3 &6 & 0.247 &0.799 &0.443 &0.568 \\
5 &Republika Srpska secession from Bosnia and Herzegovina
&4 &6 &0.231 &0.629 &0.472 &0.534 \\
6& Ending US sanctions on Cuba
&11 &9 & 0.230&0.458 &0.480 &0.450 \\
\toprule
\multicolumn{8}{c}{\textbf{k-means based on \emph{bert-avg} embeddings}}\\
\midrule
1 &Bush economic stimulus plan
&7 &5 & 0.454 &0.640 &0.694 &0.697 \\
2 &Hydroelectric dams
&11 &10 &0.386 & 0.570&0.618 & 0.537\\
3 & Full-body scanners at airports
&5 &6 &0.301 &0.570 &0.543 &0.584 \\
4 &Gene patents
&4 &7 & 0.277 &0.474 &0.366 &0.476 \\
5 &Israeli settlements
&3 &4 &0.274 &0.408 &0.401 &0.609 \\
6& Keystone XL US-Canada oil pipeline
&2 &3 & 0.269&0.667 &0.348 &0.693 \\
\bottomrule
\end{tabular}
\end{small}
\end{table*}
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\textwidth]{key_results/AS/ARI_two_models3}
\includegraphics[width=0.4\textwidth]{key_results/AS/ARI_hdbscan2}
\caption{Argument clustering consensus, measured by the $ARI$ between (top) the HDBSCAN and k-means clusterings and (bottom) the HDBSCAN clustering and the ground truth.}
\label{fig:Results:Hist}
\end{figure}
\subsubsection{Topic Clustering}
\label{sec:Results:DS}
The results of the unsupervised clustering approach are shown in Table \ref{tab:Results:ClusteringUnsupervised}, based
on a subset of the \emph{debatepedia} data set with 170 topics. The simple model $ARGMAX_{none}^{tfidf}$ computes clusters based on the words with the highest tf-idf score within a document. Its $ARI$ score of $0.470$ indicates that many topics in the \emph{debatepedia} data set can already be represented by a single word. Considering the $ARGMAX_{lsa}^{tfidf}$ model, the lower $ARI$ score of $0.368$ indicates that using the topics found by LSA adds no value compared to plain tf-idf vectors.
Furthermore, we compare k-means with the density-based clustering algorithm HDBSCAN.
First, we find that $HDBSCAN_{lsa + umap}^{tfidf}$ and $KMEANS_{none}^{tfidf}$, with $ARI$ scores of $0.694$ and $0.703$, respectively, achieve comparable performance on the tf-idf vectors. However, since HDBSCAN accounts for noise in the data, which it pools within a single cluster, the five rightmost columns of Table \ref{tab:Results:ClusteringUnsupervised} have to be considered when deciding on a clustering method for further use. When excluding the noise cluster from the evaluation and thereby considering only the approx. 96\% of instances for which $HDBSCAN_{lsa + umap}^{tfidf}$ and $HDBSCAN_{umap}^{tfidf}$ are sure about the cluster membership, the $ARI$ score of the clustering amounts to $0.78$. Considering that 21.1\% of the instances of $HDBSCAN_{none}^{tfidf}$ are classified as noise, compared to 4.3\% for $HDBSCAN_{umap}^{tfidf}$, we conclude that applying a UMAP dimensionality reduction step before the HDBSCAN clustering is essential for its performance on the \emph{debatepedia} data set.
\textbf{Qualitative Evaluation.} Looking at the predicted clusters reveals that many of the clusters considered as erroneous are reasonable. For example, $HDBSCAN_{lsa + umap}^{tfidf}$ maps the document with the title \emph{`Military recruiting: Does NCLB rightly allow military recruiting in schools?'} of the topic \emph{`No Child Left Behind Act'} to the cluster representing the topic \emph{`Military recruiting in public schools'}.
Another such example is a cluster that is largely comprised of documents about \emph{`Gay adoption'}, but contains one additional document \emph{`Parenting: Can homosexuals do a good job of parenting?'} from the topic \emph{`Gay marriage'}.
\textbf{Inductive Generalization.}
We apply the unsupervised approach to the \emph{debate.org} data set to evaluate whether the model is able to generalize. The results, given in Table \ref{tab:Results:ClusteringUnsupervisedORG}, show that the unsupervised approach using HDBSCAN and k-means still achieves reasonable results, with $ARI$ scores of 0.354 and 0.436, respectively. K-means performs distinctly better on the \emph{debate.org} data set than HDBSCAN because this data set is characterized by a high number of single-element topics. In contrast to k-means, HDBSCAN does not allow single-element clusters, so these topics are pooled into a single noise cluster. This is also reflected by the different numbers of clusters as well as the lower homogeneity scores of HDBSCAN (0.633 and 0.689, compared to 0.822 for k-means) at a comparable completeness of approx. 0.8.
\textbf{Topic Clusters for Argument-Query Matching.}
Applying a keyword search would retrieve only the subset of relevant arguments that mention the search phrase explicitly. Combining a keyword search on documents with the clusters found by HDBSCAN enables an argument search framework to retrieve arguments from a broader set of documents. For example, in the \textit{debatepedia.org} data set, we observe that clusters of arguments related to \emph{gay adoption} include words like \emph{parents}, \emph{mother}, \emph{father}, \emph{sexuality}, \emph{heterosexual}, and \emph{homosexual}, while neither \emph{gay} nor \emph{adoption} is mentioned explicitly.
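The retrieval gain of combining keyword matching with topic clusters can be sketched as follows (documents, terms, and cluster assignments are invented for illustration): a document that never mentions the query terms is still retrieved because it shares a topic cluster with a keyword hit.

```python
def keyword_search(query_terms, docs):
    """Plain keyword matching: ids of documents mentioning any query term."""
    return {i for i, text in docs.items()
            if any(term in text for term in query_terms)}

def cluster_expanded_search(query_terms, docs, cluster_of):
    """Expand keyword hits to all documents sharing a topic cluster with a hit."""
    hits = keyword_search(query_terms, docs)
    hit_clusters = {cluster_of[i] for i in hits}
    return {i for i in docs if cluster_of[i] in hit_clusters}

docs = {
    0: "gay adoption should be legal",
    1: "homosexual couples can be good parents",  # no query term mentioned
    2: "nuclear energy is safe",
}
cluster_of = {0: "c1", 1: "c1", 2: "c2"}  # hypothetical topic clusters
print(cluster_expanded_search(["gay", "adoption"], docs, cluster_of))  # {0, 1}
```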
\subsubsection{Argument Identification}
\label{sec:Results:SEG}
The results of the argument identification step based on the \emph{Student Essay} data set are given in Table \ref{tab:Results:Segmentation}. We evaluate a BiLSTM model in a sequence-labeling setting against a feedforward neural network (FNN) in a classification setting in order to investigate the importance of context for the identification of arguments. Using the BiLSTM improves the macro $F_1$ score relative to the feedforward neural network by 3.9\% on the \emph{bert-avg} embeddings and by 15.2\% on the \emph{bert-cls} embeddings. Furthermore, we evaluate the effect of using the sentence embeddings \emph{bert-avg} and \emph{bert-cls}.
Using \emph{bert-cls} embeddings increases the macro $F_1$ score by 4.3\% in the classification setting and by 15.6\% in the sequence-learning setting relative to using \emph{bert-avg}. Considering the sequence-learning and classification results together, we conclude that the \emph{bert-cls} embeddings encode important additional information that is only exploited in a sequence-learning setting.
Our results also reflect a peculiarity of the \emph{Student Essay} training data: most sentences are part of an argument, and only $\sim$3\% of the sentences are labeled as outside (`O'). Accordingly, the models observe hardly any examples of outside sentences during training and thus have difficulties learning to distinguish them from other sentences, which is reflected by the very low precision and recall for the class `O'. Since the correct identification of a `B' sentence alone is already enough to separate two arguments from each other, the purpose of labeling a sentence as `O' is restricted to classifying the respective sentence as non-argumentative. In the case of the \emph{Student Essay} data set, separating two arguments from each other is therefore much more important than separating non-argumentative sentences from arguments. Accordingly, the last column of Table \ref{tab:Results:Segmentation} reports the macro $F_1$ score only for the `B' and `I' labels, which amounts to 0.956 for our best model. This reflects the model's high ability to separate arguments from each other.
Furthermore, we evaluate whether the previously best-performing model $BILSTM_{cls}^{bert}$ is able to identify arguments on the \emph{debate.org} data set if trained on the \emph{Student Essay} data set. The results are given in Table \ref{tab:Results:SegORG}.
The pretrained model performs very poorly on \emph{`O'} sentences since only few examples of \emph{`O'} sentences were observed during training. Moreover, applying the pretrained $BILSTM_{cls}^{bert}$ to the \emph{debate.org} data set yields very low precision and recall on \emph{`B'} sentences. In contrast to the \emph{Student Essay} data set, where arguments often begin with cue words (e.g., \emph{first}, \emph{second}, \emph{however}), the documents in the \emph{debate.org} data set use cue words less often, which impairs the classification performance.
The results of training the $BILSTM_{cls}^{bert}$ model from scratch on the \emph{debate.org} data set differ strongly from our results on the \emph{Student Essay} data set. As shown in Table \ref{tab:Results:CM}, the confusion matrix for the \emph{debate.org} data set shows that the BiLSTM model has difficulties learning which sentences start an argument. In contrast to the \emph{Student Essay} data set, it cannot exploit peculiarities such as cue words and apparently fails to find other indicators of \emph{`B'} sentences. In addition, the distinction between \emph{`I'} and \emph{`O'} sentences is not clear either. These results match our experience with annotating documents of the \emph{debate.org} data set, where it was often difficult to decide whether a sentence forms part of an argument and to which argument it belongs. This is also reflected by the inter-annotator agreement of 0.24 based on Krippendorff's $\alpha$ on a subset of 20 documents with three annotators.
Overall, we find that the performance of the argument identification strongly depends on the peculiarities and quality of the underlying data set. For well-curated data sets such as the \emph{Student Essay} data set, the information contained in the current sentence as well as the surrounding sentences yields an accurate identification of arguments. In contrast, data sets with poor structure or colloquial language, such as the \emph{debate.org} data set, lead to less accurate results.
\subsubsection{Argument Clustering}
\label{sec:Results:AS}
In the final step of the argument search framework, we evaluate the argument clustering according to topical aspects based on the \emph{debatepedia} data set. To this end, we evaluate the performance of the clustering algorithms HDBSCAN and k-means for the document embeddings that yielded the best results in the topic clustering step of our framework. We estimate the parameter $k$ for the k-means algorithm for each topic using a linear regression based on the number of clusters relative to the number of arguments in this topic. As shown in Figure \ref{fig:Results:Correlation}, we observe a linear relationship between the number of topical aspects and the argument count per topic in the \emph{debatepedia} data set. As shown in Table \ref{tab:Results:ASUnsupervised}, we perform the clustering based on the arguments in each ground-truth topic separately and average the results across the topics. We observe that HDBSCAN performs best on tf-idf embeddings with an averaged $ARI$ score of 0.085, while k-means achieves its best performance on \emph{bert-avg} embeddings with an averaged $ARI$ score of 0.065. Using HDBSCAN instead of k-means on tf-idf embeddings yields an improvement in the $ARI$ score of 2.7\%. Using k-means instead of HDBSCAN on \emph{bert-avg} and \emph{bert-cls} embeddings results in improvements of 4.7\% and 2.8\%, respectively.
Using an UMAP dimensionality reduction step before applying HDBSCAN outperforms k-means on \emph{bert-avg} embeddings with an $ARI$ score of 0.069. However, on tf-idf embeddings using an UMAP dimensionality reduction step slightly reduces the performance. Comparing the results of using \emph{bert-cls} embeddings versus \emph{bert-avg} embeddings, we find that \emph{bert-avg} embeddings result in slightly better scores with a maximum improvement of 2.1\%. We further compare calculating tf-idf vectors only within each topic relative to computing them based on all arguments in the whole dataset (\emph{across topics}). This change only affects the document frequencies used to calculate the tf-idf scores. Therefore, terms that are characteristic for a given topic are likely to show higher document frequencies and thus lower tf-idf scores when the computation is performed within each topic. Since tf-idf scores indicate the relevance of each term, clustering algorithms focus more on terms which distinguish the arguments from each other within a topic.
Accordingly, the observed deviation of the $ARI$ score by 0.9\% for k-means and 6.1\% for HDBSCAN in combination with UMAP matches our expectation.
We also evaluated the exclusion of the HDBSCAN noise clusters (\emph{without noise}) yielding an $ARI$ score of 0.140 and a $BCubed~F_1$ score of 0.439.
Furthermore, we show in Table \ref{tab:Results:BestTopics} for the best performing k-means and HDBSCAN models the topics with the highest $ARI$ scores. We observe that the clustering performance on topics with the best clustering performance is still relative low, in particular when compared to the results of the topic clustering step. As indicated by decreasing $ARI$ scores, both models have only a few topics where they perform comparatively well.
Figure~\ref{fig:Results:Scatterplot} shows the performance of the two models with respect to the number of arguments in each topic. Both $ARI$ and $BCubed~F_1$ scores show a very similar distribution for topics with different numbers of arguments, while the distributions of the homogeneity score show a slight difference for the two models. This indicates that the performance of the clustering algorithms does not depend on the number of arguments.
Moreover, the upper histogram in Figure \ref{fig:Results:Hist} displays the distribution of the degree of consensus between the two models based on the $ARI$ scores for each topic. Comparing this with the distribution of the $ARI$ score computed between the HDBSCAN model and the ground-truth displayed on the histogram below, we observe that the consensus between the two models is, for most topics, rather low.
Overall, our results show the difficulties of argument clustering based on topical aspects.
Based on the \emph{debatepedia} data set, we show that unsupervised clustering algorithm with the proposed embedding methods cannot cluster arguments into topical aspects in a consistent and reasonable way. This result is in line with the results of Reimers et al.~\cite{Reimers2019} stating that even experts have difficulties to identify argument similarity based on topical aspects.
Considering that their evaluation is based on sentence-level arguments, it seems likely that assessing argument similarity is even harder for arguments comprised of one or multiple sentences.
Moreover, the authors report promising results for the pairwise assessment of argument similarity when using the output corresponding to the BERT $[CLS]$ token.
However, our experiments show that their findings do not apply to \emph{debatepedia} data set. We assume that this is due to differences in argument similarity that are introduced by using prevalent topics in the \emph{debatepedia} data set rather than using explicitly annotated arguments.
\fi
\subsection{Evaluation Results}
\label{sec:Results}
In the following, we present the results for the three tasks: topic clustering, argument identification, and argument clustering.
\subsubsection{Topic Clustering}
\label{sec:Results:DS}
We start with the results of clustering the documents by topics based on
the \emph{debatepedia} data set with 170 topics (see Table \ref{tab:Results:ClusteringUnsupervised}). The plain model $ARGMAX_{none}^{tfidf}$, which computes clusters based on words with the highest tf-idf score within a document, achieves an $ARI$ score of $0.470$. This indicates that many topics in the \emph{debatepedia} data set can already be represented by a single word. Considering the $ARGMAX_{lsa}^{tfidf}$ model, the $ARI$ score of $0.368$ shows that using the topics found by LSA does not add value compared to tf-idf.
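The plain ARGMAX model can be sketched in a few lines: each document is labeled with its highest-scoring tf-idf term. The following is a minimal illustration on a toy two-document corpus with a simple smoothed tf-idf variant; it is not the exact weighting used in our experiments.

```python
import math
from collections import Counter

def top_tfidf_term(docs):
    """Assign each document the term with its highest tf-idf score
    (a minimal ARGMAX-style cluster label)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    labels = []
    for tokens in tokenized:
        tf = Counter(tokens)
        # smoothed idf; terms occurring in every document score (near) zero
        scores = {t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf}
        labels.append(max(scores, key=scores.get))
    return labels

docs = [
    "the marriage debate about gay marriage",
    "the dams debate about hydroelectric dams",
]
print(top_tfidf_term(docs))  # frequent topic-specific words win
```

Common function words such as \emph{the} or \emph{debate} occur in every document and receive an idf of zero, so the topic-characteristic word is selected as the label.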
Furthermore, we
find that the tf-idf-based
$HDBSCAN_{lsa + umap}^{tfidf}$ and $KMEANS_{none}^{tfidf}$ achieve comparable performance given the $ARI$ scores of $0.694$ and $0.703$. However, since HDBSCAN accounts for noise in the data, which it pools into a single cluster, the five rightmost columns of Table \ref{tab:Results:ClusteringUnsupervised} need to be considered when deciding on a clustering method for further use. When excluding the noise cluster from the evaluation,
the $ARI$ score increases considerably for all $HDBSCAN$ models ($HDBSCAN_{none}^{tfidf}$: $0.815$, $HDBSCAN_{umap}^{tfidf}$: $0.779$, $HDBSCAN_{lsa+umap}^{tfidf}$: $0.775$). Considering that $HDBSCAN_{none}^{tfidf}$ identifies 21.1\% of the instances as noise, compared to 4.3\%
in case of $HDBSCAN_{umap}^{tfidf}$, we conclude that applying a UMAP dimensionality reduction step before the HDBSCAN clustering clearly benefits the performance (at least for the \emph{debatepedia} data set).
\begin{table*}[tb]
\centering
\caption{Results of argument identification on the \emph{Student Essay} data set.
}
\label{tab:Results:Segmentation}
\begin{small}
\begin{tabular}{@{}l cc cc cc ccc@{}}
\toprule
& \multicolumn{2}{c}{\emph{B}}& \multicolumn{2}{c}{\emph{I}}& \multicolumn{2}{c}{\emph{O}}& \multicolumn{2}{r}{}\\ \cline{2-3} \cline{4-5} \cline{6-7}
& $Prec.$ & $Rec.$
& $Prec.$ & $Rec.$
& $Prec.$ & $Rec.$
&$F_{1, macro}$ & $F_{1, weighted}$ &$F_{1, macro}^{B, I}$ \\
\midrule
majority class
& 0.000 & 0.000
& 0.719 & 1.000
& 0.000 & 0.000
& 0.279 & 0.602 & 0.419\\
$FNN_{avg}^{bert}$
& 0.535 & 0.513
& 0.820 & 0.836
& 0.200 & 0.160
& 0.510 & 0.736 & 0.675\\
$FNN_{cls}^{bert}$
& 0.705 & 0.593
& 0.849 & 0.916
& \textbf{0.400} & 0.080
& 0.553 & 0.805 & 0.763\\
$BILSTM_{avg}^{bert}$
& 0.766 & 0.713
& 0.885 & 0.930
& 0.000 & 0.000
& 0.549 & 0.846 & 0.823 \\
$BILSTM_{cls}^{bert}$
& \textbf{0.959} & \textbf{0.914}
& \textbf{0.967} & \textbf{0.985}
& 0.208 & \textbf{0.200}
& \textbf{0.705} & \textbf{ 0.951} & \textbf{0.956} \\
\bottomrule
\end{tabular}
\end{small}
\end{table*}
\begin{table*}[tb]
\centering
\caption{Argument identification results on the \emph{debate.org} data set; model trained on \emph{Student Essay} and \emph{debate.org}.}
\label{tab:Results:SegORG}
\begin{small}
\begin{tabular}{@{}ll cc cc cc cc@{}}
\toprule
& & \multicolumn{2}{c}{\emph{B}}& \multicolumn{2}{c}{\emph{I}}& \multicolumn{2}{c}{\emph{O}}& \multicolumn{2}{r}{}\\
\cline{3-8}
&\emph{trained on}
& $Prec.$ & $Rec.$
& $Prec.$ & $Rec.$
& $Prec.$ & $Rec.$
&$F_{1, macro}$ & $F_{1, weighted}$ \\
\midrule
$BILSTM_{cls}^{bert}$ & \emph{Student Essay}
& 0.192 & 0.312
& 0.640 & 0.904
& 1.000 & 0.015
& 0.339 & 0.468\\
$BILSTM_{cls}^{bert}$ &\emph{debate.org}
& 1.000 & 0.021
& 0.646 & 0.540
& 0.381 & 0.640
& 0.368 & 0.492\\
\bottomrule
\end{tabular}
\end{small}
\end{table*}
\textbf{Inductive Generalization.}
We apply our unsupervised approaches to the \emph{debate.org} data set to evaluate whether the models are able to generalize. The results, given in Table \ref{tab:Results:ClusteringUnsupervisedORG}, show that
k-means performs distinctly better on the \emph{debate.org} data set than HDBSCAN (ARI: $0.436$ vs. $0.354$ and $0.330$; $F_1$: $0.644$ vs. $0.479$ and $0.520$). This is likely because the \emph{debate.org} data set is characterized by a high number of single-element topics. In contrast to k-means, HDBSCAN does not allow single-element clusters; such instances are instead pooled into a single noise cluster. This is reflected by the different numbers of clusters as well as the lower homogeneity scores of HDBSCAN (0.633 and 0.689, compared to 0.822 for k-means) at a comparable completeness of approx. 0.8.
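The homogeneity and completeness scores reported here follow the standard entropy-based definitions, which can be sketched in pure Python as follows; this is a minimal reimplementation for illustration, in practice a library implementation would be used.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def conditional_entropy(a, b):
    """H(a | b): entropy of labels `a` within each cluster of `b`."""
    n = len(a)
    groups = {}
    for x, y in zip(a, b):
        groups.setdefault(y, []).append(x)
    return sum((len(m) / n) * entropy(m) for m in groups.values())

def homogeneity(truth, pred):
    """1 iff every predicted cluster contains only one ground-truth class."""
    h_c = entropy(truth)
    return 1.0 if h_c == 0 else 1.0 - conditional_entropy(truth, pred) / h_c

def completeness(truth, pred):
    """1 iff every ground-truth class falls into a single predicted cluster."""
    h_k = entropy(pred)
    return 1.0 if h_k == 0 else 1.0 - conditional_entropy(pred, truth) / h_k
```

For example, pooling everything into one cluster yields perfect completeness but zero homogeneity, which is exactly the failure mode of a large noise cluster.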
\textbf{Qualitative Evaluation.}
Applying a keyword search would retrieve only a subset of the relevant arguments that mention the search phrase explicitly. Combining a keyword search on documents with the computed clusters
enables an argument search framework to retrieve arguments from a broader set of documents. For example,
for \textit{debatepedia.org}, we observe that clusters of arguments related to \emph{gay adoption} include words like \emph{parents}, \emph{mother}, \emph{father}, \emph{sexuality}, \emph{heterosexual}, and \emph{homosexual} while neither \emph{gay} nor \emph{adoption} are mentioned explicitly.
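The retrieval idea can be sketched as follows: any cluster containing at least one keyword hit contributes all of its members to the result, so arguments that never mention the query term are still retrieved. The documents and cluster assignments below are toy examples echoing the \emph{gay adoption} observation.

```python
def keyword_search(docs, query):
    """Plain substring search: only explicit mentions are found."""
    return {i for i, doc in enumerate(docs) if query in doc.lower()}

def cluster_expanded_search(docs, clusters, query):
    """Return all members of every cluster that contains a keyword hit."""
    hits = keyword_search(docs, query)
    hit_clusters = {clusters[i] for i in hits}
    return {i for i, c in enumerate(clusters) if c in hit_clusters}

docs = [
    "gay adoption should be legal",          # cluster 0, explicit mention
    "heterosexual and homosexual parents",   # cluster 0, no query term
    "hydroelectric dams harm rivers",        # cluster 1
]
clusters = [0, 0, 1]
print(sorted(cluster_expanded_search(docs, clusters, "adoption")))
```

Document 1 is retrieved although it mentions neither \emph{gay} nor \emph{adoption}, because it shares a cluster with a document that does.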
\subsubsection{Argument Identification}
\label{sec:Results:SEG}
The results of the argument identification step based on the \emph{Student Essay} data set are given in Table~\ref{tab:Results:Segmentation}. We evaluate a BiLSTM model in a sequence-labeling setting against two baselines: majority voting and a feedforward neural network (FNN).
Using the BiLSTM improves the macro $F_1$ score relative to the feedforward neural network on the \emph{bert-avg} embeddings by 3.9\% and on the \emph{bert-cls} embeddings by 15.2\%. Furthermore,
using \emph{bert-cls} embeddings increases the macro $F_1$ score by 4.3\% in the classification setting and by 15.6\% in the sequence-learning setting compared to using \emph{bert-avg}.
\textbf{BIO Tagging.}
We observe low precision and recall for the class `O'.
This can be traced back to a peculiarity of the \emph{Student Essay} data set: most sentences in its training data are part of an argument, and only 3\% of the sentences are labeled as non-argumentative (outside/`O'). Accordingly, the models observe hardly any examples of outside sentences during training and thus have difficulties in learning to distinguish them from other sentences.
Since the correct identification of a `B' sentence alone is already enough to separate two arguments from each other, the purpose of labeling a sentence as `O' is restricted to classifying the respective sentence as non-argumentative. Therefore, in case of the \emph{Student Essay} data set, the task of separating two arguments from each other becomes much more important than separating non-argumentative sentences from arguments. In the last column of Table \ref{tab:Results:Segmentation}, we also show the macro $F_1$ score for the `B' and `I' labels only. The high macro $F_1$ score of 0.956 for the best-performing model reflects the model's strong ability to separate arguments from each other.
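The restricted macro $F_{1}$ score over `B' and `I' can be computed as sketched below; a minimal pure-Python version, with toy label sequences that are illustrative only.

```python
def f1_macro(y_true, y_pred, labels):
    """Macro-average F1 over the given label subset (e.g., only 'B' and 'I')."""
    f1s = []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["B", "I", "I", "O", "B", "I"]
y_pred = ["B", "I", "O", "O", "B", "B"]
print(f1_macro(y_true, y_pred, labels=["B", "I"]))  # 'O' errors are ignored
```

Restricting `labels` to `["B", "I"]` removes the influence of the rare `O' class from the average, which is exactly the purpose of $F_{1, macro}^{B, I}$.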
\begin{table}
\centering
\caption{Confusion matrices of $BILSTM_{cls}^{bert}$ (rows: ground-truth labels; columns: predictions).}
\label{tab:Results:CM}
\begin{footnotesize}
\begin{tabular}{c| ccc}
\multicolumn{4}{c}{Student Essay} \\
& B & I & O\\
\midrule
B&15 & 33 & 0\\
I&24 & 226 & 0 \\
O&39 & 94 & 2\\
\bottomrule
\end{tabular}
\hspace{1cm}
\begin{tabular}{c| ccc}
\multicolumn{4}{c}{debate.org} \\
& B & I & O\\
\midrule
B&1 & 24 & 23\\
I&0 & 135 & 115 \\
O&0 & 50 & 85\\
\bottomrule
\end{tabular}
\end{footnotesize}
\end{table}
\begin{table*}[tb]
\centering
\caption{Results of the clustering of arguments of the \emph{debatepedia} data set by topical aspects. \emph{across topics}:~tf-idf scores are computed across topics, \emph{without noise}:~{HDBSCAN} is only evaluated on instances not classified as noise.}
\label{tab:Results:ASUnsupervised}
\begin{small}
\begin{tabular}{@{}l l l cccc l@{}}
\toprule
Embedding & Algorithm & Dim. Reduction & $ARI$ & $Ho$ &$Co$ & $BCubed~F_1$ & Remark\\
\midrule
tf-idf & HDBSCAN & UMAP & 0.076 & 0.343 &0.366& 0.390 & \\
tf-idf & HDBSCAN & UMAP & 0.015 & 0.285 &0.300& 0.341 &\emph{across topics} \\
tf-idf & HDBSCAN & $-$ & \textbf{0.085} & \textbf{0.371} &\textbf{0.409}& \textbf{0.407} & \\
tf-idf & k-means & $-$ & 0.058 & 0.335 &0.362& 0.397 & \\
tf-idf & k-means & $-$ & 0.049 & 0.314 &0.352& 0.402 &\emph{across topics} \\
\midrule
bert-cls & HDBSCAN & UMAP & 0.030 & 0.280 &0.298& 0.357 & \\
bert-cls & HDBSCAN & $-$ & 0.016 & 0.201 &0.324& 0.378 & \\
bert-cls & k-means & $-$ & 0.044 & 0.332 &0.326& 0.369 & \\
\midrule
bert-avg & HDBSCAN & UMAP & 0.069 & 0.321 &0.352& 0.389 & \\
bert-avg & HDBSCAN & $-$ & 0.018 & 0.170 &0.325& 0.381 & \\
bert-avg & k-means & $-$ & 0.065 & 0.337 &0.349& 0.399 & \\
\midrule
tf-idf & HDBSCAN & $-$ & 0.140 & 0.429 &0.451& 0.439 &\emph{without noise}\\
\bottomrule
\end{tabular}
\end{small}
\end{table*}
\textbf{Generalizability.}
We evaluate whether the
model $BILSTM_{cls}^{bert}$, which performed best on the \emph{Student Essay} data set, is able to identify arguments on the \emph{debate.org} data set if trained on the \emph{Student Essay} data set. The results are given in Table~\ref{tab:Results:SegORG}.
Again, the pretrained model performs poorly on \emph{`O'} sentences since not many examples of \emph{`O'} sentences were observed during training. Moreover, applying the pretrained $BILSTM_{cls}^{bert}$ to the \emph{debate.org} data set yields low precision and recall on \emph{`B'} sentences. A likely reason is that, in contrast to the \emph{Student Essay} data set, where arguments often begin with cue words (e.g., \emph{first}, \emph{second}, \emph{however}), the documents in the \emph{debate.org} data set contain cue words less often. %
The results from training the $BILSTM_{cls}^{bert}$ model from scratch on the \emph{debate.org} data set differ considerably from our results on the \emph{Student Essay} data set. As shown in Table \ref{tab:Results:CM}, the confusion matrix for the \emph{debate.org} data set shows that the BiLSTM model has difficulty learning which sentences start an argument. In contrast to the \emph{Student Essay} data set, it cannot exploit peculiarities such as cue words and apparently fails to find other indications for \emph{`B'} sentences. In addition, the distinction between \emph{`I'} and \emph{`O'} sentences remains unclear. These results match our experiences with the annotation of documents in the \emph{debate.org} data set, where it was often difficult to decide whether a sentence forms an argument and to which argument it belongs. This is also reflected by the inter-annotator agreement of 0.24 based on Krippendorff's $\alpha$ on a subset of 20 documents with three annotators.
\textbf{Bottom Line.} Overall, we find that the performance of the argument identification strongly depends on the peculiarities and quality of the underlying data set. For well-curated data sets such as the \emph{Student Essay} data set, the information contained in the current sentence as well as the surrounding sentences yields an accurate identification of arguments. In contrast, data sets with poor structure or colloquial language, as given in the \emph{debate.org} data set, lead to less accurate results.%
\subsubsection{Argument Clustering}
\label{sec:Results:AS}
We now evaluate the argument clustering according to topical aspects (i.e., subtopics) as the final step of the argument search framework,
using the \emph{debatepedia} data set.
We evaluate the performance of the clustering algorithms HDBSCAN and k-means for different
embeddings that yielded the best results in the topic clustering step of our framework.
We perform the clustering of the arguments for each
topic (e.g., \textit{gay marriage}) separately and average the results across the topics. As shown in Table \ref{tab:Results:ASUnsupervised}, we observe that HDBSCAN performs best on tf-idf embeddings with an averaged $ARI$ score of 0.085 while k-means achieves its best performance on \emph{bert-avg} embeddings with an averaged $ARI$ score of 0.065. Using HDBSCAN instead of k-means on tf-idf embeddings yields an improvement in the $ARI$ score of 2.7\%. Using k-means instead of HDBSCAN on \emph{bert-avg} and \emph{bert-cls} embeddings results in slight improvements. %
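The per-topic evaluation protocol (clustering within each ground-truth topic and averaging the $ARI$ across topics) can be sketched as follows; the $ARI$ is computed from the pair-counting contingency table, and the labelings in the test are toy examples.

```python
from math import comb
from collections import Counter

def ari(truth, pred):
    """Adjusted Rand Index from the pair-counting contingency table."""
    pair = lambda c: comb(c, 2)
    nij = Counter(zip(truth, pred))
    a, b = Counter(truth), Counter(pred)
    index = sum(pair(c) for c in nij.values())
    sum_a = sum(pair(c) for c in a.values())
    sum_b = sum(pair(c) for c in b.values())
    expected = sum_a * sum_b / pair(len(truth))
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)

def averaged_ari(truth, pred, topics):
    """Evaluate the clustering per ground-truth topic, average across topics."""
    by_topic = {}
    for t, p, topic in zip(truth, pred, topics):
        by_topic.setdefault(topic, ([], []))
        by_topic[topic][0].append(t)
        by_topic[topic][1].append(p)
    return sum(ari(t, p) for t, p in by_topic.values()) / len(by_topic)
```

Note that the $ARI$ is invariant to relabeling of clusters, so a perfect clustering scores 1.0 regardless of which cluster ids the algorithm assigns.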
\textbf{UMAP.} Applying a UMAP dimensionality reduction step before HDBSCAN outperforms k-means on \emph{bert-avg} embeddings, with an $ARI$ score of 0.069. However, using a UMAP dimensionality reduction in combination with tf-idf slightly reduces the performance.
We find that \emph{bert-avg} embeddings result in slightly better scores than \emph{bert-cls} embeddings when using UMAP.
\textbf{TF-IDF across Topics.} We further evaluate whether computing tf-idf scores within each topic separately leads to a better performance than computing them across all topics in the data set.
The observed slight deviation of the $ARI$ score for k-means and for HDBSCAN in combination with UMAP matches our expectation that the clustering algorithms then focus more on terms which distinguish the arguments within a topic from each other.
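The effect of computing document frequencies within a topic rather than across topics can be illustrated with a toy corpus: a topic-characteristic term such as \emph{mars} occurs in every document of its topic, so its within-topic idf (and hence its tf-idf score) vanishes, while its across-topics idf stays positive. The corpus below is purely illustrative.

```python
import math

def idf(term, docs):
    """Inverse document frequency of an exact token in a document collection."""
    df = sum(term in doc.split() for doc in docs)
    return math.log(len(docs) / df)

topic_docs = ["mars mission cost", "mars mission risk"]       # one topic
all_docs = topic_docs + ["school uniforms", "gene patents"]   # whole data set

# 'mars' appears in every document of its topic, but only in half overall
within = idf("mars", topic_docs)   # log(2/2) = 0.0
across = idf("mars", all_docs)     # log(4/2) > 0
print(within, across)
```

With within-topic document frequencies, the term \emph{mars} carries no weight, so the clustering is driven by the aspect-specific words (\emph{cost} vs. \emph{risk}) instead.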
\textbf{Excluding Noise.} When excluding the HDBSCAN noise clusters (\emph{without noise}), we obtain an $ARI$ score of 0.140 and a $BCubed~F_1$ score of 0.439.
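The \emph{without noise} protocol simply drops all instances that HDBSCAN assigned to the noise cluster (conventionally labeled $-1$) before computing any score; a minimal sketch with toy labels:

```python
def without_noise(truth, pred, noise_label=-1):
    """Drop instances marked as noise before evaluating the clustering."""
    kept = [(t, p) for t, p in zip(truth, pred) if p != noise_label]
    return [t for t, _ in kept], [p for _, p in kept]

truth = [0, 0, 1, 1, 2]
pred = [0, 0, -1, 1, -1]   # two instances fell into the noise cluster
t, p = without_noise(truth, pred)
print(t, p)
```

Any metric (e.g., $ARI$ or $BCubed~F_1$) is then computed on the filtered pair of label lists only.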
\textbf{Number of Arguments.} Figure~\ref{fig:Results:Scatterplot} shows the performance of the models HDBSCAN with tf-idf and k-means with \textit{bert-avg} with respect to the number of arguments in each topic. Both $ARI$ and $BCubed~F_1$ scores show a similar distribution for topics with different numbers of arguments, while the distributions of the homogeneity score show a slight difference for the two models. This indicates that the performance of the clustering algorithms does not depend on the number of arguments.
\textbf{Examples.} In Table \ref{tab:Results:BestTopics}, we show, for the best-performing k-means and HDBSCAN models, the topics with the highest $ARI$ scores.
\begin{figure}[tb]
\centering
\includegraphics[width = 0.83\linewidth]{key_results/AS/hdbscan_scatterplot2}
\includegraphics[width = 0.83\linewidth]{key_results/AS/kmeans-bert_scatterplot2}
\caption{Clustering performance in relation to the number of arguments in each topic,
for HDBSCAN with tf-idf embeddings and k-means with \emph{bert-avg} embeddings.}
\label{fig:Results:Scatterplot}
\end{figure}
\begin{table}[tb]
\centering
\caption{Top 5 topics by $ARI$ score
using HDBSCAN and k-means.} %
\label{tab:Results:BestTopics}
\begin{small}
\begin{tabular}{p{3.9cm} r@{}r}
\toprule
Topic & \#$Clust._{true}$ & \#$Clust._{pred}$ \\
\toprule
\multicolumn{3}{c}{\textbf{HDBSCAN based on tf-idf embeddings}}\\
\midrule
Rehabilitation vs retribution
&2 &3 \\
Manned mission to Mars
&5 &6 \\
New START Treaty
&5 &4 \\
Obama executive order to raise the debt ceiling
&3 &6 \\
Republika Srpska secession from Bosnia and Herzegovina
&4 &6 \\
\toprule
\multicolumn{3}{c}{\textbf{k-means based on \emph{bert-avg} embeddings}}\\
\midrule
Bush economic stimulus plan
&7 &5 \\
Hydroelectric dams
&11 &10 \\
Full-body scanners at airports
&5 &6 \\
Gene patents
&4 &7 \\
Israeli settlements
&3 &4 \\
\bottomrule
\end{tabular}
\end{small}
\end{table}
\textbf{Bottom Line.}
Overall, our results confirm that argument clustering based on topical aspects is nontrivial and that high evaluation scores are still hard to achieve in real-world settings.
Given the \emph{debatepedia} data set, we show that our unsupervised clustering algorithms with the different embedding methods do not yet cluster arguments into topical aspects in a highly consistent and reasonable way. This result is in line with the results of \citet{Reimers2019} stating that even experts have difficulties in identifying argument similarity based on topical aspects (i.e., subtopics).
Considering that their evaluation is based on sentence-level arguments, it seems likely that assessing argument similarity is even harder for arguments comprised of one or multiple sentences.
Moreover, the authors report promising results for the pairwise assessment of argument similarity when using the output corresponding to the BERT $[CLS]$ token.
However, our experiments show that their findings do not apply to the \emph{debatepedia} data set. We assume that this is due to differences in the argument similarity that are introduced by using prevalent topics in the \emph{debatepedia} data set rather than using explicitly annotated arguments.
\section{Introduction}
\subsection{Background}
Learning invertible structures from data is a problem encountered in several fields, ranging from classical to modern ones, in which invertibility is a typical shape constraint on functions. A traditional and well-known application is the \textit{nonparametric calibration problem}: in a nonparametric regression problem with an unknown invertible function, one estimates an input covariate corresponding to an observed response variable. This problem has been studied by \citet{knafl1984nonparametric}, \citet{osborne1991statistical}, \citet{chambers1993bias}, \citet{gruet1996nonparametric}, \citet{tang2011two} and \citet{tang2015two}, and applied in the fields of biology and medicine \citep{tang2011two,tang2015two}. A different application in econometrics is the \textit{nonparametric instrumental variable} problem, developed by \citet{newey2003instrumental} and \citet{horowitz2011applied}. This is an ill-posed problem involving conditional expectations. For instance, \citet{krief2017direct} studies estimation by direct usage of inverse functions. Another application that has developed rapidly in recent years is the \textit{normalizing flow} framework used for generative models in machine learning, developed by \citet{rezende2015variational} and \citet{dinh2017density}. A related problem is the analysis of latent independent components using nonlinear invertible maps~\citep{dinh2014nice,hyvarinen2016unsupervised}. In this problem, an observed distribution is regarded as a latent variable transformed by an unknown invertible function, and this function is learned with an invertible estimator to reconstruct the latent variable (for a review, see \citet{kobyzev2020normalizing}). Several methods have been developed for constructing invertible functions, for example \citet{dinh2014nice}, \citet{papamakarios2017masked}, \citet{kingma2016improved}, \citet{huang2018neural}, \citet{de2020block} and \citet{ho2019flow++}.
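As a concrete example of an invertible-by-construction building block, the additive coupling layer of \citet{dinh2014nice} can be sketched as follows: the first half of the input passes through unchanged, and the second half is shifted by an arbitrary function of the first half, so the inverse simply subtracts the same shift. The quadratic coupling function below is an arbitrary illustrative choice.

```python
def coupling_forward(x, m):
    """Additive coupling (NICE-style): the first half passes through,
    the second half is shifted by m(first half); invertible for any m."""
    h = len(x) // 2
    return x[:h] + [xi + mi for xi, mi in zip(x[h:], m(x[:h]))]

def coupling_inverse(y, m):
    """Exact inverse: subtract the same shift computed from the first half."""
    h = len(y) // 2
    return y[:h] + [yi - mi for yi, mi in zip(y[h:], m(y[:h]))]

m = lambda u: [ui ** 2 + 1.0 for ui in u]   # arbitrary coupling function
x = [0.3, -0.7, 1.2, 0.5]
y = coupling_forward(x, m)
x_rec = coupling_inverse(y, m)
```

Because the shift depends only on the untouched half, invertibility holds regardless of how complex (and non-invertible) the coupling function $m$ itself is; stacking such layers with alternating halves yields expressive invertible maps.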
In the univariate case ($d=1$), the error of invertible estimators has been actively analyzed. In this case, the estimation of invertible functions is related to estimating strictly monotone functions, and there are many related studies in the field of isotonic regression (for a general introduction, see \citet{groeneboom2014nonparametric}). \citet{tang2011two,tang2015two} and \citet{gruet1996nonparametric} study the estimation of an input point $\Bar{\boldsymbol x} =\boldsymbol f^{-1}(\boldsymbol t) \in [-1,1]$ corresponding to an observed output $\boldsymbol t \in \mathbb{R}$ with an invertible function $\boldsymbol f$.
Specifically, \citet{tang2011two} show that an estimator $\hat{\boldsymbol x}$ which is based on the estimation of monotone functions achieves a convergence rate $|\hat{\boldsymbol x} - \bar{\boldsymbol x}| = O_P(n^{-1/3})$, where $n$ is the number of observations, and that their two-step procedure for adaptation improves the rate to a parametric one, $O_P(n^{-1/2})$. They also establish an asymptotic distribution of the pointwise estimator $\hat{\boldsymbol x}$. \citet{krief2017direct} develops an estimator $\tilde{\boldsymbol f}$ for an unknown invertible function $\boldsymbol f_*$, which is written as a conditional expectation with an $r$-times continuously differentiable distribution function, and studies its convergence in terms of the sup-norm $\|\cdot\|_{L^\infty}$ as $\mathbb{E}[\|{\boldsymbol f_*} - \tilde{\boldsymbol f}\|_{L^\infty}^2] = O(n^{-2r/(2r+1)})$. Because this rate is slower than the minimax optimal rate for (even non-invertible) $r$-differentiable functions, it is suggested that this rate is not optimal.
For the multivariate case ($d\geq 2$), there are few studies on error rates, because, unlike in the univariate case, a multivariate invertible function may not be represented by a simple monotone function. Several studies developing normalizing flows, e.g., \citet{huang2018neural,jaini2019sum,teshima2020coupling}, show the universality of their respective flow models. However, these studies do not address quantitative issues, and only a few have investigated the approximation efficiency of simple flows~\citep{pmlr-v108-kong20a}.
Revealing a minimax optimal rate in this invertible setting is a major problem of interest in terms of statistical efficiency. This interest is typically motivated by the fact that several shape constraints often lead to an improvement in their minimax rates. For example, when a true function is unimodal \citep{bellec2018sharp}, convex \citep{guntuboyina2015global}, or log-concave \citep{kim2018adaptation}, the minimax optimal rate is a parametric rate $O(n^{-1/2})$, whereas the ordinary rate without shape constraints is $O(n^{-r/(2r+d)})$ with an input dimension $d$ and smoothness $r$ of a target function. Furthermore, even in the invertible setting, \citet{tang2011two} achieved a parametric rate for the pointwise estimator. Based on these facts, whether the invertibility constraint improves the $L^2$-risk is an important open question to clarify the efficiency of invertible function estimation.
\subsection{Problem Setting with $d=2$}
We consider a nonparametric planar regression problem with an invertible bi-Lipschitz function, and study an invertible estimator for the problem. That is, we set $d=2$ and study the following problem. We define a set of invertible and bi-Lipschitz functions as
\begin{align*}
\flip_{\text{INV}}
:=
\{
\boldsymbol f:I^2 \to I^2
\mid \forall \boldsymbol y \in I^2,
!\exists \boldsymbol x \in I^2 \text{ s.t. }\boldsymbol f(\boldsymbol x)=\boldsymbol y, \text{bi-Lipschitz}
\}
\end{align*}
for $I:=[-1,1]$, where $!\exists$ denotes unique existence; a function $\boldsymbol f$ is called bi-Lipschitz if $L^{-1}\|\boldsymbol x-\boldsymbol x'\|_2 \le \|\boldsymbol f(\boldsymbol x)-\boldsymbol f(\boldsymbol x')\|_2 \le L\|\boldsymbol x-\boldsymbol x'\|_2$ holds for some $L>0$ for any $\boldsymbol x,\boldsymbol x' \in I^2$. An invertible continuous function whose inverse is also continuous is called a homeomorphism.
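The bi-Lipschitz property can be checked numerically for a given map by sampling pairs of points and computing the distortion ratio $\|\boldsymbol f(\boldsymbol x)-\boldsymbol f(\boldsymbol x')\|_2/\|\boldsymbol x-\boldsymbol x'\|_2$. For the linear map below, with singular values $2$ and $1$, every ratio lies in $[1,2]$, so $L=2$ suffices. This is a purely illustrative check, not part of the estimation procedure.

```python
import random

def f(x):
    """A linear bi-Lipschitz map on [-1,1]^2 with singular values 2 and 1."""
    return (2.0 * x[0], 1.0 * x[1])

def dist(u, v):
    return ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5

random.seed(0)
ratios = []
for _ in range(1000):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    z = (random.uniform(-1, 1), random.uniform(-1, 1))
    if x != z:
        ratios.append(dist(f(x), f(z)) / dist(x, z))

# the empirical distortion stays within the singular-value bounds [1, 2]
print(min(ratios), max(ratios))
```

For a general smooth map, the same bounds are governed by the extreme singular values of the Jacobian over the domain.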
For $\boldsymbol f \in \flip_{\text{INV}}$, its inverse $\boldsymbol f^{-1}$ (if it exists) is also bi-Lipschitz (see Lemma~\ref{lem:bi_lipschitz}). Assume we have observations $\mathcal{D}_n:=\{(\boldsymbol X_i,\boldsymbol Y_i)\}_{i=1}^{n} \subset I^2 \times \mathbb{R}^2$ that independently and identically follow the regression model for $i=1,...,n$:
\begin{align}
\boldsymbol Y_i =\boldsymbol f_*(\boldsymbol X_i)+\boldsymbol \varepsilon_i,
\quad
\boldsymbol \varepsilon_i \overset{\text{i.i.d.}}{\sim} N_2(\boldsymbol 0,\sigma^2 \boldsymbol I_2) \label{def:model}
\end{align}
for a true function $\boldsymbol f_* \in \flip_{\text{INV}}$ and $\sigma^2>0$. Let $P_{\boldsymbol X}$ be a marginal measure of $\boldsymbol X_i$, and we assume that $P_{\boldsymbol X}$ has an (absolutely continuous) density whose support is $I^2$.
\subsection{Analysis Framework with Inverse Risk}
The goal is to investigate the difficulty in estimating invertible functions by invertible estimators. To this end, we define an \textit{inverse risk} to evaluate invertible estimators.
For any $\boldsymbol y \in I^2$, $\bar{\boldsymbol f}_n^{-1}(\boldsymbol y)$ denotes $\boldsymbol x \in I^2$ if it satisfies $\bar{\boldsymbol f}_n(\boldsymbol x)=\boldsymbol y$ uniquely, and some constant vector $\boldsymbol c \in \mathbb{R}^2 \setminus I^2$ otherwise. Then, we develop the {inverse} $L^2$-risk as
\[
\risk_{\text{INV}}(\bar{\boldsymbol f}_n,\boldsymbol f_*)
:=
\VERT \bar{\boldsymbol f}_n - \boldsymbol f_* \VERT_{L^2(P_{\boldsymbol X})}^2
+
\psi\left(
\VERT \bar{\boldsymbol f}_n^{-1} - \boldsymbol f_*^{-1} \VERT_{L^2(P_{\boldsymbol X})}
\right)
\]
where $\VERT \boldsymbol f \VERT_{L^2(P_{\boldsymbol X})} := (\sum_{j=1}^2 \int |f_j|^2 \mathrm{d} P_{\boldsymbol X})^{1/2}$ is an $L^2$-norm for vector-valued functions,
and $\psi(\cdot)$ denotes a penalty term on the convergence of the inverse function:
we employ $\psi(z)=z^4$ for theoretical suitability.
By virtue of the penalty term $\psi(\cdot)$, $\risk_{\text{INV}}(\bar{\boldsymbol f}_n,\boldsymbol f_*) \to^p 0$ implies both almost-everywhere invertibility in probability and consistency of $\bar{\boldsymbol f}_n$ and its inverse $\bar{\boldsymbol f}_n^{-1}$. Using this risk, we can discuss constructing invertible estimators in the context of nonparametric regression.
Then, we study the minimax inverse risk of the regression problem, that is, we consider the following value:
\begin{align*}
\inf_{\bar{\boldsymbol f}_n}
\sup_{\boldsymbol f_* \in \flip_{\text{INV}}}
\risk_{\text{INV}}(\bar{\boldsymbol f}_n, \boldsymbol f_*),
\end{align*}
where the infimum with respect to $\bar{\boldsymbol f}_n$ is taken over all measurable estimators depending on $\mathcal{D}_n$. Note that this minimax inverse risk is related to the ordinary minimax risk without the invertibility of estimators: since the penalty term $\psi(\cdot)$ is nonnegative, $ \inf_{\bar{\boldsymbol f}_n} \sup_{\boldsymbol f_* \in \flip_{\text{INV}}} \risk_{\text{INV}}(\bar{\boldsymbol f}_n, \boldsymbol f_*) \geq \inf_{\bar{\boldsymbol f}_n} \sup_{\boldsymbol f_* \in \flip_{\text{INV}}} \mathsf{R}(\bar{\boldsymbol f}_n, \boldsymbol f_*)$ holds with the ordinary $L^2$-risk $ \mathsf{R}(\bar{\boldsymbol f}_n, \boldsymbol f_*) = \VERT \bar{\boldsymbol f}_n - \boldsymbol f_* \VERT_{L^2(P_{\boldsymbol X})}^2$.
\subsection{Approach and Results}
Our analysis of the minimax risk relies on a representation of invertible functions by level-sets.
For an invertible function $\boldsymbol f=(f_1,f_2) \in \flip_{\text{INV}}$,
we represent its inverse as
\begin{align}
\boldsymbol f^{-1}(\boldsymbol y)
= L_{f_1}(y_1) \cap L_{f_2}(y_2)
\label{intro:level_set_rep}
\end{align}
where $L_{f_j}(y_j):=\{\boldsymbol x \in I^2 \mid f_j(\boldsymbol x) = y_j\}$ is a level-set for $y_j \in I$ and $j=1,2$. In this form, we can characterize the invertibility of $\boldsymbol f$ by ensuring the uniqueness of the intersection in \eqref{intro:level_set_rep}. This representation allows the analysis of the smoothness and composition of an invertible estimator.
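The level-set representation \eqref{intro:level_set_rep} suggests a naive way to approximate $\boldsymbol f^{-1}(\boldsymbol y)$ numerically: search a grid of $I^2$ for the point at which both coordinate functions are simultaneously closest to $(y_1, y_2)$. The linear map below is an illustrative invertible example, not one used in our analysis.

```python
def f(x1, x2):
    """An invertible bi-Lipschitz map whose values stay in [-1,1]^2."""
    return ((x1 + x2) / 2.0, (x1 - x2) / 2.0)

def inverse_by_level_sets(y1, y2, step=0.01):
    """Approximate the inverse of f at (y1, y2) by grid search: find the
    grid point closest to both level sets L_f1(y1) and L_f2(y2) at once."""
    best, best_err = None, float("inf")
    n = int(round(2.0 / step))
    for i in range(n + 1):
        x1 = -1.0 + i * step
        for j in range(n + 1):
            x2 = -1.0 + j * step
            v1, v2 = f(x1, x2)
            err = max(abs(v1 - y1), abs(v2 - y2))  # distance to both level sets
            if err < best_err:
                best, best_err = (x1, x2), err
    return best

x = (0.4, -0.6)
x_hat = inverse_by_level_sets(*f(*x))
```

When $\boldsymbol f$ is invertible, the two level sets intersect in exactly one point, so the grid minimizer converges to the true preimage as the step size shrinks.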
Our first main result is developing a lower bound of the minimax inverse risk based on the developed representation. Specifically, we show that with $d=2$:
\begin{align*}
\inf_{\bar{\boldsymbol f}_n}
\sup_{\boldsymbol f_* \in \flip_{\text{INV}}}
\risk_{\text{INV}}(\bar{\boldsymbol f}_n, \boldsymbol f_*) \gtrsim n^{-2/(2+d)}
\end{align*}
with probability larger than $1/2$, where $\gtrsim$ denotes an asymptotic inequality up to constants. This rate corresponds to a minimax rate of estimating (not necessarily invertible) bi-Lipschitz functions.
This result resolves the question of whether invertibility improves the minimax optimal rate negatively. That is, the family of invertible functions is still sufficiently complex, and no rate improvement occurs for the $L^2$-risk when estimating it. This result contrasts with the fact that other shape constraints can improve the minimax rate up to a parametric rate.
Our second main result is an upper bound on the minimax risk.
To this end, we develop a novel estimator for $\boldsymbol f_*$ and derive an upper bound on its inverse risk that matches the lower bound. This estimator employs an arbitrary estimator of $\boldsymbol f_*$ that is minimax optimal in the sense of the standard $L^2$-risk, and amends it to be asymptotically almost everywhere invertible, so as to inherit the rate of convergence. As a result, for $d=2$, we obtain:
\begin{align*}
\inf_{\bar{\boldsymbol f}_n }
\sup_{\boldsymbol f_* \in \flip_{\text{INV}}}
\risk_{\text{INV}}(\bar{\boldsymbol f}_n, \boldsymbol f_*) \asymp n^{-2/(2+d)},
\end{align*}
where $\asymp$ denotes asymptotic equality in probability up to constants and logarithmic factors in $n$. As in the discussion above, this result states that the problem of learning invertible functions has the same minimax rate as that of estimating bi-Lipschitz functions.
\subsection{Symbols and Notations}
$[n]:=\{1,2,\ldots,n\}$ for $n \in \mathbb{N}$. $\mathbbm{1}\{\cdot\}$ denotes the indicator function. For $p\in [1,\infty)$, the norm of a vector $\boldsymbol x = (x_1,...,x_d)$ is defined as $\| \boldsymbol x \|_p := (\sum_{j} |x_j|^p)^{1/p}$, and $\|\boldsymbol x\|_\infty := \max_{j \in [d]} |x_j|$. For a function $f: S \to \mathbb{R}$ and a set $S' \subseteq S$, we define $f(S'):= \{f(x) \mid x \in S'\}$. For a base measure $Q$, $\|f\|_{L^p(Q)} := (\int_S |f(\boldsymbol x)|^p \mathrm{d} Q(\boldsymbol x))^{1/p}$ denotes the $L^p$-norm. For a vector-valued function $\boldsymbol f(\cdot) = (f_1(\cdot),...,f_d(\cdot)): S \to \mathbb{R}^d$, $\VERT \boldsymbol f \VERT_{L^p(Q)} := (\sum_{j=1}^d \int_S |f_j(\boldsymbol x)|^p \mathrm{d} Q(\boldsymbol x))^{1/p}$ denotes its norm. When $Q$ is the Lebesgue measure, we simply write $\|f\|_{L^p}$ and $\VERT f \VERT_{L^p}$. In particular, $\vertinfty{\boldsymbol f}=\max_{j \in [d]}\sup_{\boldsymbol x \in S}|f_j(\boldsymbol x)|$. For any set $S \subset \mathbb{R}^d$, its interior is $S^\circ := \{\boldsymbol x \in \mathbb{R}^d \mid B_{\varepsilon}(\boldsymbol x) \subset S \, \text{ for some }\varepsilon>0\}$, and its boundary is $\partial S := \overline{S} \setminus S^\circ$, where $\overline{S}$ is the closure of $S$. $\mathbb{D}^d:=\{\boldsymbol x \in \mathbb{R}^d \mid \|\boldsymbol x\|_2 \le 1\}$ is the unit ball, and $\mathbb{S}^{d-1}=\{\boldsymbol x \in \mathbb{R}^d \mid \|\boldsymbol x\|_2=1\} \, (=\partial \mathbb{D}^d)$ denotes its surface, the unit sphere. For two sets $X,X' \subset \mathbb{R}^d$, $d_{\text{Haus.}}(X,X'):=\max\{\max_{x \in X} \min_{x' \in X'}\|x-x'\|_2,\max_{x' \in X'} \min_{x \in X}\|x-x'\|_2\}$ denotes the Hausdorff distance. $\pm$ represents a simultaneous relation under a simultaneous sign inversion; for instance, $a(\pm 1)=b(\pm 1)$ means that both $a(1)=b(1)$ and $a(-1)=b(-1)$ hold, but not $a(1)=b(-1)$ or $a(-1)=b(1)$.
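For finite point sets, such as discretized level-sets, the Hausdorff distance above can be computed directly. The following NumPy sketch is purely illustrative and not part of the paper's formal development:

```python
import numpy as np

def hausdorff(X, Xp):
    """Hausdorff distance between two finite point sets in R^d:
    max( max_{x in X} min_{x' in X'} ||x - x'||_2,
         max_{x' in X'} min_{x in X} ||x - x'||_2 )."""
    X, Xp = np.asarray(X, float), np.asarray(Xp, float)
    # pairwise Euclidean distances, shape (|X|, |X'|)
    D = np.linalg.norm(X[:, None, :] - Xp[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Example: two horizontal polylines in I^2 = [-1, 1]^2, offset by 0.5
X  = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
Xp = [(-1.0, 0.5), (0.0, 0.5), (1.0, 0.5)]
print(hausdorff(X, Xp))  # 0.5
```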
\subsection{Organization}
The remainder of the paper is organized as follows. In Section~\ref{sec:level-set_representation}, we characterize the level-sets of invertible functions $\boldsymbol f \in \f_{\text{INV}}$. In Section~\ref{sec:lower_bound_analysis}, we provide a minimax lower bound for the inverse risk. In Section~\ref{sec:upper_bound_analysis}, we propose an invertible estimator and prove that its risk attains an upper bound matching the lower bound up to logarithmic factors. Supporting lemmas, propositions, and proofs of the theorems are provided in the Appendix.
\section{Level-Set Representation on Invertible Function}
\label{sec:level-set_representation}
We consider a characterization of invertible functions using the notion of level-sets. That is, we use an intersection of the level-sets of the component functions of a vector-valued function to define an equivalent condition to invertibility. This approach is different from the commonly used representation of invertible functions by monotonicity \citep{krief2017direct}, local approximation \citep{tang2011two,tang2015two}, or Hessian normalization \citep{rezende2015variational,dinh2017density}.
We consider a vector-valued function $\boldsymbol f: I^2 \to I^2$ with its coordinate-wise representation $\boldsymbol f(\boldsymbol x) = (f_1(\boldsymbol x), f_2(\boldsymbol x))$ for $f_j: I^2 \to I$. For $j=1,2$, we define a level-set of $f_j$ for $y_j \in I$ as
\[
L_{f_j}(y_j):=\{\boldsymbol x \in I^2 \mid f_j(\boldsymbol x) = y_j\}.
\]
The notion of a level-set represents a slice of a function, whose shape depends on the nature of that function. We now define the \textit{level-set representation} of $\boldsymbol f(\boldsymbol x)$.
\begin{definition}[Level-set representation]
For a function $\boldsymbol f = (f_1,f_2): I^2 \to I^2$ and $\boldsymbol y \in I^2$, the level-set representation is defined as
\begin{align}
\boldsymbol f^{\dagger}(\boldsymbol y) := L_{f_1}(y_1) \cap L_{f_2}(y_2)
\label{eq:inverse_intersection}
\end{align}
\end{definition}
\noindent
This representation is defined through the coordinate-wise level-sets of the function $\boldsymbol f$. The existence and nature of the intersection $\boldsymbol f^{\dagger}(\boldsymbol y)$ depend on the nature of $\boldsymbol f$; in turn, the properties of $\boldsymbol f^{\dagger}(\boldsymbol y)$ characterize the invertibility of $\boldsymbol f$.
\begin{proposition}[Level-set representation for an invertible function] \label{prop:equiv_invertible_levelset}
$\boldsymbol f: I^2 \to I^2$ is invertible if and only if $\boldsymbol f^\dagger(\boldsymbol y)$ exists and is uniquely determined for all $\boldsymbol y \in I^2$. Furthermore, if $\boldsymbol f$ is invertible, we have
\[
\boldsymbol f^{-1}(\boldsymbol y) = \boldsymbol f^{\dagger}(\boldsymbol y).
\]
\end{proposition}
\noindent
From this result, if $\boldsymbol f$ is invertible, there exists a corresponding level-set representation. Additionally, the level-sets have tractable geometric properties, which are useful for the subsequent analyses. We discuss these properties in the next section.
We illustrate level-sets $L_{f_1},L_{f_2}$ in Figure~\ref{fig:level_set_intersection}. The orange and purple lines represent $L_{f_1}(y_1)$ and $L_{f_2}(y_2)$, respectively; $\boldsymbol x = \boldsymbol f^{-1}(\boldsymbol y)$ coincides with the intersection $L_{f_1}(y_1)\cap L_{f_2}(y_2)$ as described in eq.~(\ref{eq:inverse_intersection}).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{levelset_rep.pdf}
\caption{Level-sets $L_{f_1}(y_1)$ (orange) and $L_{f_2}(y_2)$ (purple) in $I^2$ for $\boldsymbol f \in \flip_{\text{INV}}$. These provide a level-set representation $\boldsymbol f^{\dagger}$ of $\boldsymbol f$, and the uniqueness of the intersection (black dot) of each level-set ensures invertibility, yielding $\boldsymbol f^{-1}(\boldsymbol y)=\boldsymbol f^{\dagger}(\boldsymbol y)$.}
\label{fig:level_set_intersection}
\end{figure}
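Proposition~\ref{prop:equiv_invertible_levelset} suggests a simple numerical inversion scheme: search for the (unique) intersection of the two level-sets on a fine grid. A minimal sketch with an illustrative invertible shear map on $I^2$ (this map, and its closed-form inverse, are our own example, not a function from the paper):

```python
import numpy as np

# An invertible shear (illustrative): f(x1, x2) = (x1, x2 + 0.25*sin(pi*x1)),
# with exact inverse f^{-1}(y1, y2) = (y1, y2 - 0.25*sin(pi*y1)).
def f(x1, x2):
    return x1, x2 + 0.25 * np.sin(np.pi * x1)

def invert_by_level_sets(y, grid=2001):
    """Approximate f^{-1}(y) as the grid point where both level-set
    conditions f_1(x) = y_1 and f_2(x) = y_2 hold simultaneously."""
    s = np.linspace(-1.0, 1.0, grid)
    X1, X2 = np.meshgrid(s, s, indexing="ij")
    F1, F2 = f(X1, X2)
    # the intersection L_{f1}(y1) ∩ L_{f2}(y2) minimizes the joint residual
    R = np.maximum(np.abs(F1 - y[0]), np.abs(F2 - y[1]))
    i, j = np.unravel_index(np.argmin(R), R.shape)
    return np.array([X1[i, j], X2[i, j]])

y = np.array([0.5, 0.3])
x_hat = invert_by_level_sets(y)
x_true = np.array([y[0], y[1] - 0.25 * np.sin(np.pi * y[0])])
print(x_hat, x_true)  # agree up to the grid resolution
```

This brute-force search is only a numerical illustration of the uniqueness of the intersection; the paper's estimator in Section~\ref{subsec:estimator} is constructed quite differently.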
\subsection{Property of Level-Set by Invertible Function}
We consider an invertible function $\boldsymbol f \in \flip_{\text{INV}}$, whose level-sets $L_{f_j}(y_j)$ have several geometric properties that are critical for the analyses of the minimax inverse risk in Sections~\ref{sec:lower_bound_analysis} and \ref{sec:upper_bound_analysis}.
All results in this section are rigorously proven in Appendix~\ref{sec:supporting_lemmas}.
A level-set admits a parameterization by a one-dimensional parameter $\alpha \in I$:
\begin{lemma} \label{lem:parameterization}
For $\boldsymbol f \in \flip_{\text{INV}}$, the following holds for each $y \in I$:
\begin{align*}
&L_{f_1}(y)
=
\bigcup_{\alpha \in I}
\boldsymbol f^{-1}(y,\alpha), \mbox{~and~}L_{f_2}(y)
=
\bigcup_{\alpha \in I}
\boldsymbol f^{-1}(\alpha,y)
\end{align*}
\end{lemma}
\noindent
This parameterization guarantees the smoothness of level-sets, together with the Lipschitz property of $\boldsymbol f \in \flip_{\text{INV}}$. This property prohibits a ``sharp fluctuation'' in level-set $L_{f_j}$, as shown in Figure~\ref{fig:sharp_fluctuation}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.65\textwidth]{lipshitz_rep.pdf}
\caption{Level-sets in $I^2$. [Left] $L_{f_j}(y)$ \textit{without} the Lipschitz continuity of $f_j$. [Right] $L_{f_j}(y)$ \textit{with} the Lipschitz continuity of $f_j$. If $f_j$ is Lipschitz continuous, the (excessively) sharp fluctuation along one direction, shown in the left panel, does not appear. This property is clarified by the parameterization (Lemma \ref{lem:parameterization}).}
\label{fig:sharp_fluctuation}
\end{figure}
Furthermore, the level-set $L_{f_j}(y)$ shifts continuously with respect to $y \in I$; more specifically, there exists $C \in (0,\infty)$ so that
\[
d_{\text{Haus.}}(L_{f_j}(y),L_{f_j}(y')) \le C |y-y'|
\]
for all $y,y' \in I$ (see Lemma~\ref{lemma:Hausdorff_Lipschitz} in Appendix~\ref{sec:supporting_lemmas}). The level-sets at $y=\pm 1$ are also properly included in the boundary of domain $I^2$: $L_{f_j}(\pm 1) \subset \partial I^2$ (see Lemma~\ref{lem:homeomorphism_boundary} in Appendix~\ref{sec:supporting_lemmas}).
Whereas the representation above identifies the inverse function $\boldsymbol f^{-1}$, the level-set representation of the inverse function recovers the original function $\boldsymbol f$ itself. Writing $\boldsymbol h := \boldsymbol f^{-1}$, so that $\boldsymbol f(\boldsymbol x) = \boldsymbol h^{-1}(\boldsymbol x) = \boldsymbol h^{\dagger}(\boldsymbol x)$, Lemma~\ref{lem:parameterization} shows $L_{h_1}(x_1)=\boldsymbol f(x_1,I)$ and $L_{h_2}(x_2)=\boldsymbol f(I,x_2)$, which leads to
\begin{align}
\boldsymbol f(\boldsymbol x)=\boldsymbol f(x_1,I) \cap \boldsymbol f(I,x_2).
\label{eq:grid_representation}
\end{align}
As $\boldsymbol f(x_1,I)$ and $\boldsymbol f(I,x_2)$ are ($1$-dimensional) curves, they can be regarded as a kind of (skewed) ``grid" of the square $I^2$, identifying the unique point $\boldsymbol y=\boldsymbol f(\boldsymbol x)$ as their intersection. We employ this grid-like level-set representation to construct an invertible estimator in Section~\ref{subsec:estimator}.
\section{Lower Bound Analysis}
\label{sec:lower_bound_analysis}
We develop a lower bound for the minimax risk.
The strategy of the proof is to bound the inverse risk from below by the $L^2$ risk $\mathsf{R}(\bar{\boldsymbol f}_n, \boldsymbol f_*)$ and to construct a certain subset of invertible bi-Lipschitz functions $\tilde{\mathcal{F}} \subset \flip_{\text{INV}}$ as follows:
\begin{align}
\inf_{\bar{\boldsymbol f}_n}
\sup_{\boldsymbol f_* \in \flip_{\text{INV}}}
\risk_{\text{INV}}(\bar{\boldsymbol f}_n, \boldsymbol f_*)\geq
\inf_{\bar{\boldsymbol f}_n}
\sup_{\boldsymbol f_* \in \flip_{\text{INV}}}
\mathsf{R}(\bar{\boldsymbol f}_n, \boldsymbol f_*) \geq \inf_{\bar{\boldsymbol f}_n}
\sup_{\boldsymbol f \in \tilde{\mathcal{F}}}
\mathsf{R}(\bar{\boldsymbol f}_n, \boldsymbol f). \label{ineq:risks}
\end{align}
Then, we derive a lower bound on the right-hand side by applying an information-theoretic approach (see, e.g., Section 2 in \citet{tsybakov2008introduction}). In this case, we need to verify that the set of functions has a sufficiently large number of elements, each of which is reasonably distant from the others. Details are provided in the following subsections.
\subsection[Construction of Subset of Flipinv]{Construction of Subset of $\flip_{\text{INV}}$}
\label{subsec:subset_of_flipinv}
We construct a subset of $\flip_{\text{INV}}$ with which we obtain the minimax lower bound of the inverse risk in Section~\ref{subsec:lower_bound}. To this end, we first define a set of functions $\Xi^2_k:=\{\xi_{\theta}(\boldsymbol x)=x_k+\chi_{\theta}(\boldsymbol x):I^2 \to I\}$, then set $\mathcal{F}(\{\Xi^2_k\}_k):=\{\boldsymbol f=(f_1,f_2) \mid f_k \in \Xi^2_k,\, k =1,2\} \subset \flip_{\text{INV}}$.
Let $m \in \mathbb{N}$ and let $M>2m$. Using a hyperpyramid-type basis function $\Phi:\mathbb{R}^2 \to [ 0,1]$
\[
\Phi(\boldsymbol x)
=
\begin{cases}
\min_{\tilde{\boldsymbol x} \in \partial I^2}\|\boldsymbol x-\tilde{\boldsymbol x}\|_2
& (\boldsymbol x \in I^2) \\
0 & (\text{otherwise}) \\
\end{cases},
\]
and grid points $t_j:=-1+\frac{2j-1}{m} \in I$ ($j=1,2,\ldots,m$), we define the bi-Lipschitz function as
\[
\chi_{\theta}(\boldsymbol x)
=
\sum_{j_1=1}^m \sum_{j_{2}=1}^m \frac{\theta_{j_1,j_{2}} }{M} \Phi \left( m\left(x_1 - t_{j_1}\right),m\left(x_{2} - t_{j_{2}}\right) \right)
:
I^2 \to [0,1/M],
\]
parameterized by binary matrix $\theta=(\theta_{j_1,j_2}) \in \Theta_m^{\otimes 2}$ ($\Theta_m:=\{0,1\}^m$). Using function $\chi_{\theta}$, we define a function class:
\begin{align}
\Xi^2_k
:=
\{
\xi_{\theta}(\boldsymbol x)
:=
x_k + \chi_{\theta}(\boldsymbol x):
I^2 \to I
\, \mid \,
\theta \in \Theta_m^{\otimes 2}
\},
\label{eq:Xidk}
\end{align}
for $k=1,2$. See Figure~\ref{fig:xi_theta} for an illustration of function $\xi_{\theta} \in \Xi^2_k$. Using function set $\Xi^2_k$ defined in (\ref{eq:Xidk}), we define the function class as
\[
\mathcal{F}(\{\Xi^2_k\}_k)
:=
\{
\boldsymbol f=(f_1,f_2):I^2 \to I^2
\mid
f_k \in \Xi^2_k, \: k =1,2
\}.
\]
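The basis function $\Phi$ and the perturbation $\chi_{\theta}$ can be evaluated directly. The following sketch uses $m=3$, $M=6$, and the $\theta$ from the caption of Figure~\ref{fig:xi_theta}, and numerically confirms that $\chi_{\theta}$ takes values in $[0,1/M]$ (an illustration only, not part of the proof):

```python
import numpy as np

def Phi(x1, x2):
    """Hyperpyramid basis: distance to the boundary of I^2 inside I^2, else 0.
    For the square [-1,1]^2 this distance is min(1-|x1|, 1-|x2|)."""
    inside = (np.abs(x1) <= 1) & (np.abs(x2) <= 1)
    return np.where(inside, np.minimum(1 - np.abs(x1), 1 - np.abs(x2)), 0.0)

def chi(x1, x2, theta, M):
    """chi_theta: sum of scaled pyramid bumps at grid points t_j = -1+(2j-1)/m."""
    m = theta.shape[0]
    t = -1 + (2 * np.arange(1, m + 1) - 1) / m
    out = np.zeros_like(np.asarray(x1, float))
    for j1 in range(m):
        for j2 in range(m):
            out += theta[j1, j2] / M * Phi(m * (x1 - t[j1]), m * (x2 - t[j2]))
    return out

m, M = 3, 6
theta = np.zeros((m, m))
theta[0, 1] = theta[1, 2] = theta[2, 0] = theta[2, 2] = 1  # as in the figure
s = np.linspace(-1, 1, 201)
X1, X2 = np.meshgrid(s, s, indexing="ij")
C = chi(X1, X2, theta, M)
print(C.min(), C.max())  # range stays within [0, 1/M]
```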
\begin{figure}[!ht]
\centering
\begin{minipage}{0.46\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{chi_new.pdf}
\caption{$\xi_{\theta}(x_1,x_2)=x_1+\chi_{\theta}(x_1,x_2)$ for $k=1,m=3,M=6$.
The entries in matrix $\theta \in \{0,1\}^{3 \times 3}$ are $\theta_{1,2}=\theta_{2,3}=\theta_{3,1}=\theta_{3,3}=1$, and $0$ otherwise.\label{fig:xi_theta}}
\end{minipage}
\hfill
\begin{minipage}{0.46\textwidth}
\centering
\vspace{1em}
\includegraphics[width=0.6\textwidth]{level_set_intersection_d=2.pdf}
\vspace{1em}
\caption{Level-sets $L_{f_1},L_{f_2}$. Their slopes are restricted so that the intersection is unique; hence, invertibility is guaranteed.\label{fig:levelset_finv}}
\end{minipage}
\end{figure}
We establish this invertibility by applying the level-set representation. Since each function $f_k(\boldsymbol x) = x_k + \chi_{\theta}(\boldsymbol x) \in \Xi^2_k$ is piecewise linear, its level-set $L_{f_k}(y_k)$ is also piecewise linear with small slopes. We can then prove the uniqueness of the level-set representation $\boldsymbol f^{\dagger}(\boldsymbol y)$, which implies the invertibility of $\boldsymbol f$.
We summarize the result as follows.
\begin{proposition}
\label{prop:FXi_ivnertible}
$\mathcal{F}(\{\Xi^2_k\}_k) \subset \flip_{\text{INV}}$.
\end{proposition}
\subsection{Minimax Lower Bound of the Inverse Risk}
\label{subsec:lower_bound}
We derive the minimax lower bound for the inverse risk by applying the above result.
Applying the information-theoretic approach to the subset $\mathcal{F}(\{\Xi^2_k\}_k)$ yields the following theorem.
\begin{theorem}
\label{thm:main_lower}
For $d=2$, there exists $C_* > 0$ so that, with probability larger than $1/2$, we obtain:
\begin{align*}
\inf_{\bar{\boldsymbol f}_n}
\sup_{\boldsymbol f_* \in \flip_{\text{INV}}}
\mathsf{R}(\bar{\boldsymbol f}_n, \boldsymbol f_*)
\geq C_* n^{-2/(2 + d)}.
\end{align*}
\end{theorem}
\noindent
This lower bound indicates that imposing invertibility on the true function does not improve estimation efficiency in the minimax sense, because the rate $n^{-2/(2 + d)}$ is identical to the minimax rate for estimating (non-invertible) Lipschitz functions (see \cite{tsybakov2008introduction}). Although the set $\flip_{\text{INV}}$ is smaller than the set of Lipschitz functions, the estimation difficulty is equivalent in this sense.
We also derive a lower bound for the inverse risk based on the above results.
By relation \eqref{ineq:risks}, the following corollary holds without further proof:
\begin{corollary}
For $d=2$, there exists $C_* > 0$ so that, with probability larger than $1/2$, we obtain:
\begin{align*}
\inf_{\bar{\boldsymbol f}_n}
\sup_{\boldsymbol f_* \in \flip_{\text{INV}}}
\risk_{\text{INV}}(\bar{\boldsymbol f}_n, \boldsymbol f_*)
\geq C_* n^{-2/(2 + d)}.
\end{align*}
\end{corollary}
\noindent
This result implies that the efficiency of estimators preserving invertibility, such as normalizing flows, coincides in this sense with that of estimators without the invertibility constraint.
\section{Upper Bound Analysis}
\label{sec:upper_bound_analysis}
In this section, we derive an upper bound on the minimax inverse risk.
To this end, we define an estimator $\hat{\boldsymbol f}_n$ that is almost everywhere invertible in an asymptotic sense. Note that this estimator is constructed to prove the existence of an estimator achieving the upper bound, not to serve as a high-performance method in practice.
\subsection{Idea for Invertible Estimator}
\label{subsec:estimator_outline}
Our proposed estimator is constructed by partitioning the domain $I^2$ and the range $I^2$, and combining local bijective maps between pieces of the partitions. To develop the partitions and bijective maps, we introduce (i) a coherent rotation for $\boldsymbol f_*$ and (ii) two types of partitions of $I^2$, by squares and by quadrilaterals. In this section, we introduce these techniques in preparation.
\subsubsection{Coherent Rotation}
First, we introduce an invertible function $\boldsymbol g_*: I^2 \to I^2$ whose level-sets at the endpoints coincide with the edges of $I^2$, that is, $\boldsymbol g_*(\pm 1,I) = (\pm 1,I)$ and $\boldsymbol g_*(I,\pm 1)=(I,\pm 1)$ hold. Such a $\boldsymbol g_*$ is used to define a partition of $I^2$ into quadrilaterals; see Figure~\ref{fig:gstar} for an illustration.
To this end, we define a bi-Lipschitz invertible rotation map $\boldsymbol \rho: I^2 \to I^2$ and obtain $\boldsymbol g_*$ as follows:
\begin{lemma}
\label{lemma:coherent_rotation}
There exists an invertible map $\boldsymbol \rho \in \flip_{\text{INV}}$ so that an invertible function
\begin{align}
\boldsymbol g_* = (g_{1},g_{2}) := \boldsymbol \rho \circ \boldsymbol f_* \in \flip_{\text{INV}}
\end{align}
satisfies $\boldsymbol g_*(\pm 1,I) = (\pm 1,I)$ and $\boldsymbol g_*(I,\pm 1)=(I,\pm 1)$.
\end{lemma}
\noindent
We refer to $\boldsymbol \rho$ as a \textit{coherent rotation}. We provide a specific form of $\boldsymbol \rho$ in Appendix~\ref{subsec:coherent_rotation}, and the proof of Lemma~\ref{lemma:coherent_rotation} is shown in Appendix~\ref{subsec:proof_of_lemma:coherent_rotation}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{gstar2.pdf}
\caption{(Left) Level-sets of $\boldsymbol g_*=\boldsymbol \rho \circ \boldsymbol f_*$, whose endpoints are aligned with a square $I^2$ by the coherent rotation. (Right) Partition of $I^2$ into quadrilaterals. Since the endpoint level-set of $\boldsymbol g_*$ is aligned to the endpoint of $I^2$, the partition is well-defined.
}
\label{fig:gstar}
\end{figure}
\subsubsection[Two Partitions of I2]{Two Partitions of $I^2$} \label{sec:partition}
We develop two types of partitions of $I^2$ in order to construct local bijective maps between pieces of the partitions, which we then combine into an invertible function.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\textwidth]{interpolation_square.pdf}
\caption{Two partitions of $I^2$. The left $I^2$ is partitioned into squares $\square$, and the right $I^2$ into quadrilaterals $\Diamond$. Each quadrilateral $\Diamond$ is defined by the vertices $\{\boldsymbol x',\boldsymbol x'',\boldsymbol x''',\boldsymbol x''''\}$ of $\square$ mapped by $\boldsymbol g_*$.}
\label{fig:interpolation}
\end{figure}
The first partition is defined by a grid on $I^2$.
We consider the set of grid points $\hat{I}:=\{0,\pm 1/t,\pm 2/t,\ldots,\pm (t-1)/t,\pm 1\}$ ($t \in \mathbb{N}$) and the grid $\hat{I}^2:=\hat{I} \times \hat{I}$, then consider the squares formed by the grid
\[
\square := [\tau_1/t,(\tau_1 + 1)/t] \times [\tau_2/t,(\tau_2 + 1)/t] \subset I^2,
\]
for each $\tau_1,\tau_2 \in \{-t,-t+1,\ldots,-1,0,1,\ldots,t-2,t-1\}$. For each $\square$, we choose the four points $\nu (\square) := \{\boldsymbol x', \boldsymbol x'', \boldsymbol x''', \boldsymbol x''''\} \subset \hat{I}^2$ that are the vertices of $\square$; starting from the vertex $\boldsymbol x'$ closest to $(1,1)$, we label the remaining vertices along a clockwise path $\boldsymbol x' \to \boldsymbol x'' \to \boldsymbol x''' \to \boldsymbol x''''$ along the boundary of $\square$. The set of all $\square$ forms a straightforward partition of $I^2$.
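The first partition and the clockwise labeling of $\nu(\square)$ can be sketched as follows (an illustration of the enumeration only):

```python
import numpy as np

def squares_with_vertices(t):
    """Enumerate the squares of the first partition of I^2 = [-1,1]^2 and,
    for each square, its vertices nu(square) on a clockwise path starting
    from the vertex closest to (1, 1)."""
    squares = []
    for tau1 in range(-t, t):
        for tau2 in range(-t, t):
            lo1, hi1 = tau1 / t, (tau1 + 1) / t
            lo2, hi2 = tau2 / t, (tau2 + 1) / t
            # clockwise from the corner closest to (1, 1):
            # (hi,hi) -> (hi,lo) -> (lo,lo) -> (lo,hi)
            nu = [(hi1, hi2), (hi1, lo2), (lo1, lo2), (lo1, hi2)]
            squares.append(nu)
    return squares

sq = squares_with_vertices(t=2)
print(len(sq))  # (2t)^2 = 16 squares tile I^2
```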
The second partition is developed by the first partition and $\boldsymbol g_*$. Intuitively, using the level-set representation
$
\boldsymbol g_*(\boldsymbol x)=\boldsymbol g_*(x_1,I) \cap \boldsymbol g_*(I,x_2)
$
in (\ref{eq:grid_representation}) and $\boldsymbol g_*(\pm 1,I) = (\pm 1,I), \boldsymbol g_*(I,\pm 1)=(I,\pm 1)$ in Lemma \ref{lemma:coherent_rotation}, we consider quadrilaterals in $I^2$ generated by $\{\boldsymbol g_*(x_1,I)\}_{x_1}$ and $\{\boldsymbol g_*(I,x_2)\}_{x_2}$ as shown in Figure~\ref{fig:gstar} (right). Formally, we define a quadrilateral $\Diamond$ corresponding to $\square$ from the first partition as
\begin{align*}
\Diamond := \text{the quadrilateral whose vertices are } \boldsymbol g_*(\nu(\square)).
\end{align*}
Figure \ref{fig:interpolation} (right) illustrates the quadrilaterals. The set of $\Diamond$ forms a partition of $I^2$, provided the quadrilaterals are not twisted (see Remark \ref{remark:twist}). Moreover, $\Diamond$ serves as an approximation of $\boldsymbol g_*(\square) \subset I^2$.
\begin{remark}[Twist of quadrilaterals $\Diamond$] \label{remark:twist}
If $\Diamond$ is twisted as in Figure \ref{fig:twisted_quadrilateral} (left), the partition is not well-defined.
However, when the grid for $\square$ is sufficiently fine, i.e., $t$ is sufficiently large, the twisted quadrilaterals vanish in the sense of the Lebesgue measure (see Figure~\ref{fig:twisted_quadrilateral} (right)).
Since we let $t \to \infty$ as $n$ increases when developing the estimator, the effect of twisted quadrilaterals is asymptotically negligible in the estimation result.
Hence, we assume that there is no twist, to simplify the discussion.
We provide details of the twist in Appendix~\ref{subsec:twist}.
\end{remark}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{levelset_twisted2.pdf}
\caption{
The twisted quadrilateral in $I^2$ (the green region on the left) disappears as the partition by squares becomes finer (the yellow quadrilateral on the right). The yellow and blue curves are level-sets of $\boldsymbol g_*$. As $t$ increases, the twists vanish or become negligibly small.
}
\label{fig:twisted_quadrilateral}
\end{figure}
Using the partitions, we can develop an invertible approximator of $\boldsymbol g_*$.
For each $\square$ and its corresponding $\Diamond$, we can easily find a local bijective map $\boldsymbol g_\square: \square \to \Diamond$ (its explicit construction is provided in Section \ref{sec:proof_upper_d2}). We then combine these maps and define an invertible function $\boldsymbol g_*^{\dagger}: I^2 \to I^2$ as $\boldsymbol g_*^{\dagger}(\boldsymbol x):= \boldsymbol g_{\square_{\boldsymbol x}} (\boldsymbol x)$,
where $\square_{\boldsymbol x}$ is the square $\square$ containing $\boldsymbol x$. Since the partitions satisfy $\cup_i \square_i = \cup_i \Diamond_i=I^2$, $\boldsymbol g_*^{\dagger}$ is invertible. Furthermore, $\boldsymbol g_*^{\dagger}$ converges to $\boldsymbol g_*$ as $t$ increases to infinity. In the following section, we propose an invertible estimator through estimation of $\boldsymbol \rho$ and $\boldsymbol g_*^{\dagger}$.
\subsection{Invertible Estimator}
\label{subsec:estimator}
We propose an invertible estimator $\hat{\boldsymbol f}_n$ by the following two steps:
(i) we develop estimators $\hat{\boldsymbol \rho}_n$ for $\boldsymbol \rho$ and $\hat{\boldsymbol g}_n^\dagger$ for $\boldsymbol g_*^{\dagger}$, using a pilot estimator $\hat{\boldsymbol f}_n^{(1)}$ (e.g., a kernel smoother) that is consistent but not necessarily invertible, and
(ii) we define the proposed estimator as $\hat{\boldsymbol f}_n :=\hat{\boldsymbol \rho}_n^{-1} \circ \hat{\boldsymbol g}_n^{\dagger}$.
In preparation, we first introduce the following assumption on the pilot estimator:
\begin{assumption} \label{asmp:uniform_estimator}
There exists an estimator $\hat{\boldsymbol f}_n^{(1)}: I^2 \to I^2$ and $C > 0$ so that
\[
\mathbb{P}\left(\vertinfty{ \hat{\boldsymbol f}_{n}^{(1)} - \boldsymbol f_{*} } \leq C(n^{-1/(2+d)}(\log n)^{\alpha})\right) \geq 1-\delta_n
\]
holds for sufficiently large $n$, with some $\alpha > 0$ and a sequence $\delta_n \searrow 0$ as $n \to \infty$.
\end{assumption}
\noindent
Several estimators are known to satisfy this assumption with various $\alpha$, for example, kernel methods (\cite{tsybakov2008introduction}), nearest neighbor methods (\cite{devroye1978uniform,devroye1994strong}), and Gaussian process methods (\cite{yoo2016supremum, yang2017frequentist}). In some cases, it is necessary to restrict their ranges to $I^2$ by clipping. Note that this assumption does not guarantee the invertibility of $\hat{\boldsymbol f}_n^{(1)}$, as the following proposition shows:
\begin{proposition} \label{prop:inpossibility_first_step}
There exists an estimator $\hat{\boldsymbol f}_n^{(1)}$ satisfying Assumption \ref{asmp:uniform_estimator} and a constant $c > 0$ such that $\risk_{\text{INV}}(\hat{\boldsymbol f}_n^{(1)}, \boldsymbol f_*) > c$ holds for some $\boldsymbol f_* \in \flip_{\text{INV}}$ and any $n \in \mathbb{N}$.
\end{proposition}
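As a concrete instance of a pilot estimator in the spirit of Assumption~\ref{asmp:uniform_estimator}, the following sketch implements a $k$-nearest-neighbor regressor with range clipping; the target function, sample size, noise level, and $k$ below are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_star(x):
    """Illustrative invertible target (a smooth shear on I^2)."""
    return np.stack([x[..., 0],
                     x[..., 1] + 0.25 * np.sin(np.pi * x[..., 0])], axis=-1)

# training data: x_i ~ U(I^2), y_i = f_*(x_i) + Gaussian noise
n = 2000
X = rng.uniform(-1, 1, size=(n, 2))
Y = f_star(X) + rng.normal(0, 0.05, size=(n, 2))

def knn_pilot(x_query, X, Y, k=10):
    """k-NN regression, clipped to I^2 so that the range matches the domain;
    such a pilot estimator is generally NOT invertible."""
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argpartition(d, k)[:k]          # indices of the k nearest points
    return np.clip(Y[idx].mean(axis=0), -1.0, 1.0)

x0 = np.array([0.2, -0.3])
print(knn_pilot(x0, X, Y), f_star(x0))  # close for moderate n
```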
We develop the invertible estimator $\hat{\boldsymbol f}_n$ by the following procedure:
\begin{enumerate}
\item[(i-a)]
\textbf{Estimator for $\boldsymbol \rho$}: We develop an invertible estimator $\hat{\boldsymbol \rho}_n$ for $\boldsymbol \rho$ such that $\hat{\boldsymbol \rho}_n(\hat{\boldsymbol f}_n^{(1)}(\tilde{\boldsymbol x})) \approx \tilde{\boldsymbol x}$ for all vertices $\tilde{\boldsymbol x} \in \{\pm 1\} \times \{\pm 1\}$ of $I^2$ (a formal definition of $\hat{\boldsymbol \rho}_n$ is provided in Appendix~\ref{subsec:estimator_rho}).
\item[(i-b)] \textbf{Estimator for $\boldsymbol g_*$}:
We define an estimator $\hat{\boldsymbol g}_n(\boldsymbol x):=\mathfrak{P}\hat{\boldsymbol \rho}_n(\hat{\boldsymbol f}_n^{(1)}(\boldsymbol x))$ for $\boldsymbol g_*$, where $\mathfrak{P}$ constrains $\hat{\boldsymbol g}_n(\boldsymbol x)$ to an edge of the range $I^2$ when $\boldsymbol x$ lies on the boundary of the domain $I^2$: $\mathfrak{P}$ replaces $\tilde{y}_1$ in $\tilde{\boldsymbol y}=(\tilde{y}_1,\tilde{y}_2):=(\hat{\boldsymbol \rho}_n \circ \hat{\boldsymbol f}^{(1)}_n)(\boldsymbol x)$ with $\pm 1$ if $x_1=\pm 1$, and $\tilde{y}_2$ with $\pm 1$ if $x_2=\pm 1$. This operator $\mathfrak{P}$ is necessary to ensure that $\hat{\boldsymbol g}_n$ has range $I^2$. Note that $\hat{\boldsymbol g}_n$ is not always invertible.
\item[(i-c)] \textbf{Invertible estimator for ${\boldsymbol g}_*^\dagger$}: We develop an invertible estimator $\hat{\boldsymbol g}_n^\dagger$ for ${\boldsymbol g}_*^\dagger$ by estimating the partition $\Diamond$ using $\hat{\boldsymbol g}_n$. For $\boldsymbol x \in I^2$, let $\square = \square_{\boldsymbol x}$ be a square containing $\boldsymbol x$ and $\nu(\square) = \{\boldsymbol x',\boldsymbol x'',\boldsymbol x''',\boldsymbol x''''\}$ be a set of its vertices, and we estimate its corresponding quadrilateral $\Diamond$ by its estimator $\hat{\Diamond}$ using $\hat{\boldsymbol g}_n(\nu(\square))$. Then, we develop a bijective map between $\square$ and $\hat{\Diamond}$.
Let $\boldsymbol s:=(\boldsymbol x'+\boldsymbol x''+\boldsymbol x'''+\boldsymbol x'''')/4 \in I^2$ and suppose $\boldsymbol x',\boldsymbol x''$ are the two vertices closest to $\boldsymbol x$. As there exists a unique pair $(\alpha',\alpha'') \in [0,1]^2$ satisfying $\alpha'+\alpha'' \le 1$ and $\boldsymbol x = \boldsymbol s + \alpha' \{\boldsymbol x'-\boldsymbol s\} + \alpha'' \{\boldsymbol x''-\boldsymbol s\}$, we define the estimator $\hat{\boldsymbol g}_n^{\dagger}$ for ${\boldsymbol g}_*^{\dagger}$ on $\hat{\Diamond}$ by a {triangle interpolation}:
\begin{align}
\hat{\boldsymbol g}_n^{\dagger}(\boldsymbol x):=\hat{\boldsymbol g}_n(\boldsymbol s) + \alpha' \{\hat{\boldsymbol g}_n(\boldsymbol x')-\hat{\boldsymbol g}_n(\boldsymbol s)\} + \alpha'' \{\hat{\boldsymbol g}_n(\boldsymbol x'')-\hat{\boldsymbol g}_n(\boldsymbol s)\}.
\label{eq:interpolation}
\end{align}
$\hat{\boldsymbol g}_n^{\dagger}$ is bijective within each pair of $\square$ and $\hat{\Diamond}$, and hence invertible. See Figure~\ref{fig:triangle_interpolation} for an illustration.
\item[(ii)] \textbf{Invertible estimator for ${\boldsymbol f}_*$}:
We define the estimator for $\boldsymbol f_*$ as
\[
\hat{\boldsymbol f}_n
:=
\hat{\boldsymbol \rho}_n^{-1} \circ \hat{\boldsymbol g}_n^{\dagger}.
\]
Since $\hat{\boldsymbol \rho}_n$ and $\hat{\boldsymbol g}_n^{\dagger}$ are invertible, the invertibility of $\hat{\boldsymbol f}_n$ is assured. See Appendix~\ref{sec:proofs_for_upper_bound_analysis} for details.
\end{enumerate}
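The triangle interpolation in step (i-c) is computationally simple: the coefficients $(\alpha',\alpha'')$ solve a $2\times 2$ linear system, after which \eqref{eq:interpolation} is a weighted sum. A minimal sketch, where \texttt{g\_hat} stands in for $\hat{\boldsymbol g}_n$ (here an affine map, an illustrative choice for which the interpolation is exact):

```python
import numpy as np

def triangle_interpolate(x, s, xp, xpp, g_hat):
    """Map x in the triangle (s, x', x'') to the image triangle
    (g(s), g(x'), g(x'')): solve x - s = a'(x' - s) + a''(x'' - s)
    for (a', a''), then apply the same weights to the image vertices."""
    A = np.column_stack([xp - s, xpp - s])   # 2x2 system matrix
    a = np.linalg.solve(A, x - s)            # (alpha', alpha'')
    gs, gp, gpp = g_hat(s), g_hat(xp), g_hat(xpp)
    return gs + a[0] * (gp - gs) + a[1] * (gpp - gs)

# Example with an affine g_hat (interpolation reproduces g_hat exactly):
g_hat = lambda z: np.array([[1.0, 0.3], [0.0, 1.0]]) @ z + 0.1
s   = np.array([0.25, 0.25])                 # square center
xp  = np.array([0.5, 0.5])                   # nearest vertex x'
xpp = np.array([0.5, 0.0])                   # second nearest vertex x''
x   = np.array([0.4, 0.3])                   # point inside the triangle
print(triangle_interpolate(x, s, xp, xpp, g_hat))
print(g_hat(x))                              # identical for affine g_hat
```

For a nonlinear $\hat{\boldsymbol g}_n$, the interpolation is only an approximation within each triangle, which is the source of the discretization error controlled by $t$.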
\begin{figure}
\includegraphics[width=0.9\textwidth]{triangle_interpolation.pdf}
\caption{Triangle interpolation $\hat{\boldsymbol g}_n^{\dagger}(\boldsymbol x):=\hat{\boldsymbol g}_n(\boldsymbol s) + \alpha' \{\hat{\boldsymbol g}_n(\boldsymbol x')-\hat{\boldsymbol g}_n(\boldsymbol s)\} + \alpha'' \{\hat{\boldsymbol g}_n(\boldsymbol x'')-\hat{\boldsymbol g}_n(\boldsymbol s)\}$ for $\boldsymbol x = \boldsymbol s + \alpha' \{\boldsymbol x'-\boldsymbol s\} + \alpha'' \{\boldsymbol x''-\boldsymbol s\}$.
}
\label{fig:triangle_interpolation}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{estimator_k=1.pdf}
\includegraphics[width=\textwidth]{estimator_k=2.pdf}
\caption{Heatmap of the true function $\boldsymbol f_*=(f_{*,1},f_{*,2})$ and its invertible estimator $\hat{\boldsymbol f}_n=(\hat{f}_{n,1},\hat{f}_{n,2})$ with $t=1,3,5$.}
\label{figestimators_heatmap_for_different_t}
\end{figure}
\paragraph{Numerical experiments:}
We experimentally demonstrate the proposed estimator $\hat{\boldsymbol f}_n$. We set a true function
\[
\boldsymbol f_*(\boldsymbol x):=\boldsymbol v(\|\omega(\boldsymbol x)\|_2^{|\sin(\vartheta(\omega(\boldsymbol x)))|},\vartheta(\omega(\boldsymbol x))) \in \flip_{\text{INV}}
\]
where the functions $\omega,\vartheta,\boldsymbol v$ are defined in Appendix~\ref{subsec:symbols}. We generated $n=10^4$ covariates $\boldsymbol x_i \overset{\text{i.i.d.}}{\sim} U(I^2)$ and outcomes $\boldsymbol y_i \overset{\text{i.i.d.}}{\sim} N(\boldsymbol f_*(\boldsymbol x_i),\sigma^2\boldsymbol I_2)$, and conducted the above estimation procedure with $\sigma^2 \in \{10^{-3}, 10^{-1}\}$. Specifically, we employed $k$-nearest neighbor regression ($k=10$, clipped to restrict the range to $I^2$) for the pilot estimator $\hat{\boldsymbol f}_n^{(1)}$.
We note that we use bilinear interpolation for calculating $\hat{\boldsymbol g}_n^{\dagger}$, which coincides with the triangle interpolation \eqref{eq:interpolation} in this setting.
We plot the heatmaps of $\boldsymbol f_*=(f_{*,1},f_{*,2})$ and $\hat{\boldsymbol f}_n=(\hat{f}_{n,1},\hat{f}_{n,2})$ for $t \in \{1,3,5\}$ and $\sigma^2 = 10^{-3}$ in Figure \ref{figestimators_heatmap_for_different_t}; $\hat{\boldsymbol f}_n$ approaches $\boldsymbol f_*$ as $t$ increases. We further plot the heatmaps of $f_{*,1}$, $\hat{f}_{n,1}^{(1)}$, $\hat{g}_{n,1}$, $\hat{g}_{n,1}^{\dagger}$ and $\hat{f}_{n,1}$ with $t=3$ and $\sigma^2 \in \{10^{-3}, 10^{-1}\}$ in Figure~\ref{fig:estimators_heatmap}. We can verify that (i) $\hat{g}_{n,1}$ and $\hat{g}_{n,1}^{\dagger}$ have level-sets aligned with the edges of $I^2$, and (ii) $\hat{g}_{n,1}^{\dagger}$ and $\hat{f}_{n,1}$ have level-sets with milder slopes than those of $\hat{f}_{n,1}^{(1)}$ and $\hat{g}_{n,1}$, which is suitable for invertibility.
\begin{figure}[!ht]
\centering
$\sigma^2 = 10^{-3}$
\begin{minipage}{0.18\textwidth}
\includegraphics[width=\textwidth]{groundtruth.pdf}
\subcaption{$f_{*,1}$}
\label{fig:groundtruth}
\end{minipage}
\vline
\hspace{0.5em}
\begin{minipage}{0.18\textwidth}
\includegraphics[width=\textwidth]{first_step.pdf}
\subcaption{$\hat{f}^{(1)}_{n,1}$}
\label{fig:first_step}
\end{minipage}
\begin{minipage}{0.18\textwidth}
\includegraphics[width=\textwidth]{g.pdf}
\subcaption{$\hat{g}_{n,1}$}
\label{fig:rotated}
\end{minipage}
\begin{minipage}{0.18\textwidth}
\includegraphics[width=\textwidth]{g_dagger_t=3.pdf}
\subcaption{$\hat{g}_{n,1}^{\dagger}$}
\label{fig:smoothed}
\end{minipage}
\vline \vspace{0.5em}
\begin{minipage}{0.18\textwidth}
\includegraphics[width=\textwidth]{second_step_t=3.pdf}
\subcaption{$\hat{f}_{n,1}$}
\label{fig:second_step}
\end{minipage}
$\sigma^2 = 10^{-1}$
\begin{minipage}{0.18\textwidth}
\includegraphics[width=\textwidth]{v01_groundtruth.pdf}
\subcaption{$f_{*,1}$}
\label{fig:v01_groundtruth}
\end{minipage}
\vline
\hspace{0.5em}
\begin{minipage}{0.18\textwidth}
\includegraphics[width=\textwidth]{v01_first_step.pdf}
\subcaption{$\hat{f}^{(1)}_{n,1}$}
\label{fig:v01_first_step}
\end{minipage}
\begin{minipage}{0.18\textwidth}
\includegraphics[width=\textwidth]{v01_g.pdf}
\subcaption{$\hat{g}_{n,1}$}
\label{fig:v01_rotated}
\end{minipage}
\begin{minipage}{0.18\textwidth}
\includegraphics[width=\textwidth]{v01_g_dagger_t=3.pdf}
\subcaption{$\hat{g}_{n,1}^{\dagger}$}
\label{fig:v01_smoothed}
\end{minipage}
\vline \vspace{0.5em}
\begin{minipage}{0.18\textwidth}
\includegraphics[width=\textwidth]{v01_second_step_t=3.pdf}
\subcaption{$\hat{f}_{n,1}$}
\label{fig:v01_second_step}
\end{minipage}
\caption{(\subref{fig:groundtruth},\subref{fig:v01_groundtruth}) True function $f_{*,1}$, (\subref{fig:first_step},\subref{fig:v01_first_step}) pilot estimator $\hat{f}_{n,1}^{(1)}$, (\subref{fig:rotated},\subref{fig:v01_rotated}) estimator $\hat{g}_{n,1}$ transformed by a coherent rotation,
(\subref{fig:smoothed},\subref{fig:v01_smoothed}) invertible estimator $\hat{g}_{n,1}^{\dagger}$ using bilinear interpolation, and
(\subref{fig:second_step},\subref{fig:v01_second_step}) invertible estimator $\hat{f}_{n,1}$. The upper row is $\sigma^2 = 10^{-3}$ and the lower row is $\sigma^{2} = 10^{-1}$.}
\label{fig:estimators_heatmap}
\end{figure}
\subsection{Minimax Upper Bound of the Inverse Risk}
\label{subsec:upper_bound}
For invertibility, we assume that $t=t_n$ is a power of two:
\begin{assumption}
\label{asmp:tn}
$t = t_n := \max\{t'=2^m \mid t' \le n^{2/(2+d)}(\log n)^{-(\alpha+\beta)} ,m \in \mathbb{N}\}$ for some $\beta>0$.
\end{assumption}
By Assumption~\ref{asmp:tn}, the grid $\hat{I}_n=\{0,\pm 1/t_n,\ldots,\pm(t_n-1)/t_n,\pm 1\}$ used to define $\hat{\boldsymbol g}_n^{\dagger}$ is nested, i.e., $\hat{I}_n \subset \hat{I}_{n'}$ for $n < n'$.
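Assumption~\ref{asmp:tn} can be sketched numerically. In the snippet below, the values of $\alpha$ and $\beta$ are illustrative placeholders and the helper names are ours:

```python
import math
from fractions import Fraction

def t_n(n, d=2, alpha=1.0, beta=1.0):
    """Largest power of two t' with t' <= n^{2/(2+d)} (log n)^{-(alpha+beta)}."""
    bound = n ** (2.0 / (2 + d)) * math.log(n) ** (-(alpha + beta))
    return 2 ** int(math.floor(math.log2(bound)))

def grid(t):
    """The grid {k/t : k = -t, ..., t} as a set of exact fractions."""
    return {Fraction(k, t) for k in range(-t, t + 1)}
```

Because each $t_n$ is a power of two, `grid(t)` is a subset of `grid(t')` whenever $t \le t'$, which is exactly the nesting used to define $\hat{\boldsymbol g}_n^{\dagger}$.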
With this assumption, (asymptotic almost-everywhere) invertibility and convergence of the proposed estimator are proved in Propositions \ref{prop:area_twisted} and \ref{prop:concentration_f2} in Appendix~\ref{sec:proofs_for_upper_bound_analysis}; hence we obtain the following upper bound on the inverse risk:
\begin{theorem} \label{thm:upper_d2}
Consider $d=2$.
Suppose $\boldsymbol f_* \in \flip_{\text{INV}}$ and Assumptions \ref{asmp:uniform_estimator} and \ref{asmp:tn} hold. Then, there exists $C_* \in (0,\infty)$ such that, for sufficiently large $n$, with probability approaching $1$,
\begin{align*}
\risk_{\text{INV}}(\hat{\boldsymbol f}_n, \boldsymbol f_*) \leq C_* n^{-2/(2+d)}(\log n)^{2\alpha+2\beta}.
\end{align*}
\end{theorem}
\noindent
This result matches the minimax lower bound on the inverse risk in Theorem \ref{thm:main_lower} up to logarithmic factors. We immediately obtain the following result:
\begin{corollary}
Consider the setting in Theorem \ref{thm:upper_d2}.
Then, there exists $\overline{C} \in (0, \infty)$ such that, for sufficiently large $n$, with probability approaching $1$, we have
\begin{align*}
\inf_{\bar{\boldsymbol f}_n} \sup_{\boldsymbol f_* \in \flip_{\text{INV}}}
\risk_{\text{INV}}(\bar{\boldsymbol f}_n, \boldsymbol f_*)
\leq \overline{C} n^{-2/(2 + d)} (\log n)^{2\alpha + 2\beta}.
\end{align*}
\end{corollary}
\noindent
With this result, we achieve a tight evaluation of the minimax inverse risk in the case $d=2$. It implies that estimating invertible functions is no more difficult than estimation without the invertibility constraint, and that there exist estimators attaining the minimax rate up to logarithmic factors.
\section{Conclusions and Future Research Directions}
We studied nonparametric planar invertible regression, which estimates an invertible bi-Lipschitz function $\boldsymbol f_* \in \flip_{\text{INV}}$ on the closed square $[-1,1]^2$. For $d=2$, we defined the inverse risk to evaluate invertible estimators $\hat{\boldsymbol f}_n$: the minimax rate is lower bounded by $n^{-2/(2+d)}$. We proposed an invertible estimator that attains the lower bound up to logarithmic factors. This result implies that the estimation of invertible functions is as difficult as the estimation of non-invertible functions in the minimax sense. For this evaluation, we employed output-wise level-sets $L_{f_j}(y):=\{\boldsymbol x \in I^2 \mid f_j(\boldsymbol x)=y\}$ of the invertible function $\boldsymbol f=(f_1,f_2)$, as their intersection $L_{f_1}(y_1) \cap L_{f_2}(y_2)$ identifies the inverse $\boldsymbol f^{-1}(\boldsymbol y)$. We also identified several important properties of the level-set $L_{f_j}$. This study is a first step towards understanding the multidimensional invertible function estimation problem.
However, there remain unsolved problems. For example,
\begin{enumerate}[{(i)}]
\item We proposed an invertible estimator only for the restricted case $d=2$. A natural direction would be to extend our estimator and the minimax upper bound of the inverse risk to general $d \ge 3$. However, the theoretical extension does not seem straightforward, for two reasons: (a) the coherent rotation, which is used to align the endpoints in our estimator, cannot be defined even for $d=3$, and (b) \citet{donaldson1989quasiconformal} proved that bi-Lipschitz homeomorphisms cannot be approximated even by piecewise affine functions for $d=4$. Some additional assumptions seem to be needed.
\item The discussions in this paper mostly rely on (a) the existence of the boundary and (b) the simple connectivity of the set $[-1,1]^2$. It would be worthwhile to generalize our discussion to other types of domains, such as the open unit square $(-1,1)^2$ (see, e.g., \citet{kawamura1979invertibility} and \citet{pourciau1988global} for characterizations of nonsmooth invertible mappings on $\mathbb{R}^2$, which is homeomorphic to $(-1,1)^2$) and domains with different topology, such as tori and spheres (see \citet{hatcher2002topology} for a gentle introduction to the torus, and \citet{pmlr-v119-rezende20a} for normalizing flows on tori and spheres).
\item Whereas the minimax rate is obtained for a supervised regression problem, one of the main applications of multidimensional invertible function estimation is density estimation, which implicitly trains the invertible function in an unsupervised manner.
An interesting direction would be to extend the minimax rate to an unsupervised setting.
\end{enumerate}
\section*{Acknowledgement}
A. Okuno is supported by JSPS KAKENHI (21K17718) and JST CREST (JPMJCR21N3).
M. Imaizumi is supported by JSPS KAKENHI (18K18114) and JST Presto (JPMJPR1852). We would like to thank Keisuke Yano for helpful discussion.
\section{Introduction}
Randomized experiments have long been a staple of applied causal inference. In his seminal paper, \citet{rubin1974estimating} suggests that ``given a choice between the data from a randomized experiment and an equivalent nonrandomized study, one should choose the data from the experiment, especially in the social sciences where much of the variability is often unassigned to particular causes.'' Using the language of Rubin's \emph{potential-outcomes} framework, randomization guarantees that the treatment status is independent of the potential outcomes and that a simple and intuitive estimator that compares the average outcomes of the treatment and control units is an unbiased estimator of the \emph{average treatment effect} (ATE). If both the treatment and control samples are sufficiently large, the hope is that this difference-in-means estimate is close to the population mean of the treatment effect.
Another crucial property of randomized experimental designs is their robustness to alternative assumptions about the data generating process---a completely randomized experiment does not take into account any features of the observed data. Perhaps not surprisingly, when the researchers are willing to incorporate additional probabilistic assumptions in their design decisions, they can improve on the statistical properties of the average treatment effect estimators \citep[see, for example,][]{kasy2016experimenters}. These improvements, however, do not come for free and the performance of the estimators may suffer if the incorporated assumptions are violated \citep{banerjee2020theory}.
For these reasons randomized experiments are widely used in academic and clinical settings, as well as industrial applications. However, not every practically important question can be easily answered using an experiment on a large sample of experimental units. For instance, the evaluation of major policies targeting large geographic areas has long been of interest in social and political sciences. One of the traditional approaches is to compare the affected unit---such as a metropolitan area or a state---to the average across a sample of carefully picked control units which are deemed to be suitable comparisons \citep{card1990impact}. A more recent alternative, called \emph{synthetic control}, first introduced by \citet{abadie2003economic} and later developed in \citet{abadie2010synthetic} and \citet{abadie2015comparative}, compares the unit of interest to a weighted average of the units unaffected by the treatment, where the weights are selected in a way that achieves a good fit on pre-treatment outcome variables as well as potentially other observed covariates.
While originally developed by academics for evaluating the effects of policies, approaches similar to the synthetic-control methodology have recently gained popularity in industry as well, in cases where applied researchers decide to run experiments targeting larger units, often representing geographic areas. This decision may be justified when more granular experiments are either unavailable (for example, television advertising can only be targeted at the media-market level) or are unlikely to capture the relevant effects due to interference or equilibrium concerns~\citep{sobel2006randomized, rosenbaum2007interference, hudgens2008toward}.
For instance, a company like Uber may want to evaluate some of the possible treatments at the market level rather than at the driver level if the treatment in question is likely to affect the driver supply. Moreover, launching an experiment targeting even a single unit may be so expensive that the researchers try to minimize the required number of treated units. Privacy and fairness concerns may also make treatment assignment at a more granular level problematic.
Synthetic control and similar approaches may be attractive as estimation procedures in those cases, but they fail to address the equally if not more important aspect of the optimal choice of experimental units \citep[see, for example,][]{rubin2008objective}. We attempt to narrow this gap in the current paper. We consider a panel-data setting in which the researcher observes the outcome metric of interest for a number of units in a number of time periods and has to decide: (i) which units to experiment on and (ii) how to estimate the treatment effects after collecting the outcome data in the experimental time periods. The main difference between this setting compared to a typical synthetic-control study is that the treated units are not fixed, but rather chosen by the researcher. Proving that the underlying optimization problem is NP-hard, we rule out the possibility of designing polynomial-time algorithms for the problem under P$\neq$NP. Therefore, we formulate this combined design-and-analysis problem as a \emph{mixed-integer program} (MIP). Depending on the particular estimands of interest, we propose one of the several formulations and discuss their advantages and drawbacks. We motivate the choice of the optimization objectives and discuss the selection of experimental units each of the objectives leads to. The MIP formulation allows for an easy inclusion of additional constraints as long as those are linear. For instance, it is easy to restrict the overall number of treated units, exclude specific units from treatment, or enforce a budget constraint if there is a varying cost to treat different units.
Using publicly available state-level unemployment data from the US Bureau of Labor Statistics, we compare the proposed methodology to a randomized design that utilizes either the conventional difference-in-means estimator or the synthetic-control approach. We estimate the average as well as individual state-level treatment effects in a simulated experiment and find that our approach substantially reduces the \emph{root mean squared error} (RMSE) of the estimates. We show that our MIP-based design-and-analysis procedures consistently outperform the more traditional baselines regardless of whether the treatment effects are homogeneous or heterogeneous and whether the number of units selected for treatment is small or large relative to the total number of units in the sample. We also suggest a permutation-based inference procedure that follows \citet{chernozhukov2021exact}. We verify in our simulations that this procedure leads to correct test sizes and improved statistical power for testing the sharp null hypothesis of zero treatment effects across all treated units, when used in conjunction with the proposed estimators. We provide theoretical guarantees---albeit under rather strong assumptions---which ensure that the proposed tests have proper sizes.
To our knowledge, this is one of the first papers to study experimental design in the context of synthetic control and adjacent estimation techniques. \citet{doudchenko2019design} consider the case when the underlying effects are homogeneous and only a single unit can be treated; they suggest searching for the unit that delivers the highest statistical power when testing the hypothesis of zero treatment effect with an artificially applied treatment. In an independent work, \citet{abadie2021synthetic} also consider optimal design in settings where synthetic-control estimators are used for estimation. However, there are a few important aspects that differentiate the current paper from theirs. First, \citet{abadie2021synthetic} use both pre-treatment unit-level covariates as well as the outcomes from some---but not necessarily all---of the pre-treatment periods whereas we focus exclusively on the outcome variable and utilize all of the pre-treatment outcome data. As a result, the inference procedure proposed by \citet{abadie2021synthetic} does not apply to our approach. Second, we discuss formal hardness results. Finally---and, perhaps, most importantly---\citet{abadie2021synthetic} choose an objective function that fixes a priori the target average treatment effect, as either the population average treatment effect or a weighted version thereof. In contrast, the objective function we use focuses on the average treatment effect only for the units we choose for the treatment, making the target estimand stochastic, but potentially easier to estimate and requiring different assumptions.
The rest of the paper is organized as follows. Section~\ref{setting} introduces the setting and the proposed estimation approaches. Section~\ref{mips} introduces the mixed-integer formulation of the suggested estimators. Section~\ref{design} presents some of the theoretical unit-selection results and the intuition behind them. Section~\ref{results} reports the empirical results obtained through simulations. Section~\ref{practice} discusses some of the practical consideration that should accompany applied work that uses the proposed methodology. Section~\ref{future} concludes and outlines the directions of future research.
\section{Setting}\label{setting}
Let the researcher observe the outcome metric of interest, $Y$, for $N$ units during $T$ time periods, such that the observed data can be represented as an $N\times T$ matrix of values $Y_{it}$. At time $t=T$ the researcher decides---based on the data observed up until that point---which units should be treated and assigns a binary treatment described by variables $D_i\in\{0,1\}$, $i=1,\dots,N$. The outcomes are then observed for an additional $S-T$ time periods $t=T+1,\dots,S$ and the treatment-effect estimates are constructed. Each unit $i=1,\dots,N$ in each time period $t=T+1,\dots,S$ is associated with two potential outcomes $(Y_{it}(0),Y_{it}(1))$ which are considered random. The potential outcome $Y_{it}(0)$ is realized if $D_i=0$ and $Y_{it}(1)$ is realized if $D_i=1$ so that the observed outcome is $Y_{it}=Y_{it}(D_i)=Y_{it}(0)(1-D_i) + Y_{it}(1)D_i$.
Recall that, given a setting where a single unit $i=N$ has received the treatment in a single time period $t= T+1 = S$, the synthetic-control literature~\citep[][among others]{abadie2010synthetic} suggests constructing a counterfactual estimate for unit $N$ as a weighted average of the other units' observed outcomes: $\hat{Y}_{N, T+1}(0) = \sum_{i=1}^{N-1} w_i Y_{i, T+1}.$
In the previous equation, the $w_i$'s are weights learned from the data observed during the pre-treatment periods $t=1, 2,\dots,T$, often by minimizing $\sum_{t=1}^T (Y_{N t} - \sum_{i=1}^{N-1} w_i Y_{it})^2$ under some constraints on the weights. Assuming that the treatment effect, $\tau_N$, for this unit is additive, it can then be estimated as $\hat{\tau}_N = Y_{N, T+1} - \hat{Y}_{N, T+1}(0)$.
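This weight-fitting step can be sketched numerically. The snippet below is our own illustrative implementation, not the authors' code: it minimizes the pre-treatment fit over the simplex $\{w \ge 0,\ \sum_i w_i = 1\}$ (a common constraint set in the synthetic-control literature) by projected gradient descent; the solver and step size are our choices.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def sc_weights(Y_controls, y_treated, iters=20000):
    """Minimize sum_t (y_t - w' Y[:, t])^2 over the simplex by projected gradient.

    Y_controls: (N-1, T) pre-treatment outcomes of the control units.
    y_treated:  (T,)    pre-treatment outcomes of the treated unit.
    """
    lr = 0.5 / (np.linalg.norm(Y_controls, 2) ** 2)  # 1/L for this quadratic
    w = np.full(Y_controls.shape[0], 1.0 / Y_controls.shape[0])
    for _ in range(iters):
        grad = 2.0 * Y_controls @ (Y_controls.T @ w - y_treated)
        w = project_simplex(w - lr * grad)
    return w
```

The counterfactual $\hat{Y}_{N, T+1}(0)$ is then the $w$-weighted average of the controls' period-$(T+1)$ outcomes.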
We now consider a more general setting where $K$ units have received the treatment, with outcomes given by
\begin{align*}
Y_{it}(0) = \mu_{it} + \varepsilon_{it}\quad \text{and}\quad
Y_{it} = Y_{it}(0) + D_{i} \tau_{i}
\end{align*}
with homoscedastic noise $\varepsilon_{it}$ that has mean zero and variance $\sigma^2$ and additive treatment effects $\tau_{i}$. In order to estimate the treatment effect in this more general setting, we can apply a separate synthetic control method to each treated unit $i$, learning an appropriate set of weights $\{w_j^i\}$ for each treated unit $i$ individually: $\hat{Y}_{i, T+1}(0) = \sum_{j\colon D_j = 0} w_j^i Y_{j, T+1}$. The treatment effect for unit $i$ is then estimated as $\hat{\tau}_i = Y_{i, T+1} - \hat{Y}_{i, T+1}(0)$.
Rather than considering each weight-fitting optimization problem separately, we can express the (conditional) \emph{mean squared error} (MSE) of the resulting estimator of the treatment effect as:
\begin{equation*}
\E \left[\left(\hat{\tau}_i-\tau_i\right)^2\big| \{D_j,w_j^i\}_{j=1}^N\right] = \left(\mu_{i, T+1} - \sum_{j\colon D_j =0} w_j^i \mu_{j, T+1}\right)^2 + \sigma^2 \left(1 + \sum_{j\colon D_j=0} (w_j^i)^2\right).
\end{equation*}
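The conditional MSE decomposition above can be checked by simulation; the particular means, weights, and noise level below are arbitrary illustrative values of our choosing.

```python
import numpy as np

# Monte Carlo check of E[(tau_hat - tau)^2 | D, w] = bias^2 + sigma^2 (1 + sum_j w_j^2)
rng = np.random.default_rng(1)
sigma2 = 0.25
mu_i, mu_controls = 2.5, np.array([1.0, 2.0, 3.0])   # means of treated unit i and its controls
w = np.array([0.2, 0.3, 0.5])                        # synthetic-control weights w_j^i
tau = 0.0                                            # true additive effect for unit i

reps = 200_000
eps_i = rng.normal(0.0, np.sqrt(sigma2), size=reps)
eps_c = rng.normal(0.0, np.sqrt(sigma2), size=(reps, 3))
tau_hat = (mu_i + tau + eps_i) - ((mu_controls + eps_c) @ w)

bias = mu_i - mu_controls @ w
theory = bias ** 2 + sigma2 * (1.0 + np.sum(w ** 2))
empirical = np.mean((tau_hat - tau) ** 2)
```

Here the bias is $2.5 - 2.3 = 0.2$ and the variance term is $0.25\,(1 + 0.38)$, so `theory` equals $0.385$; `empirical` agrees up to Monte Carlo error.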
A proof is included in the supplementary materials. The synthetic-control literature often operates in settings where the treated units are given. Here, we allow the experimenter to select which units should receive the treatment. The mean-squared-error formula above leads us to consider the following optimization problem over the weights $\{w_j^i\}_{i,j=1}^N$ and the treatment variables $\{D_i\}_{i=1}^N$ under some appropriate constraints on the weights $\{w_j^i\}_{i,j=1}^N$. The objective below can be seen as the empirical analog of the right-hand side of the population equation above in the pre-treatment period averaged across time and across the treated units.
\begin{align*}
\tag{per-unit}
\min_{\{D_i,\{w_j^i\}_{j=1}^N\}_{i=1}^N}\quad \frac{1}{KT} \sum_{i=1}^N \sum_{t=1}^T D_i\left(Y_{it} - \sum_{j=1}^N w_j^i(1-D_j)Y_{jt} \right)^2 + \frac{\sigma^2}{K} \sum_{i=1}^N\sum_{j=1}^N D_i\left(w_j^i\right)^2 \label{per-unit}
\end{align*}
Note that we can safely sum across all units, not just the control ones, in the second term since the optimal values $w_j^i$ for units $j$ with $D_j=1$ will be equal to zero in an optimal solution.
In essence, this optimization problem attempts to minimize the discrepancy between the pre-treatment outcomes of the units chosen for treatment and the weighted averages of the outcomes of the units left as controls. At the same time, due to the second term, the objective attempts to balance the unit weights themselves.
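As a concrete reference, the \ref{per-unit} objective for a candidate assignment $(D, \{w_j^i\})$ can be evaluated directly; the helper below is our own, with $\sigma^2$ entering as the penalty factor.

```python
import numpy as np

def per_unit_objective(Y, D, W, sigma2):
    """Empirical per-unit objective.

    Y: (N, T) pre-treatment outcomes; D: (N,) 0/1 treatment indicators;
    W: (N, N) weights with W[i, j] = w_j^i; sigma2: penalty factor.
    """
    N, T = Y.shape
    K = int(D.sum())
    fit = 0.0
    for i in np.flatnonzero(D):
        resid = Y[i] - ((1 - D) * W[i]) @ Y   # synthetic control for treated unit i
        fit += np.sum(resid ** 2)
    penalty = np.sum(W[D == 1] ** 2)
    return fit / (K * T) + sigma2 * penalty / K
```

For example, with $N=2$, $T=1$, $Y = (1, 3)$, unit 2 treated with weight $1$ on unit 1, and $\sigma^2 = 0.5$, the objective is $(3-1)^2 + 0.5 \cdot 1 = 4.5$.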
The term $\sigma^2$ is unlikely to be known to the researcher and should be chosen based on the observed data. One possible way is to set it equal to the sample variance of the observed outcomes. We further discuss the selection of the penalty parameter in Section~\ref{practice}. Penalizing the weights is not uncommon in the synthetic-control literature. For example, \citet{doudchenko2016balancing} use the elastic-net penalty on the weights and \citet{abadie2021penalized} introduce a lasso-style penalty in which the nonnegative weight, $w_j^i$, is multiplied by the squared distance between the vectors of the covariates used for matching the units. This way, depending on the magnitude of the penalty hyperparameter, they can balance between the synthetic-control fit and the nearest-neighbor fit that puts all the weight on unit $j$ closest to $i$ in terms of the observed covariates.
In many applications, the researcher may be interested in estimating some weighted average of the unit-level treatment effects on the treated units. Rather than considering each treated unit separately and then computing the weighted average of the estimated individual treatment effects, practitioners may wish to construct a synthetic-control-type estimate for the weighed average of the treated units directly.
Consider a set of treatment assignments $\{D_i\}_{i=1}^N$ and weights $\{w_i\}_{i=1}^N$ on outcomes at time $T+1$, and assume that we wish to estimate the \emph{weighted average treatment effect on the treated} (wATET), $\tau = \sum_{i\colon D_i=1} w_i\tau_{i} = \sum_{i=1}^N D_i w_i\tau_i$, as a difference in weighted means:
$\hat{\tau} = \sum_{i\colon D_i=1} w_i Y_{i,T+1} - \sum_{i\colon D_i=0} w_i Y_{i,T+1}$. Then, under the same outcomes model as presented above, the (conditional) mean squared error of the difference-in-(weighted)-means estimator is
\begin{align*}
\E\left[\left(\hat{\tau} - \tau\right)^2\big| \{D_i,w_i\}_{i=1}^N\right] =
\left(\sum_{i\colon D_i=1} w_i\mu_{i,T+1} - \sum_{i\colon D_i=0} w_i \mu_{i,T+1}\right)^2 + \sigma^2 \sum_{i=1}^N w^2_i.
\end{align*}
As before, our setting allows the experimenter to optimally select which units should receive the treatment as well as which particular weighting scheme should be used. This is especially appropriate when the treatment effects are homogeneous and $\tau_i=\tau$ for all $i=1,\dots,N$. In that case, any weighted average of the unit-level treatment effects is equal to $\tau$ and the weights can be chosen in a way that minimizes the mean squared error.
The population equation above suggests solving the following optimization problem on the weights $\{w_i\}_{i=1}^N$ and the treatment variables $\{D_i\}_{i=1}^N$ based on the data observed in periods $t=1,\dots,T$:
\begin{align}
\tag{two-way global}
\min_{\{D_i,w_i\}_{i=1}^N}\quad \frac{1}{T} \sum_{t=1}^T \left(\sum_{i\colon D_i=1} w_i Y_{it} - \sum_{i\colon D_i=0} w_i Y_{it}\right)^2
+ \sigma^2 \sum_{i=1}^N w^2_i. \label{2-way}
\end{align}
So far, we have not considered specific constraints on the weights, $\{w_i\}_{i=1}^N$. If $\sigma^2>0$, $w_i=0$ for all $i=1,\dots,N$ is the unique optimal solution to the~\ref{2-way} problem above if the weights are not constrained in any way. In order to avoid this clearly undesirable solution, we assume that the weights $\{w_i\}_{i=1}^N$ are normalized: $\sum_{i\colon D_i=1} w_i = \sum_{i\colon D_i=0} w_i = 1$. While not strictly required, another reasonable set of constraints motivated by estimating a proper weighted average is requiring all weights to be nonnegative, $w_i\geq 0$ for all $i=1,\dots,N$.
Similar constraints can be imposed in the context of the per-unit problem: $w_j^i\geq 0$ for all $i,j=1,\dots,N$ and $\sum_{j\colon D_j=0} w_j^i=1$ for all $i=1,\dots,N$ such that $D_i=1$.
Finally, the weights on the treated units may be fixed if a specific weighted average---for example, a simple average with equal weighting---needs to be estimated. This constraint is particularly justified when the treatment effects are heterogeneous and different weighting schemes lead to different estimands. For that reason, we formulate another variation of the global problem:
\begin{align}
\tag{one-way global}
\min_{\{D_i,w_i\}_{i=1}^N}\quad \frac{1}{T} \sum_{t=1}^T \left(\sum_{i\colon D_i=1} w_i Y_{it} - \sum_{i\colon D_i=0} w_i Y_{it}\right)^2
+ \sigma^2 \sum_{i=1}^N w^2_i \label{1-way}
\end{align}
subject to an additional set of constraints that require $w_i=w_j$ for all $i,j=1,\dots,N$ such that $D_i=D_j=1$.
It is possible to obtain other design-and-estimation approaches as special cases of these problems---something that we utilize later in the paper. For instance, the standard difference-in-means approach under randomized design can be viewed as the special case of the one-way global problem with the treatment indicators, $D_i$, set randomly and the weights on the control units restricted to be equal, $w_i=w_j$ for all $i,j=1,\dots,N$ such that $D_i=D_j=0$. Likewise, the synthetic-control estimator can be viewed as the per-unit problem with the treatment indicators set according to the experimental design of choice---for example, randomly.
\section{Mixed-integer formulation}\label{mips}
The three optimization problems introduced in the first part of Section~\ref{setting}: the~\ref{per-unit}, the \ref{2-way}, and the \ref{1-way} problems can all be formulated as mixed-integer programs. We now describe the specific optimization problems we use in the empirical section of the paper.
The~\ref{per-unit} problem is formulated as:
\begin{align*}
\min_{\{D_i,\{w_j^i\}_{j=1}^N\}_{i=1}^N} &\quad \frac{1}{KT} \sum_{i=1}^N \sum_{t=1}^T D_i\left(Y_{it} - \sum_{j=1}^N w_j^i(1-D_j)Y_{jt} \right)^2 + \frac{\lambda}{K}\sum_{i=1}^N\sum_{j=1}^N D_i\left(w_j^i\right)^2 \\
\text{s.t.} & \quad w_j^i \geq 0,\quad D_i\in \{0, 1\}\ \ \text{for } i,j=1,\dots,N, \\
&\quad \sum_{i=1}^N D_i = K,
\quad \sum_{i=1}^N w_j^i (1-D_j) = 1\ \ \text{for } i=1,\dots,N \text{ such that } D_i=1.
\end{align*}
The~\ref{2-way} problem can be formulated as:
\begin{align*}
\min_{\{D_i,w_i\}_{i=1}^N} &\quad \frac{1}{T} \sum_{t=1}^T \left(\sum_{i=1}^N w_i D_i Y_{it} - \sum_{i=1}^N w_i (1-D_i) Y_{it} \right)^2 + \lambda \sum_{i=1}^N w_i^2 \\
\text{s.t.} & \quad w_i \geq 0,\quad D_i\in \{0, 1\}\ \ \text{for } i=1,\dots,N, \\
&\quad \sum_{i=1}^N D_i = K,
\quad \sum_{i=1}^N w_i D_i = 1,
\quad \sum_{i=1}^N w_i (1-D_i) = 1
\end{align*}
and the~\ref{1-way} problem is simply the~\ref{2-way} problem with an additional set of constraints: $w_i=w_j$ for all $i,j=1,\dots,N$ such that $D_i=D_j=1$.
All three problems require additional auxiliary variables representing the products of the weights and the treatment indicators as well as additional constraints in order to have a representation with a quadratic objective and only linear constraints so that they can be easier to solve by one of the academic or commercial MIP solvers.\footnote{We use SCIP~\citep{GamrathEtal2020OO} when generating the empirical results in Sections~\ref{design} and~\ref{results}.} See the supplementary materials for the exact formulations.
The term $\lambda$ that is used in all three objectives is a nonnegative penalty factor. Its selection is discussed in Section~\ref{practice}.
Note that the global versions of the problem can be solved without the constraint on the number of treated units, $\sum_{i=1}^N D_i=K$, but the per-unit problem requires it---without the constraint, the per-unit problem will tend to select fewer treatment units unless the objective is divided by $\sum_{i=1}^N D_i$, which introduces a nonlinearity. See the supplementary materials, where we introduce an alternative formulation that allows us to circumvent this by imposing an additional quadratic constraint.
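For small $N$, the \ref{2-way} design problem can also be solved exactly by brute force: enumerate all $\binom{N}{K}$ treatment sets and, for each, solve the inner weight problem. The sketch below is our own illustrative check on the MIP, not the MIP formulation itself; it drops the nonnegativity constraints so that the inner problem reduces to solving the KKT linear system.

```python
import itertools
import numpy as np

def two_way_weights(Y, treated, lam):
    """Inner two-way problem for a fixed treatment set: minimize
    (1/T) sum_t (sum_I w_i Y_it - sum_Ibar w_i Y_it)^2 + lam ||w||^2
    s.t. sum_I w_i = sum_Ibar w_i = 1 (no nonnegativity), via KKT."""
    N, T = Y.shape
    s = -np.ones(N)
    s[list(treated)] = 1.0                    # +1 treated, -1 control
    C = s[:, None] * Y
    Q = C @ C.T / T + lam * np.eye(N)         # objective = w' Q w
    A = np.vstack([(s > 0).astype(float), (s < 0).astype(float)])
    kkt = np.block([[2.0 * Q, A.T], [A, np.zeros((2, 2))]])
    sol = np.linalg.solve(kkt, np.concatenate([np.zeros(N), [1.0, 1.0]]))
    w = sol[:N]
    return w, w @ Q @ w

def brute_force_design(Y, K, lam):
    """Best (objective, treatment set) over all K-subsets of units."""
    N = Y.shape[0]
    return min((two_way_weights(Y, I, lam)[1], I)
               for I in itertools.combinations(range(N), K))
```

With outcomes $(0, 0, 10, 10)$ and $K=2$, any split pairing one low and one high unit balances the group averages and attains the minimum, consistent with the intuition developed in the next section.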
\section{Design}\label{design}
The three optimization problems introduced in Section~\ref{setting}---the one-way~and two-way global and the per-unit problems---tend to select certain treatment units in terms of their location within the distribution of the observed outcome data. In this section we illustrate that behavior using a simple example motivated by the objectives from Section~\ref{setting}. To simplify the analysis, we let $T=1$ and denote $a_i=Y_{i1}$. We also do not restrict the unit weights to be nonnegative to allow for simpler closed form solutions.\footnote{In some cases the nonnegativity constraint will not be binding, while in other cases the optimal weights without the constraint may actually turn out to be negative. This implies that only a subset of all units will have strictly positive weights in the optimal constrained solution.} Specifically, we consider the following per-unit problem:
\begin{align*}
\min_{\{D_i,\{w_j^i\}_{j=1}^N\}_{i=1}^N} &\quad \frac{1}{K}\sum_{i=1}^N D_i\left(a_i - \sum_{j=1}^N w_j^i (1-D_j) a_j\right)^2
+ \frac{\sigma^2}{K} \sum_{i=1}^N\sum_{j=1}^N D_i(w_j^i)^2 \\
\text{s.t.} &\quad D_i\in \{0, 1\}\ \ \text{for } i=1,\dots,N, \\
&\quad \sum_{i=1}^N D_i = K,
\quad \sum_{j=1}^N w_j^i (1 - D_j) = 1\ \ \text{for } i = 1,\dots,N.
\end{align*}
A similarly simplified two-way global problem can be written as:
\begin{align*}
\min_{\{D_i,w_i\}_{i=1}^N} &\quad \left(\sum_{i=1}^N w_i D_i a_i - \sum_{i=1}^N w_i (1-D_i) a_i\right)^2
+ \sigma^2 \sum_{i=1}^N w^2_i \\
\text{s.t.} &\quad D_i\in \{0, 1\}\ \ \text{for } i=1,\dots,N, \\
&\quad \sum_{i=1}^N D_i = K,
\quad \sum_{i=1}^N w_i D_i = 1, \quad \sum_{i=1}^N w_i (1 - D_i) = 1.
\end{align*}
This becomes a one-way global problem if we impose an additional set of constraints requiring that $w_i=1/K$ for all $i=1,\dots,N$ such that $D_i=1$.
When the unit weights are optimized (see the supplementary materials for the derivation), the optimal values of the objectives can be written in closed form as functions of the set of treated units, $I$.
\begin{theorem}\label{thm:main} Let $I$ denote the set of treated units and $\bar{I}=\{1,\dots,N\}\setminus I$ denote the set of control units. Let $\overline{a}_I=\sum_{i\in I}a_i/|I|$ be the average outcome within the treatment group, $V_I^2=\sum_{i\in I}(a_i-\overline{a}_I)^2$ a quantity proportional to the sample variance of the outcomes within the treatment group, and the corresponding quantities for set $\bar{I}$ defined similarly. After the unit weights are optimized away, the per-unit objective can be written as:
\begin{align*}
J_{\text{per-unit}}(I) = \sigma^2\left(\frac{1}{N-K} + \frac{(\overline{a}_I - \overline{a}_{\bar{I}})^2 + K^{-1}V_I^2}{\sigma^2 + V_{\bar{I}}^2}\right),
\end{align*}
the two-way global objective can be written as:
\begin{align*}
J_{\text{2-way}}(I) = \sigma^2\left(\frac{1}{K} + \frac{1}{N-K} + \frac{(\overline{a}_I - \overline{a}_{\bar{I}})^2}{\sigma^2 + V_I^2 + V_{\bar{I}}^2}\right),
\end{align*}
and the one-way global objective can be written as:
\begin{align*}
J_{\text{1-way}}(I) = \sigma^2\left(\frac{1}{K} + \frac{1}{N-K} + \frac{(\overline{a}_I - \overline{a}_{\bar{I}})^2}{\sigma^2 + V_{\bar{I}}^2}\right).
\end{align*}
\end{theorem}
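Theorem~\ref{thm:main} can be checked numerically for the two-way objective: the closed form should coincide with the value of the equality-constrained weight problem solved directly. The script below is our own illustrative check (again without nonnegativity constraints, as in the simplified problems of this section).

```python
import numpy as np

def J_two_way_closed_form(a, I, sigma2):
    """Closed-form two-way objective from Theorem 1 (T = 1)."""
    Ibar = [i for i in range(len(a)) if i not in I]
    aI, aB = a[list(I)], a[Ibar]
    VI2 = np.sum((aI - aI.mean()) ** 2)
    VB2 = np.sum((aB - aB.mean()) ** 2)
    return sigma2 * (1 / len(I) + 1 / len(Ibar)
                     + (aI.mean() - aB.mean()) ** 2 / (sigma2 + VI2 + VB2))

def J_two_way_direct(a, I, sigma2):
    """Minimize (sum_I w_i a_i - sum_Ibar w_i a_i)^2 + sigma2 ||w||^2
    s.t. sum_I w_i = sum_Ibar w_i = 1, via the KKT linear system."""
    N = len(a)
    s = np.where(np.isin(np.arange(N), list(I)), 1.0, -1.0)
    c = s * a
    Q = np.outer(c, c) + sigma2 * np.eye(N)
    A = np.vstack([(s > 0).astype(float), (s < 0).astype(float)])
    kkt = np.block([[2.0 * Q, A.T], [A, np.zeros((2, 2))]])
    w = np.linalg.solve(kkt, np.concatenate([np.zeros(N), [1.0, 1.0]]))[:N]
    return w @ Q @ w
```

As a hand-checkable case, $a=(1,0,2)$, $I=\{1\}$, $\sigma^2=1$ gives $\overline{a}_I = \overline{a}_{\bar I} = 1$ and $J = 1 \cdot (1 + 1/2) = 1.5$ from both routes.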
It is evident from these expressions that all three problems aim to select units so that the average observed outcomes of the treatment and control groups are similar---the squared difference between the group averages appears in the numerator of each objective. The problems differ in how they treat the sample variances of the two groups. The per-unit problem maximizes the sample variance of the control units while minimizing that of the treated units: the term $V_{\bar{I}}^2$ appears in the denominator while $V_I^2$ appears in the numerator. The intuition for the latter is that the per-unit objective tries to model each treated unit with a combination of control units while keeping the weight variance small, so it is best to keep the treated units as homogeneous as possible. The two-way problem attempts to maximize both sample variances (taking into account that they are, of course, interdependent): both $V_I^2$ and $V_{\bar{I}}^2$ appear in the denominator. The one-way problem maximizes the sample variance of the control outcomes only, as the weights on the treated units are fixed: only the term $V_{\bar{I}}^2$ appears in the denominator.
To illustrate the same patterns visually, we solve a per-unit problem and a two-way global problem for a simulated dataset with $N=25$ units, $T=2$ pre-treatment periods and the outcomes $Y_{it}$ drawn from a standard normal distribution independently across $i$ and $t$. This allows 2-d plotting of the units in the space of observed pre-treatment outcomes $(Y_{i1},Y_{i2})$. We select either $K=3$ or $K=25-3=22$ treated units and we use $\lambda=\sum_{i=1}^N (Y_{i1}-Y_{i2})^2/(4N)$ (which is the average two-period sample variance across all units). Figure~\ref{fig:design} shows the units selected by each of the problems.
The results align with the intuition presented using the simple model with $T=1$ above. Specifically, the per-unit problem maximizes the spread of the control units which is particularly apparent from the plot corresponding to $K=22$ while keeping the treatment units as close to each other as possible (see the plot for $K=3$). Since the control and the treatment units have symmetric roles in the two-way global objective, the three treatment units selected by the problem when $K=3$ are the same as the three control units selected when $K=22$. The control and treatment groups have similar average outcomes and both groups are relatively spread out.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.475\textwidth}
\centering
\fbox{\includegraphics[width=\textwidth]{perunit_few.pdf}}
\caption[]{{\small per-unit problem, $K=3$}}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\fbox{\includegraphics[width=\textwidth]{perunit_many.pdf}}
\caption[]{{\small per-unit problem, $K=22$}}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.475\textwidth}
\centering
\fbox{\includegraphics[width=\textwidth]{global_few.pdf}}
\caption[]{{\small two-way global problem, $K=3$}}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\fbox{\includegraphics[width=\textwidth]{global_many.pdf}}
\caption[]{{\small two-way global problem, $K=22$}}
\end{subfigure}
\medskip
{\small\textit{Note}: The units are plotted in the space of observed pre-treatment outcome data, $(Y_{i1},Y_{i2})$; `o' denote the selected control units and `+' denote the selected treatment units.}
\medskip
\caption[]{Units selected by the two-way global and per-unit problems.}
\label{fig:design}
\end{figure*}
\section{Empirical results}\label{results}
To evaluate the performance of the methods proposed in Section~\ref{setting} we compare several design-and-estimation procedures: (i) the per-unit problem, (ii) the two-way global problem, (iii) the one-way global problem, (iv) the per-unit problem with randomly chosen treatment units, and (v) the standard randomized experiment, which randomly assigns the treatment and estimates the average treatment effect on the treated as the difference in means between the two groups. It is important to note that approach (iv) is equivalent to using the synthetic-control method\footnote{The only difference compared to the traditional synthetic-control methodology used, for example, in \citet{abadie2010synthetic} is that no additional covariates are used and the weights are obtained using the outcome data alone---an approach similar to the one taken in, for instance, \citet{doudchenko2016balancing}.} for each randomly chosen treatment unit separately and then either averaging the unit-level treatment effect estimates or using the individual estimates directly. Taking this into account, comparing (i) to (iv) amounts to evaluating the role of optimal design in a synthetic-control study, while comparing (iv) to (v) evaluates the synthetic-control approach relative to the difference-in-means estimator.
To run a number of simulated experiments, we take publicly available data from the US Bureau of Labor Statistics (BLS) that contain the unemployment rates of 50 states over 40 consecutive months.\footnote{The data are available from the BLS website, but the specific dataset we use is taken from \url{https://github.com/synth-inference/synthdid/blob/master/experiments/bdm/data/urate_cps.csv}.} We run 500 simulations, each of which uses a 10-by-10 matrix sampled from the original 50-by-40 dataset. Specifically, we randomly select 10 units and the first time period; the remaining 9 time periods are the consecutive months that follow. In each simulation we treat $K$ units (equal to 3 in one set of simulations and 7 in another), which are chosen based on the data in the first 7 periods---or chosen randomly in cases (iv) and (v)---and the treatment is applied in the last 3 of the 10 periods. We either assign each treated unit an additive treatment effect of $0.05$ (the homogeneous treatment case) or assume that the treatment effects increase linearly from $0$ to $0.1$ from the first unit in the (randomly selected) 10-by-10 matrix to the last one (the heterogeneous treatment case). This implies that the true value of the ATET changes depending on the identity of the units selected for treatment; the (overall) ATE, however, remains $0.05$.
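The sampling scheme can be sketched as follows. We substitute a synthetic stand-in for the BLS panel, and the function and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
panel = rng.normal(0.05, 0.01, size=(50, 40))   # stand-in for the 50-state x 40-month panel

def draw_experiment(k=3, n=10, t=10, t_pre=7, heterogeneous=False):
    """Sample an n-by-t submatrix and apply the treatment in the post periods."""
    units = rng.choice(panel.shape[0], size=n, replace=False)
    start = rng.integers(0, panel.shape[1] - t + 1)        # consecutive months
    Y = panel[np.ix_(units, np.arange(start, start + t))].copy()
    treated = rng.choice(n, size=k, replace=False)         # random assignment, as in (iv)/(v)
    if heterogeneous:
        tau = np.linspace(0.0, 0.1, n)[treated]            # effects increase linearly across units
    else:
        tau = np.full(k, 0.05)                             # homogeneous additive effect
    Y[treated, t_pre:] += tau[:, None]                     # treatment in the last t - t_pre periods
    return Y, treated, tau
```

In the heterogeneous case the realized ATET depends on which units end up treated, exactly as noted above, while the overall ATE stays at $0.05$.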
We estimate the average treatment effect on the treated as well as the unit-level treatment effects. Only the per-unit problem, (i), and the synthetic control, (iv), allow for nontrivial estimation of heterogeneous treatment effects while the remaining approaches estimate all unit-level effects as being equal to the estimate of the average treatment effect on the treated.
We then compare approaches (i)--(v) in terms of the \emph{root-mean-square error} (RMSE), where the squared differences between the true values of the treatment effects and the respective estimates are computed for each treatment period (and each treatment unit in the case of the unit-level effects) and averaged. The square roots of these quantities are the RMSEs in question. Table~\ref{tab:rmses} reports the RMSEs averaged across all simulations.
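For baseline (v) and the error metric, a minimal sketch (the function names are ours):

```python
import numpy as np

def diff_in_means_atet(Y, treated, t_pre):
    """Estimate the ATET as the post-period difference in group means."""
    control = np.setdiff1d(np.arange(Y.shape[0]), treated)
    return Y[treated, t_pre:].mean() - Y[control, t_pre:].mean()

def rmse(true_effects, estimates):
    """Root-mean-square error over treated units and treatment periods."""
    err = np.asarray(true_effects) - np.asarray(estimates)
    return float(np.sqrt(np.mean(err**2)))
```

For the ATET comparison the estimate enters once per treatment period; for the unit-level comparison each treated unit contributes its own error term before averaging.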
\begin{table}
\caption{Root-mean-square errors of the average and unit-level treatment effect estimates}\label{tab:rmses}
\medskip
\centering
\begin{tabular}{r@{\ }lcccc}
\toprule
\multicolumn{6}{l}{\textit{Homogeneous treatment}} \\
\midrule
& & \multicolumn{2}{c}{$K=3$} & \multicolumn{2}{c}{$K=7$} \\
\cmidrule(r){3-4} \cmidrule(r){5-6}
& & ATET RMSE & Unit-level RMSE & ATET RMSE & Unit-level RMSE \\
\midrule
(i) & Per-unit & $8.5$ & $13.9$ & $\mathbf{8.3}$ & $16.0$ \\
(ii) & Two-way global & $\mathbf{8.4}$ & $\mathbf{8.4}$ & $8.4$ & $\mathbf{8.4}$ \\
(iii) & One-way global & $8.5$ & $8.5$ & $8.5$ & $8.5$ \\
\midrule
(iv) & Synthetic control & $9.7$ & $15.9$ & $10.3$ & $19.0$ \\
& (random treat.) & & & & \\
(v) & Diff-in-means & $12.1$ & $12.1$ & $11.5$ & $11.5$ \\
& (random treat.) & & & & \\
\midrule
\multicolumn{6}{l}{\textit{Heterogeneous treatment}} \\
\midrule
& & \multicolumn{2}{c}{$K=3$} & \multicolumn{2}{c}{$K=7$} \\
\cmidrule(r){3-4} \cmidrule(r){5-6}
& & ATET RMSE & Unit-level RMSE & ATET RMSE & Unit-level RMSE \\
\midrule
(i) & Per-unit & $\mathbf{8.5}$ & $\mathbf{13.9}$ & $\mathbf{8.3}$ & $\mathbf{16.0}$ \\
(ii) & Two-way global & $8.6$ & $27.6$ & $8.9$ & $32.5$ \\
(iii) & One-way global & $8.5$ & $27.6$ & $8.5$ & $32.5$ \\
\midrule
(iv) & Synthetic control & $9.7$ & $15.9$ & $10.3$ & $19.0$ \\
& (random treat.) & & & & \\
(v) & Diff-in-means & $12.1$ & $29.7$ & $11.5$ & $33.6$ \\
& (random treat.) & & & & \\
\bottomrule
\end{tabular}
\medskip
{\small\textit{Note}: The reported RMSEs are multiplied by $10^3$ for readability. The values in bold are the lowest in the respective columns and correspond to the methods that perform best.}
\end{table}
The specific improvements in terms of the RMSE over the baselines (iv) and (v)---the randomized synthetic control and the randomized difference-in-means---depend on the data and the true treatment effects. The main takeaways, however, are more general. The per-unit problem consistently outperforms the other methods when the underlying treatment effects are heterogeneous. For homogeneous treatment effects, the difference between the per-unit and the global problems either vanishes or the two-way approach starts to outperform the alternatives, since any weighting scheme leads to the same value of the ATET. Moreover, in the homogeneous treatment case the global problems always outperform the baselines when estimating the average treatment effect on the treated.
For the particular simulations we run, the one-way and two-way global objectives provide improvements over the baselines that vary from 12\% to 31\% in the homogeneous treatment case and from 11\% to 30\% in the heterogeneous treatment case when estimating the average treatment effect on the treated. The per-unit approach, (i), performs well across the board while being particularly effective when estimating the unit-level effects in the heterogeneous treatment case and providing an improvement of over 13\% relative to the synthetic control approach, (iv), when the number of treated units is small and 16\% when the number of treated units is large. It is not surprising that the per-unit problem provides a smaller improvement over the synthetic control when $K=3$ because in that case the donor pool of units that are used for comparison with the treated units is large relative to the overall number of units and the probability that we will not be able to find a good synthetic comparison is relatively low. The situation is different when we only have 3 control units that are used for constructing synthetic outcomes for the remaining 7 treatment units. Section~\ref{design} provides an additional discussion of the optimal design when the number of treatment units changes. Nor is it surprising that the per-unit approach performs poorly relative to the global problems and even the randomized difference-in-means estimator (but not the synthetic control) when estimating unit-level effects in the homogeneous treatment case. Since all treatment effects are the same, it is more efficient to pool all the data together and estimate the constant effect on the full sample rather than estimating the same quantity for each unit individually.
What is important, though, is the robustness of the per-unit approach, which provides similar performance with either homogeneous or heterogeneous treatment effects, while the global problems and the randomized difference-in-means perform poorly when estimating unit-level effects in the heterogeneous treatment case.
\section{Practice}\label{practice}
There are a number of practical considerations that need to be addressed when using the proposed design and analysis approaches.
\paragraph{Formulating the mixed-integer programs.} Both the per-unit problem and the global problems can be formulated as mixed-integer programs with quadratic objectives and linear constraints. As discussed in Section~\ref{setting}, the per-unit problem requires either an additional linear constraint that fixes the number of treated units, $K$, or an additional quadratic constraint that allows optimizing over the number of treated units. In addition to that, the per-unit problem uses more variables that need to be optimized since sets of weights vary across treatment units allowing separate estimation of every unit-level treatment effect. This implies that the per-unit problem is generally harder to solve and is only tractable for a smaller number of experimental units, $N$, compared to the global problems.
\paragraph{Choosing the penalty factor.} The penalty factor, $\lambda$, used by all of the optimization problems can be chosen using cross-validation. Specifically, the pre-treatment time periods can be split into the consecutive training and validation time periods and the value of $\lambda$ can be chosen by minimizing the RMSE over the validation period in a simulated experiment that is similar to the one we conduct in Section~\ref{results}. An alternative approach motivated by the setting in Section~\ref{setting} uses an estimate of the variance of the outcome variable. For example, the approach we take in Sections~\ref{design} and~\ref{results} computes the sample variances for every unit $i$ across pre-treatment time periods $t=1,\dots,T$ and then uses the average of those quantities across all units as the penalty factor, $\lambda$.
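The variance-based choice can be written in a couple of lines. The function name is ours; note that with numpy's default (population) variance, the $T=2$ case reproduces the $\lambda=\sum_{i=1}^N (Y_{i1}-Y_{i2})^2/(4N)$ used in Section~\ref{design}:

```python
import numpy as np

def variance_penalty(Y_pre):
    """Average across units of the per-unit variance over pre-treatment periods.

    Uses ddof=0, so for T=2 this equals sum_i (Y_i1 - Y_i2)^2 / (4N).
    """
    return float(Y_pre.var(axis=1).mean())
```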
\paragraph{Quantifying the uncertainty.} Most applied settings require evaluating the uncertainty in the obtained estimates of the treatment effects. We suggest the permutation-based approach for testing the sharp null hypothesis of zero treatment effects across all treated units proposed by \citet{chernozhukov2021exact}. See the supplementary materials for the detailed description of the proposed inference procedure as well as a theoretical result that guarantees its validity---albeit under rather strong assumptions---and the power curves constructed using a simulated setting similar to that from Section~\ref{results}. When the proposed procedure is used in conjunction with the per-unit or global problems it provides the correct (or conservative) test sizes and improves the power relative to the synthetic-control and difference-in-means approaches.
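As an illustration of the permutation logic, a minimal sketch of a moving-block (cyclic) variant follows. The statistic and the construction of the residual series under the sharp null are placeholders of our own; the exact procedure follows \citet{chernozhukov2021exact} and the supplementary materials.

```python
import numpy as np

def cyclic_permutation_pvalue(residuals, t_pre):
    """p-value for the sharp null of zero effects: compare the post-period
    residual magnitude against all cyclic time shifts of the residual series."""
    residuals = np.asarray(residuals, dtype=float)
    T = residuals.size
    stat = lambda r: np.sqrt(np.mean(r[t_pre:]**2))
    s_obs = stat(residuals)
    shifts = np.array([stat(np.roll(residuals, j)) for j in range(T)])
    return float(np.mean(shifts >= s_obs))
```

Under the sharp null the residuals are (approximately) exchangeable across time, so the observed post-period statistic should not be unusually large relative to its cyclic shifts.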
\paragraph{Computational complexity.} Solving the global and the per-unit problems in their mixed-integer formulations becomes computationally burdensome as the total number of units increases, especially if exact optimal solutions are required. In our simulations we were able to solve problems with $N=50$ units---a meaningful threshold corresponding to the number of states, a typical experimental unit in synthetic-control-type studies---on a single machine within hours. However, we prove that the underlying optimization problem is NP-hard (by a reduction from the partition problem; for the exact proof see the supplementary materials), and therefore exact solutions to substantially larger problems are unlikely to be attainable.
\section{Conclusion}\label{future}
In this paper we evaluate the role of optimal experimental design in panel-data settings where traditionally the average treatment effect on the treated might be estimated using randomized design and the difference-in-means estimator. We propose several design-and-analysis procedures that can be solved as mixed-integer programs. Our empirical evaluations show that these procedures lead to a substantial improvement in terms of the root-mean-square error relative to the randomized difference-in-means as well as the randomized synthetic-control approaches.
We discuss the roles that underlying assumptions about the nature of the treatment effects, the estimands of interest, and the computational considerations play when deciding which approach should be used. We propose a permutation-based inference procedure that is shown to deliver the correct test sizes in simulations. We also discuss practical considerations when applying this methodology as well as its current limitations.
\bibliographystyle{chicago}
\section{Introduction}
The robustness of symmetry-protected topological (SPT) phases to symmetry-respecting perturbations makes them promising candidates for quantum computing~\cite{Chiu}. Recently, the topological characteristics of these phases have been extended to the out-of-equilibrium regime~\cite{Cooper}, and more recent studies show that their topology might be robust even when the initial state breaks the protecting symmetries~\cite{Marks}. It is thus of both theoretical and practical interest to understand to what extent these phases remain robust during dynamical evolution.
In this paper, we focus on the following question: how robust is the entanglement measurement if we break the protecting symmetries not only of the initial state but also of the time-dependent Hamiltonian? Naively, one might expect the SPT state to adiabatically evolve to a trivial state, eliminating the entanglement. However, we will show that there can be a critical point where this adiabatic evolution fails, which leads to a relatively robust entanglement measurement.
We consider the Haldane phase in a more controllable cold-atom system, first considered by Dalla Torre {\it et al.} and termed the Haldane insulator (HI); its phase diagram has been well studied~\cite{Dalla, Rossini}. This HI phase shows non-trivial properties similar to those of the Haldane phase in spin-1 antiferromagnetic Heisenberg chains~\cite{Haldane, Affleck} (the non-local string order, a two-fold degeneracy of the entanglement spectrum, etc.), but with a revised protecting bond-centered (BC) inversion symmetry~\cite{Berg, Li, Pollmann, Deng, Ejima}. Recent theoretical progress has shed light on realizing this bosonic HI phase by providing sufficiently strong long-range interactions using Feshbach resonances~\cite{Xu}.
In the following, we consider the middle chain-breaking dynamics of the HI phase with broken BC inversion symmetry. This dynamical process was considered by Pollmann {\it et al.} in the spin-1 chain respecting the BC inversion symmetry; they find a lower bound ($\log2$) for the half-system von Neumann entanglement entropy when adiabatically breaking the middle bond. This entanglement measurement is due to the double degeneracy of the entanglement spectrum and is robust under symmetries even when they do not protect the edge modes and string order~\cite{Pollmann}. In this work, we focus on the entanglement entropy and show to what extent this measurement is robust. To our knowledge, this problem has not been fully understood before.
This paper is organized as follows. First, we introduce the middle chain-breaking experiment in an extended Bose-Hubbard system and consider its adiabatic properties. We show that the broken system has a vanishing energy gap in the deep HI regime, as a result of the degeneracy of two trivial product states that appear in this phase. Then we consider the non-adiabatic dynamics. We obtain the full numerical evolution of the entanglement entropy by integrating the time-dependent Schr\"odinger equation and find that the entanglement entropy is relatively robust in the deep HI regime. We further give an analytical prediction for the entanglement entropy in this regime by mapping the system to a two-level model. Finally, we give our conclusions and comment on the relevant experimental measurements.
\section{The middle-chain breaking and its adiabatic properties}
We consider a chain-breaking experiment on the extended Bose-Hubbard model $H=H_0+\gamma H'$ with
\begin{eqnarray}
H_0=&&-J\sum_{i\neq L/2}(b^\dagger_{i}b_{i+1}+\mathrm{H.c.})+\frac{U}{2}\sum_i n_i(n_i-1)\nonumber\\
&&+V\sum_{i\neq L/2}n_{i}n_{i+1},\nonumber\\
H'=&&-J(b^\dagger_{L/2}b_{L/2+1}+\mathrm{H.c.})+Vn_{L/2}n_{L/2+1}
\label{eq:eq1}
\end{eqnarray}
describing the broken and linking parts of the Hamiltonian. We consider a chain of length $L$. Here $J$ characterizes the nearest-neighbor hopping, and $U$, $V$ are the on-site and nearest-neighbor interaction strengths, with $n_i=b_i^\dagger b_i$ the bosonic particle-number operator at site $i$. The parameter $\gamma\in[0,1]$ characterizes the strength of the middle bond. At $\gamma=1$, we recover the conventional extended Bose-Hubbard model; at $\gamma=0$ the chain breaks into two length-$L/2$ subsystems.
In this work, we run an exact-diagonalization calculation with chain length $L=12$, a typical system size that can be realized in current cold-atom experiments~\cite{Rispoli}. To observe the HI phase, we constrain the maximum on-site particle number to 2, so that the state space can be mapped to an effective spin-1 system. Experimentally, this constraint can be achieved by appropriately including Feshbach resonances, as recently proposed in~\cite{Xu}. We consider a half-filling case with total particle number $N=L$, which is equivalent to the total spin sector $S_{tot}^z=0$ in the mapped spin system. We fix the edge particle numbers to $2$ and $0$ in the following calculation, which helps to break the ground-state degeneracy in the HI and density-wave phases and to reduce particle-hole excitations at the edges. In experiment this can be done by reducing or increasing the local potential at the left or right edge of the optical lattice. Such an edge configuration has also been considered in previous studies, and the HI is found sandwiched between the Mott-insulator and density-wave phases (for on-site interaction $U/J=4.0$ the HI phase appears in the range $2.1\lesssim V/J\lesssim 3.0$)~\cite{Dalla, Rossini, Ejima}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\textwidth]{fig1.pdf}
\caption{Chain-breaking experiment of an open Haldane insulator. (a) An Affleck-Kennedy-Lieb-Tasaki (AKLT) state description of the middle chain-breaking, where the black dots represent effective spin-1/2 particles. By adiabatically breaking the middle bond, the AKLT state evolves to one of the trivial states as the ground state. (b) The energy spectrum after breaking the middle chain for different nearest-neighbor interaction strengths $V$. The lowest level crossing happens at around $V/J\approx 2.4$. (c) Ground-state particle number of the left-half system, $N_L$, after breaking the middle chain for different $V$. The particle number shifts at the critical point $V/J\approx 2.4$, which is within the Haldane insulator regime (shaded region in d and e). The energy gaps after and before breaking the middle chain are shown in (d) and (e). The results are obtained for $U/J=4$ with chain lengths $L=60/120$ based on DMRG calculations using the ALPS package~\cite{White, Schollwock, alps}.}
\label{fig:fig1}
\end{center}
\end{figure}
We note that even though the above choice of edge configuration stabilizes the HI state, it in fact already breaks the BC inversion symmetry. Thus, even though the linking Hamiltonian $H'$ conserves BC inversion, the system lacks its protecting symmetry throughout the time evolution. By adiabatically breaking the middle bond, the HI will therefore evolve to a trivial product state (here by ``product state'' we mean a state composed of a left half-system state multiplying a right half-system state). However, since two product states compete, as illustrated in Fig.~\ref{fig:fig1}a, this adiabatic evolution is not valid if the two states are degenerate. We show the energy spectrum of the lowest $40$ eigenstates in Fig.~\ref{fig:fig1}b. We find a ground-state level crossing around $V/J\approx 2.4$, which is due to the competition between these two product states. This is consistent with the particle-number partition of the left-half system in Fig.~\ref{fig:fig1}c. Evidently, these two product states are adiabatically connected to the Mott-insulator and density-wave states, respectively, by changing the nearest-neighbor interaction $V/J$. In Fig.~\ref{fig:fig1}d and e we show the energy gap after and before breaking the middle chain. As the chain length increases, the level crossing extends over the full HI regime, as shown in Fig.~\ref{fig:fig1}d (shaded region), with the regime coinciding with that of Fig.~\ref{fig:fig1}e. These findings suggest that one can find regimes within the deep HI phase where the adiabatic evolution fails, thus protecting the non-trivial state even without the BC inversion symmetry. In the following, we focus on this regime and try to understand the non-adiabatic dynamics.
\section{Non-adiabatic dynamics of the entanglement entropy}
\subsection{Full time evolution}
We consider the full time evolution in this section to show whether the experiment is able to follow the above adiabatic process. We consider a dynamical breaking of the middle bond that takes the linear form $\gamma(t)=1-\Gamma t$, where $\Gamma$ characterizes the rate of the breaking. The time-dependent Hamiltonian can be written as $H(t)=H_0+(1-\Gamma t)H'$. The wave function can be expanded using the orthogonal eigenstates at $t=0$,
\begin{equation}
|\psi(t)\rangle=\sum_n c_n(t)e^{-i E_n^0 t}|\psi_n^0\rangle,
\label{eq:fullansatz}
\end{equation}
where $|\psi_n^0\rangle$ and $E_n^0$ are the eigenvector and eigenvalues of the system at $t=0$, i.e., $(H_0+H')|\psi_n^0\rangle=E_n^0|\psi_n^0\rangle$. Under Eq.~(\ref{eq:fullansatz}), the Schr\"odinger equation becomes a set of coupled equations about the coefficients $c_n(t)$,
\begin{equation}
i\dot{c}_n(t)=-\Gamma t \sum_m\langle n|H'|m \rangle c_m(t)e^{-i(E_m^0-E_n^0) t}.
\end{equation}
With the initial condition $c_n(0)=\delta_{n,0}$ the above equations give the full time evolution of the system.
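These coupled equations can be integrated with a standard ODE solver; a sketch for a generic few-level system follows. The eigenvalues and matrix elements below are toy values, not the $L=12$ Bose-Hubbard spectrum.

```python
import numpy as np
from scipy.integrate import solve_ivp

def evolve_coefficients(E0, Hp, Gamma, t_final):
    """Integrate i dc_n/dt = -Gamma*t * sum_m <n|H'|m> c_m exp(-i(E_m - E_n)t)."""
    E0, Hp = np.asarray(E0, float), np.asarray(Hp, complex)
    D = np.subtract.outer(E0, E0)                 # D[n, m] = E_n - E_m
    def rhs(t, c):
        return 1j * Gamma * t * (Hp * np.exp(1j * D * t)) @ c
    c0 = np.zeros(len(E0), complex); c0[0] = 1.0  # start in the ground state
    sol = solve_ivp(rhs, (0.0, t_final), c0, rtol=1e-9, atol=1e-11)
    return sol.y[:, -1]

# toy 3-level example; the norm of c is conserved because H(t) stays Hermitian
c = evolve_coefficients([0.0, 0.5, 1.3],
                        np.array([[0.2, 0.1, 0.0],
                                  [0.1, -0.1, 0.3],
                                  [0.0, 0.3, 0.1]]),
                        Gamma=0.1, t_final=10.0)
```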
The experimentally more relevant quantity is the half-system von Neumann entanglement entropy $S_{L/2}=-\mathrm{Tr}\left[\rho_l\log\rho_l\right]$, where $\rho_l=\mathrm{Tr}_r|\psi(t)\rangle\langle\psi(t)|$ is the reduced density matrix of the left-half system, with the trace taken over the right-half system. We show the final-state entanglement entropy $S_{L/2}$ as a function of the inverse breaking rate $1/\Gamma$ and the nearest-neighbor interaction $V$ in Fig.~\ref{fig:fig2}a. We pick three typical values $V/J=1.4, 2.4, 3.4$ at breaking rate $\Gamma/J=0.1$ and show the dynamics of $S_{L/2}$ in Fig.~\ref{fig:fig2}b. In the deep HI regime around $V/J\approx 2.4$, we find that the entanglement entropy is relatively robust and evolves to a finite value.
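Given a pure state $|\psi(t)\rangle$, $S_{L/2}$ can be computed from the Schmidt spectrum without constructing $\rho_l$ explicitly; a minimal sketch (the function name is ours):

```python
import numpy as np

def half_chain_entropy(psi, dim_left, dim_right):
    """Von Neumann entropy of the left half of a pure bipartite state."""
    m = np.asarray(psi).reshape(dim_left, dim_right)
    p = np.linalg.svd(m, compute_uv=False)**2   # eigenvalues of the reduced density matrix
    p = p[p > 1e-12]                            # drop numerical zeros before taking the log
    return float(-(p * np.log(p)).sum())
```

For a maximally entangled two-site Bell-like state this gives $\log 2$, the lower bound quoted from~\cite{Pollmann}, and it vanishes for a product state.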
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{fig2.png}
\caption{(a) The final-state half entanglement entropy $S_{L/2}$ at $t=1/\Gamma$ as a function of nearest-neighbor interaction $V$ and inverse breaking rate $1/\Gamma$ for on-site interaction $U/J=4$ and chain length $L=12$. (b) Full time evolution of $S_{L/2}$ for different nearest-neighbor interactions $V/J=1.4, 2.4, 3.4$ at breaking rate $\Gamma/J=0.1$. The dark points in (a) correspond to the data in (b) at $t=10/J$. The entanglement entropy is relatively robust in the deep Haldane insulator regime at around $V/J\approx 2.4$.}
\label{fig:fig2}
\end{center}
\end{figure}
\subsection{A two-level prediction}
The above full numerical calculations are time-consuming and offer limited physical insight. Since there are two nearly degenerate states of the broken Hamiltonian $H_0$ in the deep HI regime, we can describe the physics with a two-level model. We label these two states $|0\rangle$ and $|1\rangle$, and the Hamiltonian can then be projected onto this two-state subspace as
\begin{eqnarray}
H_{eff}(t)=\left[\begin{array}{cc}
\langle 0|H|0\rangle& \langle 0|H|1\rangle\\
\langle 1|H |0\rangle & \langle 1|H|1\rangle
\end{array}\right].
\end{eqnarray}
We define $\langle 1|H|1\rangle-\langle 0|H|0\rangle=h-2\beta (1-\Gamma t)$, $\langle 0|H|1\rangle=\alpha (1-\Gamma t)$, where $h=\langle 1|H_0|1\rangle-\langle 0|H_0|0\rangle$ is the gap of $H_0$ between these two states. Here $\alpha=\langle 0|H'|1\rangle$ and $\beta=(\langle 0|H'|0\rangle-\langle 1|H'|1\rangle)/2$ come from the hopping and nearest-neighbor interaction terms, respectively. In general, $\alpha$ could be a complex number, but we can always absorb its phase into the state $|0\rangle$ or $|1\rangle$ and make $\alpha$ real; in the following we therefore take $\alpha$ to be its modulus. The physics does not change if we shift to a rotating frame via the unitary transformation $U=e^{i\langle 0|H_0|0\rangle t+i\left(\langle 0|H'|0\rangle+\langle 1|H'|1\rangle\right)\left(t-\Gamma t^2/2\right)/2}$, after which the Hamiltonian takes the more symmetric form
\begin{eqnarray}
\tilde{H}_{eff}(t)=H_{eff}+i\dot{U}U^\dagger=\left[\begin{array}{cc}
\beta (1-\Gamma t) & \alpha (1-\Gamma t)\\
\alpha (1-\Gamma t) & -\beta (1-\Gamma t)+h
\end{array}\right].
\label{eq:twolevel}
\end{eqnarray}
Let us first look at the asymptotic behavior for $h/J\to0$ (which holds in the thermodynamic limit of the HI phase and also at the critical point of our finite system). Under this condition, the time evolution generated by the Hamiltonian Eq.~(\ref{eq:twolevel}) is readily solved, and the solution gives
\begin{eqnarray}
\psi_+(t)=\frac{e^{i\omega(t-\Gamma t^2/2)}}{\sqrt{\alpha^2+(\beta-\omega)^2}}\left(\begin{array}{c}
\beta-\omega\\
\alpha
\end{array}\right)
\label{eq:psi0}
\end{eqnarray}
for the initial state being the ground state, and
\begin{eqnarray}
\psi_-(t)=\frac{e^{-i\omega(t-\Gamma t^2/2)}}{\sqrt{\alpha^2+(\beta+\omega)^2}}\left(\begin{array}{c}
\beta+\omega\\
\alpha
\end{array}\right)
\label{eq:psi1}
\end{eqnarray}
for the initial state being the first excited state. Here $\omega=\sqrt{\alpha^2+\beta^2}$. Thus, apart from a dynamical phase, the probabilities of finding the system in the two states $|0\rangle$ and $|1\rangle$ are constant in time. This means that in the limit $h/J\to 0$ the reduced density matrix $\rho^{\pm}_l=\mathrm{Tr}_r|\psi_{\pm}(t)\rangle\langle\psi_{\pm}(t)|$ does not evolve with time, and thus the half-system entanglement entropy of each state $\psi_{\pm}(t)$ is constant and robust to the middle-bond breaking.
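This constancy is straightforward to verify numerically: for $h=0$ the Hamiltonians at different times commute, so an instantaneous eigenstate only acquires a phase during the ramp. The parameter values below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, Gamma = 0.8, 0.5, 0.05             # illustrative values, with h = 0

def H(t):
    s = 1.0 - Gamma * t
    return np.array([[beta * s, alpha * s],
                     [alpha * s, -beta * s]])

omega = np.hypot(alpha, beta)
psi0 = np.array([beta - omega, alpha], complex)  # eigenvector of H with eigenvalue -omega
psi0 /= np.linalg.norm(psi0)                     # the instantaneous ground state

sol = solve_ivp(lambda t, y: -1j * H(t) @ y, (0.0, 1.0 / Gamma), psi0,
                rtol=1e-10, atol=1e-12)
probs = np.abs(sol.y)**2                         # occupations of |0> and |1> over the ramp
```

The occupation probabilities stay at their initial values for the entire breaking process, in agreement with Eqs.~(\ref{eq:psi0}) and (\ref{eq:psi1}).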
For finite $h$, we consider the long-time, nearly adiabatic limit $\Gamma/J\to 0$. In this case the breaking rate is very slow, and the dynamics is mainly determined by times $t$ close to $1/\Gamma$. The physics is therefore governed by the region $\beta(1-\Gamma t)\ll h$, where we have the following Hamiltonian
\begin{eqnarray}
H_{ad}(t)=\left[\begin{array}{cc}
0 & \alpha (1-\Gamma t)\\
\alpha (1-\Gamma t) & h
\end{array}\right].
\label{eq:hamad}
\end{eqnarray}
To obtain the dynamics under $H_{ad}$, we expand the solution as a superposition of the orthogonal states Eqs.~(\ref{eq:psi0}) and (\ref{eq:psi1}) with $\beta=0$,
\begin{eqnarray}
\psi_{ad}(t)&&=c_+(t)\psi_+(t)+c_-(t)\psi_-(t) \nonumber\\
&&=c_+(t)\frac{e^{i\theta(t)}}{\sqrt{2}}
\left(\begin{array}{c}
-1\\
1
\end{array}\right)
+c_-(t)\frac{e^{-i\theta(t)}}{\sqrt{2}}
\left(\begin{array}{c}
1\\
1
\end{array}\right),
\nonumber
\end{eqnarray}
where $\theta(t)=\alpha(t-\Gamma t^2/2)$. The time-dependent Schr\"odinger equation then becomes
\begin{eqnarray}
i\dot{c}_+e^{i\theta}-i\dot{c}_-e^{-i\theta}&&=0,\\
i\dot{c}_+e^{i\theta}+i\dot{c}_-e^{-i\theta}&&=hc_+e^{i\theta}+hc_-e^{-i\theta},
\end{eqnarray}
which can be reduced to the following second-order differential equation
\begin{equation}
\ddot{c}_++i\left[h+2\alpha(1-\Gamma t)\right]\dot{c}_+-\alpha h(1-\Gamma t)c_+=0,
\label{eq:longtime}
\end{equation}
with
\begin{eqnarray}
c_-=e^{2i\theta}\left(\frac{2i}{h}\dot{c}_+-c_+\right).
\label{eq:cm}
\end{eqnarray}
The solution of Eq.~(\ref{eq:longtime}) contains an additional phase term. This can be seen by taking $t\to\pm\infty$, where Eq.~(\ref{eq:longtime}) becomes $2i\dot{c}_+-hc_+=0$ with the solution $c_+(t\to\pm\infty)=e^{-iht/2}$. This phase can be removed by defining
\begin{eqnarray}
c_+(t)=e^{-iht/2}\tilde{c}_+(t),
\label{eq:cp}
\end{eqnarray}
and Eq.~(\ref{eq:longtime}) can then be reduced to
\begin{equation}
\ddot{\tilde{c}}_++2i\alpha(1-\Gamma t)\dot{\tilde{c}}_++h^2\tilde{c}_+/4=0.
\end{equation}
By defining $z=-e^{i\pi/4}(1-\Gamma t)\sqrt{\alpha/\Gamma}$ and $\nu=h^2/(\Gamma \alpha)$, the above equation is equivalent to the Hermite differential equation
\begin{equation}
\ddot{\tilde{c}}_+(z)-2z\cdot\dot{\tilde{c}}_+(z)-i\nu/4\cdot\tilde{c}_+(z)=0,
\end{equation}
the general solution of which is readily written as a linear combination of Hermite polynomial $H_\lambda(z)$ and confluent hypergeometric function $M(\lambda_1,\lambda_2,z)$~\cite{math}
\begin{equation}
\tilde{c}_+(z)=a H_{-i\nu/8}\left(z\right)+b M\left(\frac{i\nu}{16},\frac{1}{2},z^2\right),
\end{equation}
where $a$ and $b$ are constants to be determined by the initial condition. Since we are interested in the nearly adiabatic side, we have the initial condition
\begin{eqnarray}
\left|\tilde{c}_+\left(z\left(t\to-\infty\right)\right)\right|=1, |\tilde{c}_-\left(z\left(t\to-\infty\right)\right)|=0.
\label{eq:initialcondition}
\end{eqnarray}
Considering the asymptotic behaviors $H_\lambda(z\left(t\to-\infty\right))=2^\lambda z^\lambda$ and $M(\lambda_1,\lambda_2,z\left(t\to-\infty\right))=\Gamma(\lambda_2)(e^zz^{\lambda_1-\lambda_2}/\Gamma(\lambda_1)+(-z)^{-\lambda_1}/\Gamma(\lambda_2-\lambda_1))$, and plugging these into the initial condition Eq.~(\ref{eq:initialcondition}), we obtain the coefficients
\begin{eqnarray}
a=-2^{\frac{i\nu}{8}}e^{-\frac{\pi\nu}{32}}, b=\frac{e^{-\frac{3\pi\nu}{32}}}{\sqrt{\pi}}\left(1+e^{\frac{\pi\nu}{8}}\right)\Gamma\left(\frac{1}{2}-\frac{i\nu}{16}\right).\nonumber
\end{eqnarray}
Then from Eq.~(\ref{eq:cm}) and (\ref{eq:cp}) we obtain the time-dependent solution of Hamiltonian $H_{ad}$.
Since the two nearly degenerate states $|0\rangle$ and $|1\rangle$ at $t=1/\Gamma$ are trivial product states and have different particle-number partitions between the left and right half systems, they contribute to the entanglement entropy only through their probabilities. From the above solution, we arrive at the final probability of occupying the lowest-energy state $|0\rangle$,
\begin{eqnarray}
P_0&&=\left|c_+(1/\Gamma)e^{i\theta}-c_-(1/\Gamma)e^{-i\theta}\right|^2/2\nonumber\\
&&=\frac{e^{-\frac{3\pi\nu}{16}}}{2\pi}\cdot\left|\frac{e^{\frac{\pi\nu}{16}}\pi}{\Gamma(\frac{1}{2}+\frac{i\nu}{16})}+\frac{4(-1)^{3/4}e^{\frac{\pi\nu}{16}}\pi}{\sqrt{\nu}\Gamma(\frac{i\nu}{16})}-\left(1+e^{\frac{\pi\nu}{8}}\right)\Gamma\left(\frac{1}{2}-\frac{i\nu}{16}\right)\right|^{2}.\nonumber
\end{eqnarray}
Then the half-chain entanglement entropy is $S_{L/2}=-P_0\log P_0-(1-P_0)\log(1-P_0)$. In Fig.~\ref{fig:fig3} we show $S_{L/2}$ as a function of $\nu$ for different nearest-neighbor interaction strengths $V$. Our analytical prediction, shown as the dashed red line, coincides well with the full numerical calculation in the deep HI regime around $V/J\approx2.4$, except near $\nu=0$, where the long-time nearly adiabatic limit fails. The solid square point labels the result for the sufficiently slow breaking rate $\Gamma/J=0.1$ as in Fig.~\ref{fig:fig2}b. For larger $\nu$ one needs an even slower breaking rate. This indicates that the entanglement measurement is relatively robust in this deep HI regime.
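As a sanity check, the closed-form $P_0$ and the resulting binary entropy can be evaluated numerically. The sketch below is ours, not code from the paper: the helper names `p0` and `half_chain_entropy` are illustrative, and we assume `scipy.special.gamma`, which accepts complex arguments, for the gamma functions.

```python
import math
import cmath
from scipy.special import gamma  # supports complex arguments


def p0(nu):
    """Final probability of the lowest state |0>, transcribed from the
    closed-form expression, as a function of nu = h^2 / (Gamma * alpha)."""
    pref = math.exp(-3 * math.pi * nu / 16) / (2 * math.pi)
    boost = math.exp(math.pi * nu / 16)
    phase = cmath.exp(3j * math.pi / 4)  # (-1)^{3/4}
    term = (boost * math.pi / gamma(0.5 + 1j * nu / 16)
            + 4 * phase * boost * math.pi / (math.sqrt(nu) * gamma(1j * nu / 16))
            - (1 + math.exp(math.pi * nu / 8)) * gamma(0.5 - 1j * nu / 16))
    return pref * abs(term) ** 2


def half_chain_entropy(p):
    """Binary entropy S_{L/2} = -p log p - (1 - p) log(1 - p)."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)
```

In the sudden limit $\nu\to 0$ the second term of $P_0$ vanishes and the expression reduces to $1/2$, so $S_{L/2}\to\log 2$, consistent with the dotted line in Fig.~\ref{fig:fig3}.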
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\textwidth]{fig3.pdf}
\caption{The half entanglement entropy $S_{L/2}$ as a function of $\nu=h^2/(\Gamma\alpha)$. The data points are from full numerical calculations and the dashed red line is our two-level prediction. The dotted black line labels $S_{L/2}=\log 2$. These results are for $U/J=4$ and chain length $L=12$. For each $V$ we first calculate the gap $h$ and coupling parameter $\alpha$, then choose different breaking rates $\Gamma$. For fixed $V$, smaller $\nu$ corresponds to larger $\Gamma$. Around the critical point $V/J\approx 2.4$ the non-adiabatic dynamics is well described by our two-level prediction, with the solid square point labeling the result for $\Gamma/J=0.1$ as in Fig.~\ref{fig:fig2}b.}
\label{fig:fig3}
\end{center}
\end{figure}
\section{Conclusions}
We have studied the middle chain-breaking dynamics of the HI with broken BC inversion symmetry. We find a critical point within the deep HI regime, where the adiabatic evolution fails as a result of an energy level crossing. We show how to understand this non-adiabatic dynamics using a simple two-level model. We give an analytical prediction for the entanglement entropy and find that it is relatively robust in this regime. We note that if we rotate the state space to $\left(|0\rangle\pm|1\rangle\right)/\sqrt{2}$ in our two-level prediction, the Hamiltonian near the adiabatic limit $H_{ad}(t)$ reminds us of that of a Landau-Zener system, whose infinite-time diabatic transition probability is readily solved~\cite{Landau, Zener}. In this paper we provide an alternative analytical solution to the full time evolution. This result is quite general, and should find applications in similar Landau-Zener-like systems. Our results suggest the HI as an ideal system to study the non-equilibrium dynamics of SPT phases with broken symmetries.
The entanglement entropy has been considered theoretically~\cite{Abanin, Daley}, and has recently been realized in cold-atom experiments, where a combination of the single-site-resolved microscope and many-body quantum interference is used to directly measure the second-order R\'enyi entropy $S_2=-\log\mathrm{Tr}(\rho_A^2)$~\cite{Islam, Kaufman}. This R\'enyi entropy $S_2$ provides a lower bound for the von Neumann entropy considered here and exhibits similar behavior. The generalization of our prediction to the second-order R\'enyi entropy based on the two-level model is straightforward. We also note that since the number partitions are distinct for the two lowest nearly degenerate levels in the deep HI regime, one can directly obtain the von Neumann entropy by measuring the number entanglement $S_n$ of the half system~\cite{Lukin}.
\ack
This research is supported by Fundamental Research Funds for the Central Universities (No. FRF-TP-19-013A3).
\section*{References}
\section{Introduction}
\begin{comment}
Given an input content image ``Lunch atop a Skyscraper’’ (Fig 1) and a reference style image ``Sunflowers’’ from Vincent van Gogh (Fig 1), neural style transfer can create a novel image that ``paints’’ the content of the iconic photography in the style of van Gogh. Despite a high quality stylized image, the synthesis is limited to the same viewpoint of the content image. What if van Gogh had happened upon a slightly different view? We can vividly imagine such a ``painting’’, yet none of the existing methods can render a stylized image of a novel view. Such capacity will provide drastically more immersive visual experience for existing internet images when displayed with parallax effect, and support the application of interactive browsing of 3D photos on mobile and AR devices.
\end{comment}
Given an input content image and a reference style image, neural style transfer~\cite{gatys2016image,johnson2016perceptual,chen2017stylebank,huang2017arbitrary,li2017universal,ghiasi2017exploring,gu2018arbitrary,sheng2018avatar,park2019arbitrary,liu2021adaattn} creates a novel image that ``paints'' the content with the style. Despite producing a high-quality stylized image, the result is limited to the viewpoint of the content image. What if we could render stylized images from different views? See Fig.\ \ref{fig:teaser} for examples. When displayed with parallax, this capability provides a drastically more immersive visual experience for 2D images, and supports interactive browsing of 3D photos on mobile and AR/VR devices.
In this paper, we address the new task of generating stylized images of novel views {\it from a single input image and an arbitrary reference style image}, as illustrated in Fig.~\ref{fig:teaser}. We refer to this task as 3D photo stylization --- a marriage between style transfer and novel view synthesis.
3D photo stylization has several major technical barriers. As observed in~\cite{huang2021stylenvs}, directly combining existing methods of style transfer and novel view synthesis yields blurry or inconsistent stylized images, even with dense 3D geometry obtained from structure from motion and multi-view stereo. This challenge is further manifested with a single content image as the input, where a method must resort to monocular depth estimation with incomplete and noisy 3D geometry, leading to holes and artifacts when synthesizing stylized images of novel views. In addition, training deep models for this task requires a large-scale dataset of diverse scenes with dense geometry annotation that is currently lacking.
To bridge this gap, we draw inspiration from one-shot 3D photography~\cite{niklaus20193d,kopf2020one,shih20203d}, and adopt a point cloud based scene representation~\cite{niklaus20193d,wiles2020synsin,huang2021stylenvs}. Our key innovation is a deep model that learns 3D geometry-aware features on the point cloud {\it without using 2D image features from the content image} for rendering novel views with a consistent style. Our method accounts for the input noise from depth maps, and jointly models style transfer and view synthesis. Moreover, we propose a novel training scheme that enables learning our model using standard image datasets (\eg, MS-COCO~\cite{lin2014microsoft}), without the need of multi-view images or ground-truth depth maps.
Our contributions are threefold. {\bf (1)} We present the first method to address the new task of 3D photo stylization --- synthesizing stylized novel views from a single content image with arbitrary styles. {\bf (2)} Unlike previous methods, our method learns geometry-aware features on a point cloud without using 2D content image features and from only 2D image datasets. {\bf (3)} Our method demonstrates superior qualitative and quantitative results, and supports several interesting applications.
\begin{comment}
\squishlist
\item We present the first method to address the new task of 3D photo stylization --- synthesizing stylized novel views from a single content image given arbitrary styles.
\item Unlike previous style transfer methods, our method learns 3D geometry-aware features on a point cloud without using 2D image features from the content image.
\item We propose a novel training scheme that leverages existing view synthesis methods to generate training samples, and thus enables learning using standard image datasets such as MS-COCO.
\item Our method demonstrates superior qualitative and quantitative results on style quality, view consistency and overall synthesis quality.
\item We extend our method to multiview stylization, and showcase applications on interactive stylization and 3D stylization for historical photos.
\squishend
\end{comment}
\section{Related work}
\label{sec:related}
\noindent {\bf Neural Style Transfer}.
Neural style transfer has received considerable attention. Image style transfer~\cite{gatys2015neural,gatys2016image} renders the content of one image in the style of another. Video style transfer~\cite{ruder2018artistic} injects a style to a sequence of video frames to produce temporally consistent stylization, often by enforcing smoothness constraint on optical flow~\cite{huang2017real,chen2017coherent,ruder2018artistic,wang2020consistent} or in the feature space~\cite{deng2020arbitrary,liu2021adaattn}. Our method faces the same challenge as video style transfer; that the style must be consistent across views. However, our task of 3D photo stylization is more challenging, as it requires the synthesis of novel views and a consistent style among all views.
Technically, early methods formulate style transfer as a slow iterative optimization process~\cite{gatys2015neural,gatys2016image}. Fast feed-forward models later perform stylization in a single forward pass, but can only accommodate one~\cite{johnson2016perceptual,ulyanov2016texture} or a few styles~\cite{dumoulin2016learned,chen2017stylebank}. Most relevant to our work are methods that allow for the transfer of {\it arbitrary} styles while retaining the efficiency of a feed-forward model~\cite{chen2016fast,huang2017arbitrary,li2017universal}.
Our style transfer module builds on Liu~\etal~\cite{liu2021adaattn}, extending an attention-based method to support arbitrary 3D stylization.
\noindent {\bf Novel View Synthesis from a Single Image}.
Novel view synthesis from a single image, also known as one-shot 3D photography, has seen recent progress thanks to deep learning.
Existing approaches can be broadly classified into end-to-end models~\cite{tulsiani2018layer,chen2019monocular,wiles2020synsin,tucker2020single,yu2021pixelnerf,rockwell2021pixelsynth,li2021mine,hu2021worldsheet} and modular systems~\cite{niklaus20193d,kopf2020one,shih20203d,jampani2021slide}. End-to-end methods
often fail to recover accurate scene geometry and have difficulty generalizing beyond the scene categories seen during training. Hence, our method builds on modular systems.
Modular systems for one-shot 3D photography combine depth estimation~\cite{ranftl2020midas,wei2021leres,ranftl2021dpt} and inpainting models~\cite{liu2018image}, and have demonstrated strong results for in-the-wild images. Niklaus~\etal~\cite{niklaus20193d} maintains and rasterizes a point cloud representation of the scene to synthesize 3D Ken Burns effect. Later methods~\cite{kopf2020one,shih20203d} improve on synthesis quality via local content and depth inpainting on a layered depth image (LDI) of the scene. Jampani~\etal~\cite{jampani2021slide} further introduces soft scene layering to better preserve appearance details. Our work is closely related to Shih~\etal~\cite{shih20203d}. We extend their LDI inpainting method for point cloud, and leverage their system to generate ``pseudo'' views during training. Our method also uses the differentiable rasterizer from~\cite{niklaus20193d}.
\noindent {\bf 3D Stylization}. There has been a growing interest in the stylization of 3D content for creative shape editing~\cite{cao2020psnet,yin20213DStyleNet}, visual effect simulation~\cite{guo2021volumetric}, stereoscopic image editing~\cite{chen2018stereoscopic,gong2018neural} and novel view synthesis~\cite{huang2021stylenvs,chiang2021style3d}. Our method falls in this category and is most relevant to stylized novel view synthesis~\cite{huang2021stylenvs,chiang2021style3d}. The key difference is that our method generates stylized novel views from a single image, while previous methods need hundreds of calibrated views as input. Another difference is that our model learns 3D geometry aware features on a point cloud. In contrast, Huang~\etal~\cite{huang2021stylenvs} back-projects 2D image features to 3D space without accounting for scene geometry. While their point aggregation module enables {\it post hoc} processing of image-derived features, the point features remain 2D, leading to visual artifacts and inadequate stylization in renderings. Our work is also related to point cloud stylization, \eg, PSNet~\cite{cao2020psnet} and 3DStyleNet~\cite{yin20213DStyleNet}. Both our method and~\cite{cao2020psnet,yin20213DStyleNet} use a point cloud as the representation. The difference is that the point cloud is an enabling device for stylization and view synthesis in our method, not the end product as in~\cite{cao2020psnet,yin20213DStyleNet}.
\begin{figure*}
\centering
\resizebox{0.9\textwidth}{!}{
\includegraphics{latex/figures/workflow_alt.pdf}}\vspace{-1em}
\caption{{\bf Method overview. } Central to our method is a point cloud based scene representation that enables geometry-aware feature learning, attention-based feature stylization and consistent stylized renderings across views. Specifically, we first construct an RGB point cloud from the content image and its estimated depth map. Content features are then extracted directly from the point cloud and stylized given an image of the reference style. Finally, the stylized point features are rendered to novel views and decoded into stylized images.}
\label{fig:workflow}
\vspace{-0.2in}
\end{figure*}
\noindent {\bf Deep Models for Point Cloud Processing}.
Many deep models have been developed for point cloud processing.
Among the popular architectures are models of set based \cite{qi2017pointnet,qi2017pointnet++},
graph convolution based \cite{wang2019dgcnn,li2021deepgcns_pami} and point convolution based \cite{hua2018pointwise,thomas2019KPConv}.
Our model extends a graph based model~\cite{wang2019dgcnn} to handle dense point clouds
(one million points) for high quality stylization.
\section{3D Photo Stylization}
\label{sec:inference}
Given {\it a single input content image} and {\it an arbitrary style image}, the goal of 3D photo stylization is to generate stylized novel views of the content image.
The key to our method is learning 3D geometry aware content features directly from a point cloud representation of the scene, enabling high-quality stylization that is consistent across views.
In this section, we describe our workflow at {\it inference} time.
\noindent {\bf Method Overview}.
Fig.\ \ref{fig:workflow} presents an overview of our method. Our method starts by back-projecting the input content image into an RGB point cloud using its estimated depth map. The point cloud is further ``inpainted'' to cover disoccluded parts of the scene and then ``normalized'' (Section~\ref{subsec:construct}). An efficient graph convolutional network then processes the point cloud to extract 3D geometry aware, point-wise features tailored for 3D stylization (Section~\ref{subsec:encode}). A style transfer module is subsequently adapted to modulate those point-wise features using the input style image (Section~\ref{subsec:stylize}). Finally, a differentiable rasterizer projects the featurized points to novel views for the synthesis of stylized images that are consistent across views (Section~\ref{subsec:render}).
\subsection{Point Cloud Construction}
\label{subsec:construct}
Our method starts by lifting the content image into an RGB point cloud, then normalizes the point cloud to account for scale ambiguity and uneven point density.
\noindent {\bf Depth Estimation and Synthesis of Hidden Geometry}.
Our method first estimates a dense depth map using an off-the-shelf deep model for monocular depth estimation (LeReS~\cite{wei2021leres}). A key challenge for single-image novel view synthesis is the occlusion in the scene. A dense depth map might expose many ``holes'' when projected to a different view. Inpainting the occluded geometry is thus critical for view synthesis. To this end, we further employ the method of Shih~\etal~\cite{shih20203d} for the synthesis of occluded geometry on a layered depth image (LDI). Thanks to the duality between point cloud and LDI, we map the LDI pixels to an RGB point cloud via perspective back-projection.
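For concreteness, the perspective back-projection of a depth map into a camera-space point cloud can be sketched as follows. This is a minimal pinhole-camera sketch, not our exact implementation; the intrinsics `fx, fy, cx, cy` are assumed inputs.

```python
import numpy as np


def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map (H, W) to camera-space 3D points (H*W, 3)
    via the pinhole model: x = (u - cx) / fx * z, y = (v - cy) / fy * z."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

Applying the same mapping to every LDI layer (each layer being a partial depth map) yields the inpainted point cloud.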
\noindent {\bf Point Cloud Normalization}. In light of the scale ambiguity and uneven point density characteristic of image-derived point clouds, we transform them into Normalized Device Coordinates (NDC)~\cite{marschner2021fundamentals} before further processing.
The resulting points fall within the $[-1, 1]$ cube, with density adjusted to account for perspective. As shown in Fig.~\ref{fig:normalization}, this simple procedure is crucial for our method to generalize across scene categories, and allows us to switch to a different depth estimator without re-training our model.
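One simple variant of this normalization can be sketched as below. This is an illustrative sketch under assumed conventions (a symmetric frustum parameterized by a half field-of-view and near/far planes, with depth mapped linearly), not our exact mapping.

```python
import numpy as np


def to_ndc(points, half_fov, near, far):
    """Map camera-space points (N, 3) into the [-1, 1]^3 NDC cube.
    x, y undergo a perspective divide by z * tan(half_fov);
    z is mapped linearly from [near, far] to [-1, 1]."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    t = np.tan(half_fov)
    xn = x / (z * t)
    yn = y / (z * t)
    zn = 2.0 * (z - near) / (far - near) - 1.0
    return np.stack([xn, yn, zn], axis=-1)
```

The perspective divide is what re-distributes point density: distant points, sparse in metric space, are compressed toward the cube center.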
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{latex/figures/normalization.pdf}\vspace{-0.5em}
\caption{{\bf Effect of point cloud normalization. } Model without normalization (-) performs poorly due to scale ambiguity in depth estimation and non-uniformity in point distribution. In contrast, model with normalization (+) captures fine appearance detail and produces strong stylization irrespective of depth estimator in use.}
\label{fig:normalization}
\vspace{-1.5em}
\end{figure}
\subsection{Encoding Features on Point Cloud}
\label{subsec:encode}
Our next step is to learn features amenable to stylization.
While virtually all existing style transfer algorithms make use of ImageNet pre-trained VGG features, we found that associating 3D points with back-projected VGG features (such as in Huang~\etal~\cite{huang2021stylenvs}) is sub-optimal for stylized novel view synthesis, leading to geometric distortion and structural artifacts as shown in our ablation. We argue that features from a network pre-trained on 2D images are inadequate for describing the intricacy of 3D geometry. This leads us to design an efficient graph convolutional network (GCN) that learns geometry aware features directly from an RGB point cloud, as opposed to using 2D image features.
\begin{figure*}
\centering \vspace{-0.4em}
\resizebox{0.9\textwidth}{!}{
\includegraphics{latex/figures/technical.pdf}}\vspace{-1em}
\caption{{\bf Components of our deep model. } Our model includes three modules --- a point cloud encoder, a stylizer and a neural renderer. The encoder applies MRConvs~\cite{li2021deepgcns_pami} along with farthest point sampling to embed and sub-sample the input RGB point cloud. The stylizer computes attention between the embedded content and style features, and uses attention-weighted affine transformation to modulate the content features for stylization. The neural render consists of a rasterizer that anti-aliases the modulated point features and projects them to novel views, and a U-Net~\cite{ronneberger2015unet} that refines the resulting 2D feature maps and decodes them into stylized images.}
\label{fig:technical}
\vspace{-1.5em}
\end{figure*}
\noindent \textbf{Efficient GCN}. One common drawback for GCN architectures lies in their scalability.
Existing GCNs are designed for point clouds with a few thousand points~\cite{li2021deepgcns_pami}, whereas an image at 1K resolution results in {\it one million} points after inpainting.
To bridge this gap, we propose a highly efficient GCN encoder by drawing strength from multiple point-based network architectures.
Our GCN encoder adopts the max-relative convolution~\cite{li2021deepgcns_pami} for its computational and memory efficiency. To further improve the efficiency, we replace the expensive dynamic k-NN graphs with radius-based ball queries~\cite{qi2017pointnet++} for point aggregation. Moreover, we follow the hierarchical design of the VGG network by repeatedly sub-sampling the point cloud via farthest point sampling, as opposed to maintaining the full set of points throughout the model~\cite{li2021deepgcns_pami}.
We illustrate our encoder design in Fig.\ \ref{fig:technical}. The output of our encoder is a sub-sampled, featurized point cloud.
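For reference, farthest point sampling, the sub-sampling routine used above, admits a simple greedy implementation. The NumPy sketch below (seeded at the first point) is illustrative; practical systems batch this on GPU.

```python
import numpy as np


def farthest_point_sampling(points, k):
    """Greedy farthest point sampling over (N, 3) points: returns the
    indices of k well-spread points, seeded at index 0."""
    n = points.shape[0]
    idx = np.zeros(k, dtype=np.int64)
    dist = np.full(n, np.inf)  # distance to the nearest selected point
    for i in range(1, k):
        d = np.sum((points - points[idx[i - 1]]) ** 2, axis=1)
        dist = np.minimum(dist, d)
        idx[i] = int(np.argmax(dist))  # pick the point farthest from all selected
    return idx
```

Each round costs O(N), so sampling k centers is O(Nk); unlike uniform sampling, near-duplicate points are discarded first, preserving scene coverage.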
\subsection{Stylizing the Point Cloud}
\label{subsec:stylize}
Going further, our model injects style into the content features. The technical barrier here is the misalignment of content and style features, as the former are defined on a 3D point cloud while the latter (from a pre-trained VGG network) lie in a 2D plane. To address this discrepancy, we make use of learned feature mappings and Adaptive Attention Normalization (AdaAttN)~\cite{liu2021adaattn} to match and combine the content and style features. Let $F_{c}$ be the point-wise content features and $F_{s}$ the style features on a 2D grid. Our style transfer operation is given by
\begin{equation}
\small
F_{cs} = \psi(\mbox{AdaAttN}(\phi(F_{c}), F_{s})),
\end{equation}
where $\phi$ and $\psi$, implemented as point-wise multi-layer perceptrons (MLPs), are learned mappings between the content and style feature spaces, and $\mbox{AdaAttN}$ is the attention-weighted adaptive instance normalization from~\cite{liu2021adaattn}. $\mbox{AdaAttN}$ computes attention between every content feature (a point) and each style feature (a pixel), and uses the attention map to modulate the affine parameters within the instance normalization applied on content features. As a result, $F_{cs}$ incorporates both content and style, and will be further used to render stylized images.
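A minimal sketch of this attention-weighted modulation is given below. It is illustrative only: it folds $\phi$ and $\psi$ away (assuming pre-aligned feature dimensions) and omits the multi-scale details of AdaAttN~\cite{liu2021adaattn}.

```python
import numpy as np


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def adaattn_like(fc, fs):
    """Attention-weighted adaptive instance normalization (sketch).
    fc: (N, C) content point features; fs: (M, C) style pixel features.
    Each point attends over style pixels, then is rescaled and shifted by
    the attention-weighted std/mean of the style features."""
    def inst_norm(f):
        return (f - f.mean(0)) / (f.std(0) + 1e-5)

    attn = softmax(inst_norm(fc) @ inst_norm(fs).T / np.sqrt(fc.shape[1]))  # (N, M)
    mean = attn @ fs                        # per-point style mean
    var = attn @ (fs * fs) - mean ** 2      # per-point style variance
    std = np.sqrt(np.maximum(var, 0.0))
    return std * inst_norm(fc) + mean       # modulated content features
```

Because the affine parameters are computed per point rather than globally, semantically similar regions of content and style are matched before modulation.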
\subsection{Stylized Neural Rendering}
\label{subsec:render}
Our final step is to render stylized point features $F_{cs}$ into stylized images from specified viewpoints.
As illustrated in Fig~\ref{fig:technical}, this is accomplished by (1) projecting point features to an image plane given camera pose and intrinsics; and (2) decoding the projected features into an image using a 2D convolutional network.
\noindent {\bf Feature Rasterization}.
Our rasterizer follows Niklaus~\etal~\cite{niklaus20193d}, and projects the point cloud features $F_{cs}$ into a single-view 2D feature map $F_{2d}$. There is one important difference: we up-sample $F_{cs}$ using inverse distance weighted interpolation~\cite{qi2017pointnet++} {\it before rasterization}. This is reminiscent of super-sampling --- a classical anti-aliasing technique in graphics. In doing so, we grant more flexibility for decoding the projected features into stylized images.
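The inverse distance weighted interpolation from~\cite{qi2017pointnet++} used for this up-sampling can be sketched as below. This is a brute-force NumPy sketch; the defaults `k=3` and `eps=1e-8` are our assumptions.

```python
import numpy as np


def idw_upsample(query, points, feats, k=3, eps=1e-8):
    """Inverse-distance-weighted feature interpolation: each query point
    receives a weighted average of its k nearest points' features.
    query: (Q, 3); points: (N, 3); feats: (N, C). Returns (Q, C)."""
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)  # (Q, N)
    nn = np.argsort(d, axis=1)[:, :k]                                    # (Q, k)
    w = 1.0 / (np.take_along_axis(d, nn, axis=1) + eps)                  # closer -> heavier
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum('qk,qkc->qc', w, feats[nn])
```

A query coinciding with a source point essentially recovers that point's feature, so the interpolation densifies the feature cloud without smearing it.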
\noindent {\bf Image Decoding}.
Our decoder further maps the 2D feature map $F_{2d}$ to a stylized RGB image at input resolution. The decoder is realized using a 2D convolutional network, following the architecture of U-Net~\cite{ronneberger2015unet}, with transposed convolutions at the entry of each stage for up-sampling.
\section{Learning from 2D Images}
\label{sec:train}
We now present our training scheme. Our model is trained using 2D images following a two-stage approach.
\begin{figure*}
\centering \vspace{-1em}
\includegraphics[width=0.9\linewidth]{latex/figures/baseline1_1.pdf}\vspace{-0.8em}
\caption{{\bf Depth estimation fails on stylized images. } One alternative to 3D photo stylization is to combine stylized content image and its depth estimate. Unfortunately, strong depth estimators such as DPT~\cite{ranftl2021dpt} and LeReS~\cite{wei2021leres} fail on image style transfer output from AdaIN~\cite{huang2017real}, LST~\cite{li2019learning} and AdaAttN~\cite{liu2021adaattn} because stylized images do not follow natural image statistics.}\vspace{-1.2em}
\label{fig:baseline1_1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{latex/figures/baseline1_2.pdf}\vspace{-1em}
\caption{{\bf 3D photo of a stylized content image manifests ubiquitous visual artifacts. } Another alternative to stylizing 3D photos is to combine stylized content image with depth estimate from the {\it original} content image. While depth estimation is unaffected, the style effect bleeds through depth discontinuities. 3D photo inpainting thus fails, with ubiquitous visual artifacts (\textcolor{red}{{\bf red}} arrows) in novel view renderings.}\vspace{-1.5em}
\label{fig:baseline1_2}
\end{figure*}
\noindent \textbf{Generating Multi-view Images for Training}. Training our model requires images from multiple views of the same scene. Unfortunately, a large-scale multi-view image dataset with a diverse set of scenes is lacking. To bridge this gap, we propose to learn from the results of existing one-shot 3D photography methods.
Concretely, we use 3DPhoto~\cite{shih20203d} to convert images from a standard dataset (MS-COCO) into high-quality 3D meshes, from which we synthesize {\it arbitrary} pseudo target views to train our model. In doing so, our model learns from a diverse collection of scenes present in MS-COCO. Learning from synthesized images inevitably inherits biases present in the 3DPhoto results, in exchange for dataset diversity. Through our experiments, we show that our model generalizes well across a large set of in-the-wild images at inference time.
\subsection{Two-Stage Training}
The training of our model is divided into a {\it view synthesis} stage where the model learns 3D geometry aware features for novel view synthesis, and a {\it stylization} stage where the model is further trained for novel view stylization.
\noindent {\bf Enforcing Multi-view Consistency}.
A key technical contribution of our work is a multi-view consistency loss. Building a point cloud representation of the input content image allows us to impose an additional constraint on {\it pixel values} of the rendered images.\footnote{While the sharing of a featurized point cloud entails multi-view consistency of rasterized {\it feature maps}, the features are subject to a learnable decoding process, which can introduce inconsistency.} The key idea is that a scene point $\mathbf{p}$ in the point cloud $\mathbf{P}$ should produce the same pixel color in the views to which it is visible. To this end, we define our consistency loss as
\begin{equation}
\small
\mathcal{L}_{cns} = \sum_{\mathbf{p} \in \mathbf{P}}\sum_{i,j \in \mathbf{V}} \mathcal{V}(\mathbf{p}; i, j) \cdot \|\mathbf{I}_{i}(\pi_{i}(\mathbf{p})) - \mathbf{I}_{j}(\pi_{j}(\mathbf{p}))\|_{1},
\end{equation}
where $\mathbf{V}$ is the set of sampled views, $\mathbf{I}_i$ the rendered image from view $i$, $\pi_{i}(\cdot)$ the projection to view $i$, and $\mathcal{V}(\mathbf{p}; \cdot, \cdot)$ a visibility function which evaluates to 1 if $\mathbf{p}$ is visible in both views and 0 otherwise. Computing the loss incurs minimal overhead since the evaluation of $\pi$ and $\mathcal{V}$ is part of rasterization. As evidenced by our ablation study, our proposed loss significantly improves the consistency of stylized renderings.
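A direct, unbatched sketch of $\mathcal{L}_{cns}$ is given below; it assumes the per-view integer pixel coordinates and visibility masks produced by the rasterizer are given as arrays, which is an interface of our choosing for illustration.

```python
import numpy as np


def consistency_loss(images, proj_uv, visible):
    """Multi-view consistency loss: for every scene point and every view pair
    in which the point is visible, penalize the L1 difference of its rendered
    pixel colors. images: (V, H, W, 3) float; proj_uv: (V, P, 2) integer pixel
    coordinates per point per view; visible: (V, P) boolean mask."""
    n_views = images.shape[0]
    loss = 0.0
    for i in range(n_views):
        for j in range(i + 1, n_views):
            both = visible[i] & visible[j]  # points seen by both views
            ci = images[i][proj_uv[i, both, 1], proj_uv[i, both, 0]]
            cj = images[j][proj_uv[j, both, 1], proj_uv[j, both, 0]]
            loss += np.abs(ci - cj).sum()
    return loss
```

Two renders that agree on every mutually visible point incur zero loss; any view-dependent color drift introduced by the decoder is penalized directly.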
\noindent {\bf View Synthesis Stage}.
We first train our model for view synthesis, a surrogate task that drives the learning of geometry aware content features. Given an input image, we randomly sample novel views of the scene and ask the model to reconstruct them. To train our model, we make use of an L1 loss $\mathcal{L}_{rgb}$ defined on pixel values, a VGG perceptual loss $\mathcal{L}_{feat}$ defined on network features, and our multi-view consistency loss $\mathcal{L}_{cns}$. The overall loss function is
\begin{equation}
\small
\mathcal{L}_{view} = \mathcal{L}_{rgb} + \mathcal{L}_{feat} + \mathcal{L}_{cns}.
\end{equation}
\noindent {\bf Stylization Stage}.
Our model learns to stylize novel views in the second stage. We freeze the encoder for content feature extraction, train the stylizer, and fine-tune the neural renderer. This is done by randomly sampling novel views of the scene and style images from WikiArt~\cite{nichol2016wikiart}, and training our model using
\begin{equation}
\small
\mathcal{L}_{style} = \mathcal{L}_{adaattn} + \mathcal{L}_{cns},
\end{equation}
where $\mathcal{L}_{adaattn}$ is the same AdaAttN loss from \cite{liu2021adaattn} and $\mathcal{L}_{cns}$ is again our multi-view consistency loss.
\noindent {\bf Training Details}.
For view synthesis, we train for 20K iterations (2 epochs) on MS-COCO with a batch size of 8 using Adam~\cite{kingma2015adam} and set the learning rate to 1e-4. We apply the same training schedule for stylization.
\begin{figure*}
\centering \vspace{-1em}
\resizebox{0.95\textwidth}{!}{
\includegraphics{latex/figures/baseline2.pdf}}\vspace{-1.2em}
\caption{{\bf Stylizing rendered images from a 3D photo introduces inconsistency in stylization. } A third baseline is to na\"ively build a 3D photo from the raw content image, then stylize its renderings either one view at a time (\eg, using LST~\cite{li2019learning} or AdaAttN~\cite{liu2021adaattn}) or collectively as a video (\eg, using ReReVST~\cite{wang2020consistent} or the video variant of AdaAttN). Despite stronger results than the other two baselines, the stylization is agnostic to the scene geometry shared by all views and thus produces inconsistent results (\textcolor{yellow}{{\bf yellow}} arrows).}\vspace{-1.5em}
\label{fig:baseline2}
\end{figure*}
\section{Experiments}
\label{sec:result}
We now present the main results of our paper and leave additional results to the supplementary material.
\subsection{Qualitative results}
\label{subsec:qualitative}
By permuting the steps of (1) depth estimation, (2) inpainting, (3) rendering and (4) style transfer, one could imagine two alternative workflows that combine existing models for 3D photo stylization. To compare them with our method, we instantiate these baselines by combining six different style transfer methods (AdaIN~\cite{huang2017arbitrary}, LST~\cite{li2019learning} and AdaAttN~\cite{liu2021adaattn} for image style transfer, and ReReVST~\cite{wang2020consistent}, MCC~\cite{deng2020arbitrary} and the video variant of AdaAttN for video style transfer) with DPT~\cite{ranftl2021dpt} for depth estimation and 3DPhoto~\cite{shih20203d} for inpainting and rendering. Results are created using images from Unsplash~\cite{unsplash2020}, a free-licensed, professional-grade dataset of in-the-wild images.
(1) {\it Style $\rightarrow$ Depth $\rightarrow$ Inpainting $\rightarrow$ Rendering}:
While geometric consistency is granted, depth estimation fails catastrophically on stylized images (Figure~\ref{fig:baseline1_1}). One may alternatively back-project a stylized image using depth estimation from the raw input. Despite better geometry, inpainting remains error-prone due to color bleed-through and shift in color distribution caused by stylization (Figure~\ref{fig:baseline1_2}).
(2) {\it Depth $\rightarrow$ Inpainting $\rightarrow$ Rendering $\rightarrow$ Style}:
This baseline often produces inconsistent stylization across views (Figure~\ref{fig:baseline2}), as each view's style is independent and agnostic to the underlying scene geometry.
In contrast, our method generates high-quality stylized renderings free of visual artifacts and inconsistency. The second baseline produces mild inconsistency under the small viewpoint changes typical of 3D photo browsing; this is more benign than the visual artifacts produced by the first baseline. We further compare our method against the second baseline via quantitative experiments and a user study.
\subsection{Quantitative results}
\label{subsec:quantitative}
Given that evaluation of style quality is a very subjective matter, we defer it to the user study and focus on the evaluation of consistency in our quantitative experiments.
\noindent {\bf Evaluation Protocol and Metrics}.
We run our method and the baseline on ten diverse content images from the web and 40 styles sampled from the compilation of Gao~\etal~\cite{gao2020fast}. The baseline, as discussed before, runs 3DPhoto to synthesize {\it plain} novel-view images, then stylizes them using one of the six style transfer algorithms. In total, this results in 400 stylized 3D photos from each of the seven candidate methods. To quantify inconsistency between a pair of stylized views, we warp one view to the other according to the point-cloud-based scene geometry, and compute RMSE and the masked LPIPS metric as defined in Huang~\etal~\cite{huang2021stylenvs}. We average the result over 400 pairs of views for each stylized 3D photo and report the mean over all available photos.
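The masked consistency metric can be sketched in a few lines of NumPy. The warp itself (derived from the point-cloud scene geometry) and the LPIPS variant are omitted here, and the function name is ours, not part of any released code.

```python
import numpy as np

def masked_rmse(view_a, view_b_warped, mask):
    """RMSE between a stylized view and another view warped into its frame.

    Only pixels with valid geometry (mask == 1) contribute, mirroring the
    masked consistency metrics used above. Images are float arrays in
    [0, 1] of shape (H, W, 3); mask has shape (H, W).
    """
    diff2 = ((view_a - view_b_warped) ** 2).mean(axis=-1)
    return float(np.sqrt((diff2 * mask).sum() / np.clip(mask.sum(), 1, None)))
```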
\noindent {\bf Results}.
Our results are summarized in Table~\ref{tab:cns}. Our method outperforms all six instantiations of the baseline by a significant margin in terms of both RMSE and LPIPS. Not surprisingly, video style transfer methods produce more consistent results than image style transfer methods owing to their extra smoothness constraint. The fact that our method performs even better without such a constraint shows the effectiveness of maintaining a central featurized point cloud for 3D photo stylization.
\begin{table}[t]
\centering
\resizebox{0.8\columnwidth}{!}{
\begin{tabular}{ll||cc}
\hline
\multicolumn{2}{c||}{Method} & \Gape[10pt][10pt]{RMSE} & LPIPS \\
\hline
\multirow{6}{*}{3DPhoto~\cite{shih20203d} $\rightarrow$} & AdaIN~\cite{huang2017arbitrary} & 0.222 & 0.304 \\
& LST~\cite{li2019learning} & 0.195 & 0.287 \\
& AdaAttN (image)~\cite{liu2021adaattn} & 0.187 & 0.329 \\
\cline{2-4}
& ReReVST~\cite{wang2020consistent} & 0.115 & 0.213 \\
& MCC~\cite{deng2020arbitrary} & 0.092 & 0.200 \\
& AdaAttN (video)~\cite{liu2021adaattn} & 0.135 & 0.209 \\
\hline
\multicolumn{2}{l||}{\textbf{Ours}} & \textbf{0.086} & \textbf{0.133} \\
\hline
\end{tabular}}\vspace{-0.5em}
\caption{{\bf Results on consistency. } We compare our model against baselines that sequentially combine 3DPhoto and image/video style transfer on consistency using RMSE ($\downarrow$) and LPIPS ($\downarrow$).}
\vspace{-1em}
\label{tab:cns}
\end{table}
\begin{figure}[t!]
\centering
\resizebox{0.9\columnwidth}{!}{
\includegraphics{latex/figures/user_study.pdf}}\vspace{-1em}
\caption{{\bf User study. } We conduct a user study to compare our method against baselines that sequentially combine 3DPhoto and image/video style transfer. Methods are evaluated on (a) style quality, (b) multi-view consistency and (c) overall synthesis quality. Results show percentage of users voting for an algorithm.}
\label{fig:user}
\vspace{-1.5em}
\end{figure}
\subsection{User study}
\label{subsec:user}
Going further, we conduct a user study to better understand the perceptual quality of stylized images produced by our method and the baselines. Our study includes three sections for the assessment of style quality, multi-view consistency and overall synthesis quality. Our analysis is based on 5,400 votes from 30 participants. We elaborate on our study design in the supplementary material.
\noindent {\bf Results}.
We visualize the results in Figure~\ref{fig:user}. For style quality, our method is consistently rated higher than the alternatives, with the sole exception of LST, with which it is on par. Not coincidentally, our method excels at multi-view consistency, receiving an overwhelming 95 percent of the votes in four of the six tests. Finally, our method remains the most preferred for overall synthesis quality, beating all alternatives by a large margin. Taken together, our results provide solid validation of the strength of our approach in producing high-quality stylization that is consistent across views.
\begin{figure}
\centering \vspace{-0.5em}
\includegraphics[width=1.0\linewidth]{latex/figures/geometry.pdf}\vspace{-1em}
\caption{{\bf Effect of geometry-aware feature learning. } 3D photo stylization with back-projected 2D VGG features suffers from geometric distortion (\textcolor{yellow}{{\bf yellow}} arrows) and visual artifacts (\textcolor{red}{{\bf red}} boxes). In contrast, our geometry-aware learning scheme better maintains content structure and produces more pleasant texture.}\vspace{-0.5em}
\label{fig:geometry}
\end{figure}
\begin{table}
\centering
\resizebox{0.55\linewidth}{!}{
\begin{tabular}{cc||cc}
\hline
\multicolumn{2}{c||}{Training stage} & \multicolumn{1}{l}{\multirow{2}{*}{RMSE}} & \multicolumn{1}{l}{\multirow{2}{*}{LPIPS}} \\
\textit{ViewSyn} & \textit{Stylize} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\
\hline
$-$ & $-$ & 0.113 & 0.199 \\
$+$ & $-$ & 0.109 & 0.190 \\
$-$ & $+$ & \textbf{0.081} & 0.132 \\
$+$ & $+$ & 0.086 & \textbf{0.128} \\
\hline
\end{tabular}}\vspace{-0.5em}
\caption{{\bf Effect of consistency loss. } We compare models trained with (+) or without (-) the loss using RMSE ($\downarrow$) and LPIPS ($\downarrow$).}
\vspace{-1.5em}
\label{tab:abcns}
\end{table}
\subsection{Ablation studies}
\label{subsec:ablation}
\smallskip
\noindent {\bf Effect of Geometry-aware Feature Learning}.
We study the strength of geometry-aware feature learning. Specifically, we construct a variant of our model with the only difference that content features are not learned on the point cloud, but rather come from a pre-trained VGG network as in 2D style transfer methods. In particular, we sidestep our proposed GCN encoding scheme by projecting an RGB point cloud to eight extreme views defined by a bounding volume, running the VGG encoder for feature extraction, and back-projecting the 2D features to a point cloud from which stylization and rendering proceed as before. As shown in Fig~\ref{fig:geometry}, this VGG-based variant produces geometric distortion and visual artifacts in stylized images, as opposed to our model using geometry-aware feature learning.
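As a rough illustration of the back-projection step used in this VGG-based variant, the following NumPy sketch unprojects per-pixel features into a point cloud under a pinhole camera model. It is a simplification of the actual pipeline (no multi-view fusion or inpainted regions), and all names are hypothetical.

```python
import numpy as np

def backproject_features(depth, feat, K):
    """Unproject per-pixel features to a 3D point cloud (pinhole model).

    depth: (H, W) depth map; feat: (H, W, C) feature map; K: 3x3 intrinsics.
    Returns (H*W, 3) points and (H*W, C) per-point features.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T          # camera rays with unit z
    pts = rays * depth.reshape(-1, 1)        # scale rays by depth
    return pts, feat.reshape(-1, feat.shape[-1])
```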
\begin{figure*}[t]
\centering \vspace{-1em}
\includegraphics[width=0.95\linewidth]{latex/figures/extension.pdf}\vspace{-1em}
\caption{{\bf Extension to multi-view input. } Compared with StyleScene~\cite{huang2021stylenvs}, our method more closely resembles the reference style, better preserves the content geometry (\textcolor{red}{{\bf red}} boxes), and is more robust to change in viewpoint distribution (second row).}
\label{fig:extension}
\vspace{-0.5em}
\end{figure*}
\begin{table*}[t]
\centering
\resizebox{1.0\textwidth}{!}{
\begin{tabular}{c||cc|cc|cc|cc||cc|cc|cc|cc}
\hline
\multirow{3}{*}{Method} & \multicolumn{8}{c||}{Short-range consistency} & \multicolumn{8}{c}{Long-range consistency} \\
\cline{2-17}
& \multicolumn{2}{c|}{\textit{Truck}} & \multicolumn{2}{c|}{\textit{Playground}} & \multicolumn{2}{c|}{\textit{Train}} & \multicolumn{2}{c||}{\textit{M60}} & \multicolumn{2}{c|}{\textit{Truck}} & \multicolumn{2}{c|}{\textit{Playground}} & \multicolumn{2}{c|}{\textit{Train}} & \multicolumn{2}{c}{\textit{M60}} \\
& RMSE & LPIPS & RMSE & LPIPS & RMSE & LPIPS & RMSE & LPIPS & RMSE & LPIPS & RMSE & LPIPS & RMSE & LPIPS & RMSE & LPIPS \\
\hline
\multicolumn{1}{l||}{StyleScene (global)} & 0.124 & 0.143 & 0.108 & 0.142 & 0.121 & 0.157 & 0.120 & 0.143 & 0.163 & 0.188 & 0.146 & 0.189 & 0.159 & 0.213 & 0.160 & 0.192 \\
\multicolumn{1}{l||}{StyleScene (local)} & 0.119 & 0.168 & 0.127 & 0.169 & 0.161 & 0.169 & N/A & N/A & 0.152 & 0.203 & 0.166 & 0.205 & 0.204 & 0.220 & N/A & N/A \\
\hline
\multicolumn{1}{l||}{\textbf{Ours (local)}} & \textbf{0.099} & \textbf{0.107} & \textbf{0.093} & \textbf{0.111} & \textbf{0.104} & \textbf{0.112} & \textbf{0.117} & \textbf{0.112} & \textbf{0.113} & \textbf{0.128} & \textbf{0.110} & \textbf{0.127} & \textbf{0.120} & \textbf{0.145} & \textbf{0.136} & \textbf{0.136} \\
\hline
\end{tabular}}\vspace{-0.5em}
\caption{{\bf Consistency in the multi-view scenario. } On the Tanks and Temples dataset~\cite{knapitsch2017tanks}, we compare our method with StyleScene on short- and long-range consistency as defined in~\cite{huang2021stylenvs} using RMSE ($\downarrow$) and LPIPS ($\downarrow$).}\vspace{-1.5em}
\label{tab:extension}
\end{table*}
\noindent {\bf Effect of Consistency Loss}.
We evaluate the contribution of our consistency loss in Table~\ref{tab:abcns}. Despite the shared point cloud, the model trained without the consistency loss produces less consistent renderings, as measured by RMSE and LPIPS. We attribute this to the learnable feature decoding step, which is too flexible to preserve consistency in the output images in the absence of a constraint. In this respect, our consistency loss, especially when applied in the stylization stage of training, acts as a strong regularizer on the decoder.
\subsection{Extension to Multi-view Inputs}
\label{subsec:extension}
Our method can be easily extended to stylized novel view synthesis given multi-view inputs. We compare our extension with StyleScene~\cite{huang2021stylenvs}, which similarly operates on a point cloud but requires multiple input views. We perform experiments on the Tanks and Temples dataset~\cite{knapitsch2017tanks} under two protocols. The {\it global} protocol uses all available views (up to 300) as in~\cite{huang2021stylenvs} for point cloud reconstruction, whereas the more challenging {\it local} protocol uses a sparse set of 6--8 views along the camera trajectory for novel view synthesis. In Fig~\ref{fig:extension} and Table~\ref{tab:extension}, we show that our method is better in terms of style quality, short- and long-range consistency, and robustness to the distribution of input views.
\subsection{Applications}
\label{sec:application}
\begin{figure}[thp]
\centering \vspace{-0.5em}
\includegraphics[width=0.95\linewidth]{latex/figures/application_alt.pdf}\vspace{-0.8em}
\caption{{\bf Demonstration of Applications. } Layered stylization for AR {\it (upper)} and 3D browsing of a stylized historical photo\protect\footnotemark {\it (lower)}---``A small arch welcomes the President to Metlakatla, Alaska, created by D. L. Hollandy 1923.''}\vspace{-1.5em}
\label{fig:application}
\end{figure}
\noindent {\bf Layered Stylization for AR applications. }
Human-centered photography is of central interest in mobile AR applications. As a proof-of-concept experiment to demonstrate our method's potential in AR, we apply PointRend~\cite{kirillov2020pointrend} to segment foreground human subjects in images from Unsplash~\cite{unsplash2020}, and stylize the background scene using our method while leaving the foreground human untouched (Fig~\ref{fig:application}a). Upon rendering, the final stylized 3D photo initiates a virtual tour into a 3D environment in an artistic style.
\noindent {\bf 3D Exploration of Stylized Historical Photos. }
Historical photos represent a large fraction of existing image assets yet remain under-explored in computer vision and graphics. As we demonstrate on the Keystone dataset~\cite{luo2020keystonedepth} (Fig~\ref{fig:application}b), our method can be readily applied to the 3D browsing of historical photos in an artistic style, bringing past moments back to life in an unexpected way.
\section{Discussion}
\label{sec:discuss}
In this paper, we connected neural style transfer and one-shot 3D photography for the first time, and introduced the novel task of 3D photo stylization -- generating stylized novel views from a single image given an arbitrary style. We showed that a na\"ive combination of solutions from the two worlds does not work well, and proposed a deep model that jointly handles style transfer and view synthesis for high-quality 3D photo stylization. We demonstrated the strength of our approach through extensive qualitative and quantitative studies, and presented interesting applications of our method for 3D content creation. We hope our method will open an exciting avenue of applications in 3D content creation from 2D photos.
\usepackage[pagebackref,breaklinks,colorlinks]{hyperref}
\usepackage[capitalize]{cleveref}
\crefname{section}{Sec.}{Secs.}
\Crefname{section}{Section}{Sections}
\Crefname{table}{Table}{Tables}
\crefname{table}{Tab.}{Tabs.}
\newcommand{\squishlist}{
\begin{list}{$\bullet$}
{ \setlength{\itemsep}{0pt}
\setlength{\parsep}{1pt}
\setlength{\topsep}{1pt}
\setlength{\partopsep}{0pt}
\setlength{\leftmargin}{1.5em}
\setlength{\labelwidth}{1em}
\setlength{\labelsep}{0.5em} } }
\newcommand{\squishend}{\end{list}
}
\def\cvprPaperID{9458}
\def\confName{CVPR}
\def\confYear{2022}
\title{3D Photo Stylization: \\Learning to Generate Stylized Novel Views from a Single Image\vspace{-0.8em}}
\author{
Fangzhou Mu$^1$\thanks{}\quad
Jian Wang$^2$\thanks{}\quad
Yicheng Wu$^2$\footnotemark[2]\quad
Yin Li$^1$\footnotemark[2]
\\
$^1$University of Wisconsin-Madison\quad
$^2$Snap Research
\\
$^1${\tt\small \{fmu2, yin.li\}@wisc.edu}\quad
$^2${\tt\small \{jwang4, yicheng.wu\}@snap.com}
}
\begin{document}
\twocolumn[{
\renewcommand\twocolumn[1][]{#1}
\maketitle
\vspace{-3.6em}
\begin{center}
\centering
\captionsetup{type=figure}
\includegraphics[width=.86\textwidth]{latex/figures/teaser.pdf}\vspace{-1.25em}
\captionof{figure}{{\bf 3D photo stylization. } Given a {\it single} content image, our method synthesizes novel views of the scene in an arbitrary style. In doing so, our method delivers an immersive viewing experience of memorable moments captured in existing photos.
}\vspace{-0.5em}
\label{fig:teaser}
\end{center}
}]
{
\renewcommand{\thefootnote}
{\fnsymbol{footnote}}
\footnotetext[1]{Work partially done when Fangzhou was an intern at Snap Research}
\footnotetext[2]{co-corresponding authors}
}
\begin{abstract}\vspace{-0.75em}
Visual content creation has attracted soaring interest given its applications in mobile photography and AR / VR. Style transfer and single-image 3D photography, two representative tasks, have so far evolved independently. In this paper, we make a connection between the two, and address the challenging task of 3D photo stylization --- generating stylized novel views from a single image given an arbitrary style.
Our key intuition is that style transfer and view synthesis have to be jointly modeled for this task. To this end, we propose a deep model that learns geometry-aware content features for stylization from a point cloud representation of the scene, resulting in high-quality stylized images that are consistent across views. Further, we introduce a novel training protocol to enable the learning using only 2D images. We demonstrate the superiority of our method via extensive qualitative and quantitative studies, and showcase key applications of our method in light of the growing demand for 3D content creation from 2D image assets.\footnote{Project page: \url{http://pages.cs.wisc.edu/~fmu/style3d}}
\end{abstract}
\input{latex/sections/01_intro}
\input{latex/sections/02_related_work}
\input{latex/sections/03_method}
\input{latex/sections/04_training}
\input{latex/sections/05_experiment}
{\small
\bibliographystyle{ieee_fullname}
\subsubsection*{Acknowledgements}
We would like to thank
Luca Rossini,
Alex Shestopaloff
and
Efi Kokiopoulou
for helpful comments on an earlier draft of the paper.
\section{More details on the methods}
\label{sec:app}
\subsection{Linear bandits}
\label{app:Linear}
\label{app:linearBandit}
In this section, we discuss how to do belief updating for a linear bandit,
where the reward model has the form
$f^{\text{dnn}}(s,a;{\bm{\theta}}) = {\bm{w}}_{a}^{\mkern-1.5mu\mathsf{T}} s$,
where ${\bm{\theta}}=\myvec{W}$ are the parameters.
(We ignore the bias term, which can be accomodated by augmenting
the input features $s$ with a constant 1.)
To simplify the notation,
we give the derivation for a single arm.
In practice, this procedure is repeated
separately for each arm, using the contexts and rewards
for the time periods where that arm was used.
\subsubsection{Known variance $\sigma^2$}
\label{sec:knownSigma}
For now, we assume the observation noise $\sigma^2$ is known.
We start with
the uninformative prior ${\bm{b}}_{0} = \mathcal{N}({\bm{w}}|{\bm{\mu}}_{0},\myvecsym{\Sigma}_{0})$,
where ${\bm{\mu}}_{0}={\bm{0}}$ is the prior mean
and $\myvecsym{\Sigma}_{0}=(1/\epsilon) \myvec{I}$ is the prior covariance
for some small $\epsilon > 0$.
Let $\myvec{X}$ be the $N \times N_s$ matrix of contexts
for this arm during the warmup period
(so $N=N_w$ if we pull each arm $N_w$ times),
and let ${\bm{y}}$ be the corresponding $N \times 1$ vector of rewards.
We can compute the initial belief state based on the warmup data
by applying Bayes rule to the uninformative prior to get
\begin{align}
p({\bm{w}}|\myvec{X}_{\tau},{\bm{y}}_{\tau}) &= \mathcal{N}({\bm{w}}|{\bm{\mu}}_{\tau},\myvecsym{\Sigma}_{\tau}) \\
\myvecsym{\Sigma}_{\tau} &= (\myvecsym{\Sigma}_{0}^{-1} + \frac{1}{\sigma^2} \myvec{X}^{\mkern-1.5mu\mathsf{T}} \myvec{X})^{-1} \\
{\bm{\mu}}_{\tau}&= \myvecsym{\Sigma}_{\tau}(\myvecsym{\Sigma}_{0}^{-1} {\bm{\mu}}_{0} + \frac{1}{\sigma^2} \myvec{X}^{\mkern-1.5mu\mathsf{T}} {\bm{y}})
\end{align}
After this initial batch update, we can perform incremental updates.
We can use the Sherman-Morrison formula for rank one updating
to efficiently compute the new covariance, without any matrix inversions:
\begin{eqnarray}
\myvecsym{\Sigma}_t = (\myvecsym{\Sigma}_{t-1}^{-1} + \frac{1}{\sigma^2} {\bm{x}}_t {\bm{x}}_t^{\mkern-1.5mu\mathsf{T}})^{-1}
= \myvecsym{\Sigma}_{t-1} - \frac{\myvecsym{\Sigma}_{t-1} {\bm{x}}_t {\bm{x}}_t^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t-1}}
{\sigma^2 + {\bm{x}}_t^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t-1} {\bm{x}}_t}
\end{eqnarray}
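The rank-one update above is straightforward to implement and verify. The following NumPy sketch (function name ours) applies the Sherman-Morrison identity; its output can be checked against the direct inverse.

```python
import numpy as np

def sherman_morrison_update(Sigma, x, sigma2):
    """Rank-one posterior covariance update without a matrix inverse.

    Computes Sigma - (Sigma x x^T Sigma) / (sigma^2 + x^T Sigma x),
    the Sherman-Morrison form of (Sigma^{-1} + x x^T / sigma^2)^{-1}.
    """
    Sx = Sigma @ x
    return Sigma - np.outer(Sx, Sx) / (sigma2 + x @ Sx)
```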
To compute the mean, we will assume ${\bm{\mu}}_0={\bm{0}}$
and $\myvecsym{\Sigma}_0 = \kappa^2 \myvec{I}$.
Then we have
\begin{align}
{\bm{\mu}}_t &=
\frac{1}{\sigma^2} \myvecsym{\Sigma}_t \myvec{X}^{\mkern-1.5mu\mathsf{T}} {\bm{y}} = \frac{1}{\sigma^2} \myvecsym{\Sigma}_t \myvecsym{\psi}_t \\
\myvecsym{\psi}_t &= \myvecsym{\psi}_{t-1} + {\bm{x}}_t y_t
\end{align}
An alternative (but equivalent) approach
is to use the recursive least squares (RLS) algorithm,
which is a special case of
the Kalman filter
(see e.g., \citep{Borodachev2016} for the derivation).
The updates are as follows:
\begin{align}
e_t &= y_t - {\bm{x}}_t^{\mkern-1.5mu\mathsf{T}} {\bm{\mu}}_{t-1} \\
s_t &= {\bm{x}}_t^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t-1} {\bm{x}}_t + \sigma^2 \\
{\bm{k}}_t&= \frac{1}{s_t} \myvecsym{\Sigma}_{t-1} {\bm{x}}_t \\
{\bm{\mu}}_{t} &= {\bm{\mu}}_{t-1} + {\bm{k}}_t e_t \\
\myvecsym{\Sigma}_t &=
\myvecsym{\Sigma}_{t-1} - {\bm{k}}_t {\bm{k}}_t^{\mkern-1.5mu\mathsf{T}} s_t
\end{align}
(Of course, we only update the belief state for the
arm that was actually pulled at time $t$.)
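A minimal NumPy sketch of one RLS step, following the equations above (function name ours); starting from a Gaussian prior, repeated application reproduces the exact batch posterior.

```python
import numpy as np

def rls_step(mu, Sigma, x, y, sigma2):
    """One recursive-least-squares update (a special case of the Kalman filter)."""
    e = y - x @ mu                  # innovation
    s = x @ Sigma @ x + sigma2      # innovation variance
    k = Sigma @ x / s               # gain
    mu_new = mu + k * e
    Sigma_new = Sigma - np.outer(k, k) * s
    return mu_new, Sigma_new
```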
\subsubsection{Unknown variance $\sigma^2$}
\label{sec:unknownSigma}
Now we consider the case where $\sigma^2$ is also unknown,
as in \citep{Riquelme2018,Nabati2021}.
This lets the algorithm explicitly represent uncertainty in the reward
for each action, which will increase the dynamic range of the sampled
parameters,
leading to more aggressive exploration.
We have noticed this gives improved results over fixing $\sigma$.
We will use a conjugate normal inverse Gamma prior
$\mathrm{NIG}({\bm{w}},\sigma^2|{\bm{\mu}}_{0}, \myvecsym{\Sigma}_{0}, a_{0}, b_{0})$.
The batch update is as follows,
where $\myvec{X}$ is all the contexts for this arm up to $t$,
and ${\bm{y}}$ is all the rewards for this arm up to $t$:
\begin{align}
p({\bm{w}},\sigma^2|\myvec{X}, {\bm{y}})
&= \mathrm{NIG}({\bm{w}},\sigma^2|{\bm{\mu}}_{t}, \myvecsym{\Sigma}_{t}, a_{t}, b_{t}) \\
\myvecsym{\Sigma}_{t} &= (\myvecsym{\Sigma}_{0}^{-1} + \myvec{X}^{\mkern-1.5mu\mathsf{T}} \myvec{X})^{-1} \\
{\bm{\mu}}_{t} &= \myvecsym{\Sigma}_{t}(\myvecsym{\Sigma}_{0}^{-1} {\bm{\mu}}_{0} + \myvec{X}^{\mkern-1.5mu\mathsf{T}} {\bm{y}}) \\
a_{t} &= a_{0} + \frac{N_t}{2} \\
b_{t} &= b_{0} + \frac{1}{2} \left(
{\bm{y}}^{\mkern-1.5mu\mathsf{T}} {\bm{y}} + {\bm{\mu}}_{0}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{0}^{-1} {\bm{\mu}}_{0}
- {\bm{\mu}}_{t}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t}^{-1} {\bm{\mu}}_{t} \right)
\end{align}
This matches Equations 1--2 of \citep{Riquelme2018}.\footnote{
There is a small typo in Equation 2 of \citep{Riquelme2018}:
the $\myvecsym{\Sigma}_0$ should be inverted.
}
To sample from this posterior,
we first sample $\tilde{\sigma}^2 \sim \mathrm{IG}(a_{t},b_{t})$,
and then sample ${\bm{w}} \sim \mathcal{N}({\bm{\mu}}_{t}, \tilde{\sigma}^2 \myvecsym{\Sigma}_{t})$.
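This two-stage sampling scheme can be sketched as follows in NumPy (names ours). Note that an $\mathrm{IG}(a,b)$ draw is obtained as the reciprocal of a Gamma draw with shape $a$ and scale $1/b$.

```python
import numpy as np

def sample_nig_posterior(mu, Sigma, a, b, rng):
    """Draw (w, sigma^2) from a normal-inverse-Gamma posterior.

    First sample sigma^2 ~ IG(a, b), then w ~ N(mu, sigma^2 * Sigma),
    as done in Thompson sampling.
    """
    sigma2 = 1.0 / rng.gamma(shape=a, scale=1.0 / b)
    w = rng.multivariate_normal(mu, sigma2 * Sigma)
    return w, sigma2
```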
We can rewrite the above equations in incremental form as follows:
\begin{align}
p({\bm{w}},\sigma^2|D_{0:t})
&= \mathrm{NIG}({\bm{w}},\sigma^2|{\bm{\mu}}_t, \myvecsym{\Sigma}_t,a_t, b_t) \\
\myvecsym{\Sigma}_t &= (\myvecsym{\Sigma}_{t-1}^{-1} + {\bm{x}}_t {\bm{x}}_t^{\mkern-1.5mu\mathsf{T}})^{-1} \\
{\bm{\mu}}_t &= \myvecsym{\Sigma}_t(\myvecsym{\Sigma}_{t-1}^{-1} {\bm{\mu}}_{t-1} + {\bm{x}}_t y_t) \\
a_t &= a_{t-1} + \frac{1}{2} \\
b_t &= b_{t-1} + \frac{1}{2} \left(
y_t^2 + {\bm{\mu}}_{t-1}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t-1}^{-1} {\bm{\mu}}_{t-1}
- {\bm{\mu}}_t^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_t^{-1} {\bm{\mu}}_t \right)
\end{align}
It is natural to want to derive a version of these equations
which avoids the matrix inversion at each step.
We can incrementally update $\myvecsym{\Sigma}_t$
without inverting $\myvecsym{\Sigma}_{t-1}$,
using Sherman-Morrison,
as in \cref{sec:knownSigma}.
However, computing $b_t$ needs access to $\myvecsym{\Sigma}_t^{-1}$.
Fortunately, we can generalize the Kalman filter
to the case where $V=\sigma^2$ is unknown,
as described in \citep[Sec 4.6]{West97};
this avoids any matrix inversions.
To describe this algorithm,
let
the likelihood at time $t$ be defined as follows:
\begin{align}
p_t(y_t|{\bm{w}}_t,V) &= \mathcal{N}(y_t|{\bm{x}}_t^{\mkern-1.5mu\mathsf{T}} {\bm{w}}_t, V)
\end{align}
Let $\lambda=1/V$ be the observation precision.
To start the algorithm, we use the following prior:
\begin{align}
p_0(\lambda) &= \mathrm{Ga}(\frac{\nu_0}{2}, \frac{\nu_0 \tau_0}{2}) \\
p_0({\bm{w}}|\lambda) &= \mathcal{N}({\bm{\mu}}_0, V \myvecsym{\Sigma}_0^*)
\end{align}
where $\tau_0$ is the prior mean for $\sigma^2$,
and $\nu_0 > 0$ is the strength of this prior.
We now discuss the belief updating step.
We assume that the prior belief state at time $t-1$ is
\begin{eqnarray}
p({\bm{w}},\lambda|D_{1:t-1})
= \mathcal{N}({\bm{w}}|{\bm{\mu}}_{t-1}, V \myvecsym{\Sigma}_{t-1}^*)
\mathrm{Ga}(\lambda|\frac{\nu_{t-1}}{2}, \frac{\nu_{t-1} \tau_{t-1}}{2})
\end{eqnarray}
The posterior is given by
\begin{eqnarray}
p({\bm{w}},\lambda|D_{1:t})
= \mathcal{N}({\bm{w}}|{\bm{\mu}}_{t}, V \myvecsym{\Sigma}_{t}^*)
\mathrm{Ga}(\lambda|\frac{\nu_{t}}{2}, \frac{\nu_{t} \tau_{t}}{2})
\end{eqnarray}
where
\begin{align}
e_t &= y_t - {\bm{x}}_t^{\mkern-1.5mu\mathsf{T}} {\bm{\mu}}_{t-1} \\
s_t^* &= {\bm{x}}_t^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t-1}^* {\bm{x}}_t + 1\\
{\bm{k}}_t &= \frac{1}{s_t^*} \myvecsym{\Sigma}_{t-1}^* {\bm{x}}_t \\
{\bm{\mu}}_t &= {\bm{\mu}}_{t-1} + {\bm{k}}_t e_t \\
\myvecsym{\Sigma}^*_t &= \myvecsym{\Sigma}_{t-1}^* - {\bm{k}}_t {\bm{k}}_t^{\mkern-1.5mu\mathsf{T}} s_t^* \\
\nu_{t} &= \nu_{t-1} + 1 \\
\nu_t \tau_t &= \nu_{t-1} \tau_{t-1} + e_t^2/s_t^*
\end{align}
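A NumPy sketch of one step of this recursion (function name ours); here `Sigma_star` is the covariance with the unknown $V$ scaled out, and $(\nu, \tau)$ parameterize the Gamma posterior over the precision $\lambda = 1/V$.

```python
import numpy as np

def kf_unknown_var_step(mu, Sigma_star, nu, tau, x, y):
    """One update of the variance-learning Kalman recursion."""
    e = y - x @ mu                      # innovation
    s = x @ Sigma_star @ x + 1.0        # scaled innovation variance
    k = Sigma_star @ x / s              # gain
    mu_new = mu + k * e
    Sigma_new = Sigma_star - np.outer(k, k) * s
    nu_new = nu + 1.0
    tau_new = (nu * tau + e ** 2 / s) / nu_new
    return mu_new, Sigma_new, nu_new, tau_new
```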
If we marginalize out $V$, the marginal distribution
for ${\bm{w}}$ is a Student distribution.
However, for Thompson sampling,
it is simpler to sample $\tilde{\lambda} \sim \mathrm{Ga}(\frac{\nu_t}{2},\frac{\nu_t \tau_t}{2})$,
and then to sample ${\bm{w}} \sim \mathcal{N}({\bm{\mu}}_t, \tilde{\sigma}^2 \myvecsym{\Sigma}_t^*)$,
where $\tilde{\sigma}^2=1/\tilde{\lambda}$.
\subsection{Neural linear bandits}
\label{app:NeuralLinear}
\label{app:neuralLinear}
The neural linear model assumes
that $f^{\text{dnn}}(s,a;{\bm{\theta}}) = {\bm{w}}_{a}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\phi}(s;\myvec{V})$,
where $\myvecsym{\phi}(s;\myvec{V})$ is the feature extractor.
It approximates
the posterior over all the parameters
by using a point estimate for $\myvec{V}$,
a Gaussian distribution for each ${\bm{w}}_i$ (conditional on $\sigma^2_i$), and an inverse
Gamma distribution for each $\sigma^2_i$, i.e.,
\begin{eqnarray}
p({\bm{\theta}}|D_{1:t}) =
\delta(\myvec{V}-\hat{\myvec{V}}_t)
\prod_{i=1}^{N_a} \mathcal{N}({\bm{w}}_i|{\bm{\mu}}_{t,i}, \sigma^2_i \myvecsym{\Sigma}_{t,i})
\mathrm{IG}(\sigma^2_i|a_i,b_i)
\end{eqnarray}
where ${\bm{\theta}}=(\myvec{V},\myvec{W},{\bm{a}},{\bm{b}})$ are all the parameters,
and $\delta({\bm{u}})$ is a delta function.
Furthermore,
to avoid catastrophic forgetting,
we also need to store all of the previous observations,
so the belief state
has the form
${\bm{b}}_t = (D_{1:t}, \hat{\myvec{V}}_t, {\bm{\mu}}_{t,1:N_a}, \myvecsym{\Sigma}_{t,1:N_a},
{\bm{a}}_{1:N_a}, {\bm{b}}_{1:N_a})$.
The neural network parameters are computed using SGD.
After updating $\hat{\myvec{V}}_t$,
we update the parameters of the Normal-Inverse-Gamma distribution
for the final layer weights $\myvec{W}$,
using the following equations
\begin{align}
\myvecsym{\Sigma}_{i} &= (\myvecsym{\Sigma}_{0,i}^{-1} + \myvec{X}_{i}^{\mkern-1.5mu\mathsf{T}} \myvec{X}_i)^{-1} \\
{\bm{\mu}}_{i} &= \myvecsym{\Sigma}_{i}(\myvecsym{\Sigma}_{0,i}^{-1} {\bm{\mu}}_{0,i} + \myvec{X}_{i}^{\mkern-1.5mu\mathsf{T}} {\bm{y}}_i) \\
a_{i} &= a_{0,i} + \frac{N_{i}}{2} \\
b_{i} &= b_{0,i} + \frac{1}{2}({\bm{y}}_i^{\mkern-1.5mu\mathsf{T}} {\bm{y}}_i + {\bm{\mu}}_{0,i}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{0,i}^{-1} {\bm{\mu}}_{0,i}
- {\bm{\mu}}_{i}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{i}^{-1} {\bm{\mu}}_{i})
\end{align}
where we define
$\myvec{X}_i = [\myvecsym{\phi}_j: a_j=i]$ as the matrix whose
rows are the features $\myvecsym{\phi}_j$ from time steps where action $i$ was taken,
and ${\bm{y}}_i = [r_j: a_j=i]$ is the vector of rewards
from time steps where action $i$ was taken.
See \cref{algo:neuralLinear}
for the pseudocode.
\begin{algorithm}
\caption{Neural Linear.}
\label{algo:neuralLinear}
\For{$t=(\tau+1):T$}{
$s_t = \text{Environment.GetState}(t)$ \;
$\tilde{\sigma}_i \sim \text{InverseGamma}(a_i, b_i)$ for all $i$ \;
$\tilde{{\bm{w}}}_i \sim \mathcal{N}({\bm{\mu}}_i, \tilde{\sigma}_i \myvecsym{\Sigma}_i)$ for all $i$\;
$a_t = \operatornamewithlimits{argmax}_i \tilde{{\bm{w}}}_i^{\mkern-1.5mu\mathsf{T}} \myvecsym{\phi}(s_t;\myvec{V}_t)$ \;
$y_t = \text{Environment.GetReward}(s_t,a_t)$ \;
$D_t = (s_t, a_t, y_t)$ \;
\uIf{$t$ \rm{is an SGD update step}}{
${\bm{\theta}}$ = \text{SGD}$({\bm{\theta}}, D_{1:t})$ \;
$\myvec{V} = \text{parameters-for-body}({\bm{\theta}})$ \;
Compute new features: $\myvecsym{\phi}_j = \myvecsym{\phi}(s_j;\myvec{V})$ for all $j \in D_{1:t}$ \;
\For{$i=1:N_a$}{
// Update sufficient statistics \;
$\myvecsym{\psi}_{i} = \sum_{j \leq t: a_j = i} \myvecsym{\phi}_j y_j$ \;
$\myvecsym{\Phi}_{i} = \sum_{j \leq t: a_j=i} \myvecsym{\phi}_j \myvecsym{\phi}_j^{\mkern-1.5mu\mathsf{T}}$ \;
$R_{i}^2 = \sum_{j \leq t: a_j=i} y_j^2$ \;
$N_{i} = \sum_{j \leq t: a_j=i} 1$ \;
// Update belief state \;
$({\bm{\mu}}_i, \myvecsym{\Sigma}_i, a_i, b_i) = \text{update-bel}({\bm{\mu}}_{0,i}, \myvecsym{\Sigma}_{0,i}, a_{0,i}, b_{0,i},
\myvecsym{\psi}_i, \myvecsym{\Phi}_i, R_i^2, N_i)$
}
}
\uElse{
$i = a_t$ \;
$\myvecsym{\psi}_{i} = \myvecsym{\psi}_{i} + \myvecsym{\phi}_t y_t$ \;
$\myvecsym{\Phi}_{i} = \myvecsym{\Phi}_{i} + \myvecsym{\phi}_t \myvecsym{\phi}_t^{\mkern-1.5mu\mathsf{T}}$ \;
$R_{i}^2 = R_{i}^2 + y_t^2$ \;
$N_{i} = N_{i} + 1 $ \;
$({\bm{\mu}}_i, \myvecsym{\Sigma}_i, a_i, b_i) = \text{update-bel}({\bm{\mu}}_{0,i}, \myvecsym{\Sigma}_{0,i}, a_{0,i}, b_{0,i},
\myvecsym{\psi}_i, \myvecsym{\Phi}_i, R_i^2, N_i)$
}
}
\;
function update-bel$({\bm{\mu}}_{0,i}, \myvecsym{\Sigma}_{0,i}, a_{0,i}, b_{0,i}, \myvecsym{\psi}_i, \myvecsym{\Phi}_i, R_i^2, N_i)$ \;
$ \myvecsym{\Sigma}_{i} = (\myvecsym{\Sigma}_{0,i}^{-1} + \myvecsym{\Phi}_{i})^{-1}$ \;
${\bm{\mu}}_{i} = \myvecsym{\Sigma}_{i}(\myvecsym{\Sigma}_{0,i}^{-1} {\bm{\mu}}_{0,i} + \myvecsym{\psi}_i)$ \;
$a_{i} = a_{0,i} + \frac{N_{i}}{2}$ \;
$b_{i} = b_{0,i} + \frac{1}{2}(R_{i}^2 + {\bm{\mu}}_{0,i}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{0,i}^{-1} {\bm{\mu}}_{0,i}
- {\bm{\mu}}_{i}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{i}^{-1} {\bm{\mu}}_{i}) $ \;
return $({\bm{\mu}}_i, \myvecsym{\Sigma}_i, a_i, b_i)$
\end{algorithm}
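For reference, a NumPy sketch of the update-bel routine (ours, not the authors' code), using the standard normal-inverse-Gamma update in which the prior terms enter through the prior precision $\myvecsym{\Sigma}_{0,i}^{-1}$:

```python
import numpy as np

def update_bel(mu0, Sigma0, a0, b0, psi, Phi, R2, N):
    """Normal-inverse-Gamma posterior from per-arm sufficient statistics.

    psi = sum phi_j y_j, Phi = sum phi_j phi_j^T, R2 = sum y_j^2, N = count.
    The prior enters through Sigma0^{-1} (the prior precision).
    """
    Sigma0_inv = np.linalg.inv(Sigma0)
    Lam = Sigma0_inv + Phi            # posterior precision
    Sigma = np.linalg.inv(Lam)
    mu = Sigma @ (Sigma0_inv @ mu0 + psi)
    a = a0 + N / 2.0
    b = b0 + 0.5 * (R2 + mu0 @ Sigma0_inv @ mu0 - mu @ Lam @ mu)
    return mu, Sigma, a, b
```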
\eat{
We now explain how to update the final layer distributions,
following the notation of the LiM2 paper
\citep{Nabati2021}.
Let $({\bm{\mu}}_{0,i},\myvecsym{\Sigma}_{0,i})$ represent
the prior for ${\bm{w}}_i$.
Let $\myvecsym{\phi}_t=\myvecsym{\phi}(s_t;\myvec{V}_t)$ be the feature
vector for the context at step $t$.
After each update to the neural network parameters,
we recompute the sufficient statistics
of all the data seen up to step $t$
for each action $i$:
\begin{align}
\myvecsym{\psi}_{i} &= \sum_{j \leq t: a_t = i} \myvecsym{\phi}_j y_j \\
\myvecsym{\Phi}_{i} &= \sum_{j \leq t: a_t=i} \myvecsym{\phi}_j \myvecsym{\phi}_j^{\mkern-1.5mu\mathsf{T}} \\
R_{i}^2 &= \sum_{j \leq t: a_t=i} y_t^2 \\
N_{i} &= \sum_{j \leq t: a_t=i} 1
\end{align}
For time steps where we do not update the neural network,
we just perform incremental updates
for the action $i$ that was used in that step:
\begin{align}
\myvecsym{\psi}_{i} &= \myvecsym{\psi}_{i} + \myvecsym{\phi}_t y_t \\
\myvecsym{\Phi}_{i} &= \myvecsym{\Phi}_{i} + \myvecsym{\phi}_t \myvecsym{\phi}_t^{\mkern-1.5mu\mathsf{T}} \\
R_{i}^2 &= R_{i}^2 + y_t^2 \\
N_{i} &= N_{i} + 1
\end{align}
We then update
the posterior for the weight vectors,
if the sufficient statistics have changed:
\begin{align}
\myvecsym{\Sigma}_{i} &= (\myvecsym{\Sigma}_{0,i}^{-1} + \myvecsym{\Phi}_{i})^{-1} \\
{\bm{\mu}}_{t,i} &= \myvecsym{\Sigma}_{i}(\myvecsym{\Sigma}_{0,i}^{-1} {\bm{\mu}}_{0,i} + \myvecsym{\psi}_{i})
\end{align}
Finally, we update the posterior for the variance term:
\begin{align}
a_{i} &= a_{0,i} + \frac{N_{i}}{2} \\
b_{i} &= b_{0,i} + \frac{1}{2}(R_{i}^2 + {\bm{\mu}}_{0,i}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{0,i} {\bm{\mu}}_{0,i}
- {\bm{\mu}}_{i}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{i} {\bm{\mu}}_{i})
\end{align}
}
\eat{
For time steps where we do not update the neural network,
we just perform incremental updates:
\begin{align}
\myvecsym{\psi}_{t,i} &=
\begin{cases}
\myvecsym{\psi}_{t-1,i} + \myvecsym{\phi}_t y_t & \mbox{if $a_t=i$} \\
\myvecsym{\psi}_{t-1,i} & \mbox{otherwise}
\end{cases} \\
\myvecsym{\Phi}_{t,i} &=
\begin{cases}
\myvecsym{\Phi}_{t-1,i} + \myvecsym{\phi}_t \myvecsym{\phi}_t^{\mkern-1.5mu\mathsf{T}} & \mbox{if $a_t=i$} \\
\myvecsym{\Phi}_{t-1,i} & \mbox{otherwise}
\end{cases}
\end{align}
At each step, we also compute
the posterior for each weight vector:
\begin{align}
\myvecsym{\Sigma}_{t,i} &=
\begin{cases}
(\myvecsym{\Sigma}_{*,i}^{-1} + \myvecsym{\Phi}_{t,i})^{-1} & \mbox{if $a_t=i$} \\
\myvecsym{\Sigma}_{t-1,i} &\mbox{otherwise}
\end{cases} \\
{\bm{\mu}}_{t,i} &=
\begin{cases}
\myvecsym{\Sigma}_{t,i}(\myvecsym{\Sigma}_{*,i}^{-1} {\bm{\mu}}_{*,i} + \myvecsym{\psi}_{t,i}) & \mbox{if $a_t=i$} \\
\myvecsym{\Sigma}_{t-1,i} &\mbox{otherwise}
\end{cases}
\end{align}
Finally, we update the posterior for the variance
term, which is shared across actions:
\begin{align}
r_{t,i}^2 &= \begin{cases}
r_{t-1,i}^2 + y_t^2 & \mbox{if $a_t=i$} \\
r_{t-1,i}^2 &\mbox{otherwise} \end{cases} \\
n_{t,i} &= \begin{cases}
n_{t-1,i} + 1 & \mbox{if $a_t=i$} \\
n_{t-1,i} &\mbox{otherwise} \end{cases} \\
a_{t,i} &= \begin{cases}
a_{*,i} + \frac{n_{t,i}}{2} & \mbox{if $a_t=i$} \\
a_{t-1,i} &\mbox{otherwise} \end{cases} \\
b_{t,i} &= \begin{cases}
b_{*,i} + \frac{1}{2}(r_{t,i}^2 + {\bm{\mu}}_{*,i}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{*,i} {\bm{\mu}}_{*,i}
- {\bm{\mu}}_{t,i}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t,i} {\bm{\mu}}_{t,i}) & \mbox{if $a_t=i$} \\
b_{t-1,i} &\mbox{otherwise} \end{cases}
\end{align}
}
\eat{
To do this,
we compute the hidden feature vectors $\myvecsym{\phi}_n=\myvecsym{\phi}({\bm{x}}_n;\hat{\myvec{U}}_t)$
for each ${\bm{x}}_n=(s_n,a_n) \in D_{0:t}$,
and apply Bayesian updating to compute ${\bm{\mu}}_{t,a}$ and $\myvecsym{\Sigma}_{t,a}$,
starting with the prior $\mathcal{N}({\bm{v}}_a|{\bm{m}}^0,\myvecsym{\Sigma}^0)$.
Thus the posterior on the final layer weights needs to be recomputed
every time the feature extractor changes;
this takes an additional $O(N_z^3 T)$ time.
Thus the algorithm takes $O(T^2)$ time in total.
}
\eat{
For efficiency, the authors of \citep{Riquelme2018} only
perform SGD updating
every $T_u=400$ steps.
If the feature extractor weights are frozen on a given step, only the final layer needs
to be updated, which can be done in $O(N_z^3)$ time,
since we just need to update ${\bm{\mu}}_{t,a}$ and $\myvecsym{\Sigma}_{t,a}$ for the chosen action.
}
\subsection{LiM2}
\label{app:LIM}
\label{app:Lim}
\label{app:LIM2}
In this section, we describe the LiM2
method of \citet{Nabati2021}.
It is similar to the neural linear method,
except that the prior
(${\bm{\mu}}_{0,i}, \myvecsym{\Sigma}_{0,i})$ gets updated after each SGD step,
so as not to forget old information.
In addition, SGD is only applied to a rolling window
of the last $M$ most recent observations,
so the memory cost is bounded.
See \cref{algo:LIM} for the pseudocode.
\eat{
We denote this update by
\begin{eqnarray}
({\bm{\theta}}, \{{\bm{\mu}}_{0,i}, \myvecsym{\Sigma}_{0,i}\}) = \text{update-DNN-and-prior}({\bm{\theta}},
\{{\bm{\mu}}_{0,i}, \myvecsym{\Sigma}_{0,i}\},
D_{t-M:t})
\end{eqnarray}
See \cref{algo:LIM} for the pseudocode for this function.
We use this instead of
${\bm{\theta}} = \text{SGD}({\bm{\theta}},D_{1:t})$ in
\cref{algo:neuralLinear};
the rest of the code remains the same.
}
\begin{algorithm}
\caption{LiM2}
\label{algo:LIM}
\For{$t=(\tau+1):T$}{
$s_t = \text{Environment.GetState}(t)$ \;
$\tilde{\sigma}_i^2 \sim \mathrm{IG}(a_i, b_i)$ for all $i$ \;
$\tilde{{\bm{w}}}_i \sim \mathcal{N}({\bm{\mu}}_i, \tilde{\sigma}_i^2 \myvecsym{\Sigma}_i)$ for all $i$\;
$a_t = \operatornamewithlimits{argmax}_i \tilde{{\bm{w}}}_i^{\mkern-1.5mu\mathsf{T}} \myvecsym{\phi}(s_t;\myvec{V}_t)$ \;
$y_t = \text{Environment.GetReward}(s_t,a_t)$ \;
$D_t = (s_t, a_t, y_t)$ \;
$\mymathcal{M}_t = \text{push}(\mymathcal{M}_{t-1}, D_t)$ \;
\If{$|\mymathcal{M}_t| > M$}{$\mymathcal{M}_t = \text{pop}(\mymathcal{M}_t)$}
$({\bm{\theta}}, \{{\bm{\mu}}_{0,i}, \myvecsym{\Sigma}_{0,i}\})$ = \text{update-DNN-and-prior}
$({\bm{\theta}},\{{\bm{\mu}}_{0,i}, \myvecsym{\Sigma}_{0,i}\}, \mymathcal{M}_t)$ \;
$\myvec{V} =\text{body}({\bm{\theta}})$ \;
Compute new features: $\myvecsym{\phi}_j = \myvecsym{\phi}(s_j;\myvec{V})$ for all $j \in \mymathcal{M}_t$ \;
\For{$i=1:N_a$}{
// Update sufficient statistics \;
$\myvecsym{\psi}_{i} = \sum_{j \in \mymathcal{M}_t: a_j = i} \myvecsym{\phi}_j y_j$ \;
$\myvecsym{\Phi}_{i} = \sum_{j \in \mymathcal{M}_t: a_j=i} \myvecsym{\phi}_j \myvecsym{\phi}_j^{\mkern-1.5mu\mathsf{T}}$ \;
$R_{i}^2 = \sum_{j \in \mymathcal{M}_t: a_j=i} y_j^2$ \;
$N_{i} = \sum_{j \in \mymathcal{M}_t: a_j=i} 1$ \;
// Update belief state \;
$({\bm{\mu}}_i, \myvecsym{\Sigma}_i, a_i, b_i) = \text{update-bel}({\bm{\mu}}_{0,i}, \myvecsym{\Sigma}_{0,i}, a_{0,i}, b_{0,i},
\myvecsym{\psi}_i, \myvecsym{\Phi}_i, R_i^2, N_i)$
}
}
\end{algorithm}
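The sampling steps at the top of the loop (draw $\tilde{\sigma}_i^2$ from an inverse gamma, draw $\tilde{{\bm{w}}}_i$ from a Gaussian, then act greedily on the samples) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code; the function name and argument layout are ours.

```python
import numpy as np

def select_action(phi, mus, Sigmas, a_params, b_params, rng):
    """Thompson-sampling action selection for a normal-inverse-gamma
    posterior, as in the main loop of the LiM2 pseudocode (names are ours).

    phi      : (d,) feature vector phi(s_t; V_t)
    mus      : per-arm posterior means, each (d,)
    Sigmas   : per-arm posterior covariances, each (d, d)
    a_params : per-arm inverse-gamma shape parameters a_i
    b_params : per-arm inverse-gamma scale parameters b_i
    """
    scores = []
    for mu, Sigma, a, b in zip(mus, Sigmas, a_params, b_params):
        # sigma2 ~ IG(a, b): draw X ~ Gamma(a, scale=1/b) and take 1/X
        sigma2 = 1.0 / rng.gamma(shape=a, scale=1.0 / b)
        # w ~ N(mu, sigma2 * Sigma)
        w = rng.multivariate_normal(mu, sigma2 * Sigma)
        scores.append(w @ phi)
    return int(np.argmax(scores))
```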
See \cref{algo:LIMupdate} for the pseudocode for the step
that updates the DNN and the prior on the last layer,
to avoid catastrophic forgetting.
\begin{algorithm}
\caption{LiM2 update step}
\label{algo:LIMupdate}
Input: ${\bm{\theta}}=(\myvec{V},\myvec{W})$, $\{{\bm{\mu}}_{0,i}, \myvecsym{\Sigma}_{0,i}\}$, $D$ \;
\For{$P_1$ \rm{steps}}{
Sample mini batch $D' = \{ (s_j, a_j, y_j) : j =1:N_b\}$ from $D$ \;
Compute old features: $\myvecsym{\phi}_{j,\mathrm{old}} = \myvecsym{\phi}(s_j;\myvec{V})$ for all $j \in D'$ \;
${\bm{\theta}}$ = \text{SGD}(${\bm{\theta}}$, $D'$) \;
$\myvec{V}$ = \text{params-for-body}(${\bm{\theta}}$), $\myvec{W}$ = \text{params-for-head}(${\bm{\theta}}$) \;
Compute new features: $\myvecsym{\phi}_j = \myvecsym{\phi}(s_j;\myvec{V})$ for all $j \in D'$ \;
\For{$i=1:N_a$}{
$\myvecsym{\Sigma}_{0,i}$ = \text{PGD}$( \myvecsym{\Sigma}_{0,i},
\{\phi_{j,\mathrm{old}}: a_j=i\}, \{\phi_{j}: a_j=i\})$ \;
}
}
${\bm{\mu}}_{0,i} = {\bm{w}}_i$ for each $i$ \;
Return ${\bm{\theta}}$, $\{{\bm{\mu}}_{0,i}, \myvecsym{\Sigma}_{0,i}\}$ \;
\end{algorithm}
\begin{algorithm}
\caption{Projected Gradient Descent}
\label{algo:PGD}
Input: $\myvec{A}$, $\{\myvecsym{\phi}_{j,\mathrm{old}}\}$, $\{\myvecsym{\phi}_{j}\}$ \\
$s_{j}^2 = \myvecsym{\phi}_{j,\mathrm{old}}^{\mkern-1.5mu\mathsf{T}} \myvec{A} \myvecsym{\phi}_{j,\mathrm{old}}$ for all $j$ \;
$\myvecsym{\Phi}_j = \myvecsym{\phi}_j \myvecsym{\phi}_j^{\mkern-1.5mu\mathsf{T}}$ for all $j$ \;
\For{$P_2$ \rm{steps}}{
${\bm{g}} = 2 \sum_{j} (\mathrm{tr}(\myvec{A} \myvecsym{\Phi}_j) - s_j^2) \myvecsym{\Phi}_j$ \\
$\myvec{A} = \myvec{A} - \eta {\bm{g}}$ \\
$(\myvecsym{\Lambda},\myvec{V}) = \text{eig}(\myvec{A})$ \\
$\mymathcal{N} = \{ k: \lambda_k < 0 \}$ \\
$\myvecsym{\Lambda}[k,k] = 0 \text{ for all } k \in \mymathcal{N}$ \\
$\myvec{V}[:,k] = 0 \text{ for all } k \in \mymathcal{N}$ \\
$\myvec{A} = \myvec{V} \myvecsym{\Lambda} \myvec{V}^{\mkern-1.5mu\mathsf{T}}$
}
Return $\myvec{A}$\;
\end{algorithm}
See \cref{algo:PGD}
for the projected gradient descent (PGD) step,
which solves
a semidefinite program to optimize the new covariance.
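The PGD step amounts to gradient descent on a least-squares objective over matrices, followed by a projection onto the PSD cone. A minimal NumPy sketch (variable names, step size, and iteration count are ours, for illustration):

```python
import numpy as np

def pgd_psd(A, phis_old, phis_new, steps=50, eta=0.01):
    """PGD sketch for the covariance-transfer step: adjust PSD matrix A so
    that the variances under the new features, phi_new^T A phi_new, match
    the old variances s_j^2 = phi_old^T A phi_old (our names)."""
    s2 = np.array([p @ A @ p for p in phis_old])        # targets s_j^2
    Phis = [np.outer(p, p) for p in phis_new]           # Phi_j
    for _ in range(steps):
        # gradient of sum_j (tr(A Phi_j) - s_j^2)^2
        g = sum(2.0 * (np.trace(A @ P) - s) * P for P, s in zip(Phis, s2))
        A = A - eta * g
        # project onto the PSD cone: zero out negative eigenvalues
        lam, V = np.linalg.eigh(A)
        lam = np.clip(lam, 0.0, None)
        A = (V * lam) @ V.T
    return A
```

Zeroing the eigenvalues alone suffices here: in the reconstruction $\myvec{V} \myvecsym{\Lambda} \myvec{V}^{\mkern-1.5mu\mathsf{T}}$, a column of $\myvec{V}$ whose eigenvalue is zero contributes nothing, so additionally zeroing those columns (as in the pseudocode) gives the same product.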
\eat{
More precisely,
every time we perform a minibatch update of
the neural network parameters,
we perform the following procedure.
We first compute the old feature
vectors for all the examples in the minibatch,
which we denote by $\myvecsym{\phi}_{j,\mathrm{old}}$,
then we update ${\bm{\theta}}$,
and then we compute the new features,
$\myvecsym{\phi}_j$.
Next we compute the
following quantities
for each example in the minibatch:
\begin{eqnarray}
s_{j,i}^2 = \myvecsym{\phi}_{j,\mathrm{old}}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{0,i} \myvecsym{\phi}_{j,\mathrm{old}}
\end{eqnarray}
for each $j \in B_i$,
where $B_i = \{j: a_j=i\}$ is the set of examples
in the minibatch where action $i$ was taken,
and $\myvecsym{\Sigma}_{0,i}$ is the current covariance.
Now we compute the new covariance matrix
$\myvecsym{\Sigma}_{0,i}$ by solving the following SDP:
\begin{eqnarray}
\myvecsym{\Sigma}_{0,i} = \operatornamewithlimits{argmin}_{\myvec{A} \succ 0}
\calL_{ji}(\myvec{A})
\end{eqnarray}
where
\begin{eqnarray}
\calL_{ji}(\myvec{A}) =
(\mathrm{tr}(\myvecsym{\Phi}_{ji}^{\mkern-1.5mu\mathsf{T}} \myvec{A}) - s_{ji}^2)^2
= (\mathrm{tr}(\myvec{A} \myvecsym{\Phi}_{ji}) - s_{ji}^2)^2
=\sum_{j \in B_i} (\mathrm{tr}(\myvecsym{\Phi}_{ji}^{\mkern-1.5mu\mathsf{T}} \myvec{A}) - s_{j,i}^2)^2
\end{eqnarray}
where $\myvecsym{\Phi}_{ji} = \myvecsym{\phi}_j \myvecsym{\phi}_j^{\mkern-1.5mu\mathsf{T}}$ for each
example $j \in B_i$.
To solve the SDP, we perform several steps of projected
gradient descent.
The gradient is given by
\begin{eqnarray}
\nabla_{\myvec{A}} \calL_{ji}(\myvec{A}) = 2 (\mathrm{tr}(\myvec{A} \myvecsym{\Phi}_{ji}) - s_{ji}^2) \myvecsym{\Phi}_{ji}
\end{eqnarray}
We can solve the optimization problem at iteration $t$
by computing
$\myvecsym{\Sigma}_{0,i} = \text{PGD}(\myvecsym{\Sigma}_{0,i}, \{\myvecsym{\Phi}_{ji} \}, \{ s_{ji}^2 \}, P, \eta=0.01/(t+1))$,
where the PGD function is defined in \cref{algo:PGD}.
(In \citep{Nabati2021}, they use $P=1$ PGD step.)
Finally, we set the new prior covariance to the result of this PGD step.
For the new prior mean,
we use ${\bm{\mu}}_{0,i} = \hat{{\bm{w}}}_i$,
as computed by SGD.
}
\eat{
After updating the prior,
we recompute the sufficient statistics,
$\myvecsym{\Phi}_i$ and $\myvecsym{\psi}_i$,
for all examples in the memory,
as in the neural linear method
of \cref{app:neuralLinear}.
Then we compute
$\myvecsym{\Sigma}_i = (\myvecsym{\Psi}_{0,i} + \myvecsym{\Phi}_i)^{-1}$
and
${\bm{\mu}}_i = \myvecsym{\Sigma}_i(\myvecsym{\Psi}_{0,i} {\bm{\mu}}_{0,i} + \myvecsym{\psi}_i)$,
where $\myvecsym{\Psi}_{0,i} = \myvecsym{\Sigma}_{0,i}^{-1}$ is the prior precision.
}
\eat{
In the code,
$\myvecsym{\psi}_{i}$ is denoted by {\tt f},
$\myvecsym{\Phi}_i$ is denoted by {\tt precision},
$\myvecsym{\Psi}_{0,i}$ is denoted by {\tt precision-prior},
$\myvecsym{\Sigma}_i$ is denoted by {\tt cov}
and
${\bm{\mu}}_i$ is denoted by {\tt mu}.
(Each of these are lists of length $N_a$.)
}
\subsection{Neural Thompson}
\label{app:NeuralTS}
\label{app:neuralTS}
In this section,
we discuss the ``Neural Thompson Sampling'' method of
\citep{neuralTS}.
We follow the presentation of
\citep{Levecque2021}, that shows the connection with linear
TS.
First consider the linear model
$r_{t,a} = {\bm{x}}_{t,a}^{\mkern-1.5mu\mathsf{T}} {\bm{w}}$.
We assume $\sigma^2$ is fixed, $\myvecsym{\Sigma}_0 = \kappa^2 \myvec{I}$,
${\bm{\mu}}_0 = {\bm{0}}$, and $\lambda = \frac{\sigma^2}{\kappa^2}$.
Recall from \cref{sec:knownSigma}
that the posterior over the parameters
is given by
\begin{align}
\myvecsym{\Sigma}_t &=
\left[ \frac{1}{\sigma^2} \left( \sigma^2 \myvecsym{\Sigma}_0^{-1}
+ \sum_{j=1}^t {\bm{x}}_j {\bm{x}}_j^{\mkern-1.5mu\mathsf{T}} \right) \right]^{-1}
= \sigma^2 \left[
\underbrace{\lambda \myvec{I}
+ \sum_{j=1}^t {\bm{x}}_j {\bm{x}}_j^{\mkern-1.5mu\mathsf{T}}}_{\myvec{B}_t} \right]^{-1} \\
{\bm{\mu}}_t &= \frac{1}{\sigma^2} \myvecsym{\Sigma}_t \myvecsym{\psi}_t = \myvec{B}_t^{-1} \myvecsym{\psi}_t
= \myvec{B}_t^{-1} \sum_{j=1}^t {\bm{x}}_j y_j
\end{align}
Thus the posterior over the parameters is given by
\begin{eqnarray}
p({\bm{w}}|D_{1:t})
= \mathcal{N}({\bm{w}}|{\bm{\mu}}_t, \lambda \kappa^2 \myvec{B}_t^{-1})
\end{eqnarray}
The induced posterior predictive
distribution over the reward is given by
\begin{align}
p(y|s,a,D_{1:t-1})
&= \mathcal{N}(y|\mu_{t,a},v_{t,a}) \\
\mu_{t,a} &= {\bm{x}}_{t,a}^{\mkern-1.5mu\mathsf{T}} \expect{{\bm{w}}} = {\bm{x}}_{t,a}^{\mkern-1.5mu\mathsf{T}} {\bm{\mu}}_{t-1} \\
v_{t,a} &= {\bm{x}}_{t,a}^{\mkern-1.5mu\mathsf{T}} \var{{\bm{w}}} {\bm{x}}_{t,a}
= \kappa^2 \lambda {\bm{x}}_{t,a}^{\mkern-1.5mu\mathsf{T}} \myvec{B}_{t-1}^{-1} {\bm{x}}_{t,a}
\end{align}
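These closed-form updates are easy to verify numerically. Below is a minimal NumPy sketch of the batch posterior and predictive above; the function names are ours.

```python
import numpy as np

def linear_posterior(X, y, sigma2=1.0, kappa2=1.0):
    """Posterior for the linear-Gaussian model above, with
    Sigma_0 = kappa2 * I, mu_0 = 0, lambda = sigma2 / kappa2.
    X is (t, d) with rows x_j; y is (t,)."""
    lam = sigma2 / kappa2
    B = lam * np.eye(X.shape[1]) + X.T @ X        # B_t
    psi = X.T @ y                                  # psi_t = sum_j x_j y_j
    mu = np.linalg.solve(B, psi)                   # mu_t = B_t^{-1} psi_t
    Sigma = sigma2 * np.linalg.inv(B)              # Sigma_t = sigma2 B_t^{-1}
    return mu, Sigma, B

def predictive(x, mu, B, sigma2=1.0):
    """Predictive mean and (epistemic) variance: note that
    kappa2 * lam * x^T B^{-1} x = sigma2 * x^T B^{-1} x."""
    return x @ mu, sigma2 * (x @ np.linalg.solve(B, x))
```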
Now consider the NTK case.
We replace ${\bm{x}}_{t,a} $ with
\begin{eqnarray}
\myvecsym{\phi}_{t,a} = \frac{1}{\sqrt{N_h}}
\nabla_{\vtheta} f^{\text{dnn}}(s,a;\vtheta)|_{\vtheta_{t-1}}
\end{eqnarray}
which is the scaled gradient of the network output with respect to the
parameters of the neural net (an MLP with $N_h$ units per layer).
If we set $\kappa^2=1/N_h$, then the posterior predictive
distribution for the reward becomes
\begin{align}
p(y|s,a,D_{1:t-1})
&= \mathcal{N}(y|\mu_{t,a},v_{t,a}) \\
\mu_{t,a} &= f^{\text{dnn}}(s_t, a; \vtheta_{t-1}) \\
v_{t,a} &=
\lambda \myvecsym{\phi}_{t,a}^{\mkern-1.5mu\mathsf{T}} \myvec{B}_{t-1}^{-1} \myvecsym{\phi}_{t,a}
\end{align}
where
\begin{align}
\myvec{B}_t &= \myvec{B}_{t-1} +
\myvecsym{\phi}(s_t, a_t; {\bm{\theta}}_t) \myvecsym{\phi}(s_t, a_t; {\bm{\theta}}_t)^{\mkern-1.5mu\mathsf{T}}
\end{align}
and we initialize with $\myvec{B}_0 = \lambda \myvec{I}$.
We sample a reward from this distribution for each action $a$,
and then choose the greedy action with respect to these samples.
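As a sanity check, when $f^{\text{dnn}}({\bm{x}};{\bm{\theta}})={\bm{\theta}}^{\mkern-1.5mu\mathsf{T}} {\bm{x}}$ the gradient feature is just $\myvecsym{\phi}_{t,a}={\bm{x}}_{t,a}$ and the scheme reduces to linear TS. A sketch of one round follows; the names and the exploration scale \texttt{nu} are ours (illustrative, not the paper's exact hyperparameters).

```python
import numpy as np

def neural_ts_step(x_actions, theta, B, f, grad_f, lam=1.0, nu=1.0, rng=None):
    """One Neural-TS round (a sketch, not the paper's exact algorithm).
    Score each action a by a sample from N(f(x_a; theta), nu^2 * v_a) with
    v_a = lam * phi_a^T B^{-1} phi_a and phi_a = grad_f(x_a, theta),
    then rank-one update B with the chosen action's feature."""
    rng = rng if rng is not None else np.random.default_rng()
    samples, phis = [], []
    for x in x_actions:
        phi = grad_f(x, theta)
        v = lam * (phi @ np.linalg.solve(B, phi))
        samples.append(rng.normal(f(x, theta), nu * np.sqrt(v)))
        phis.append(phi)
    a = int(np.argmax(samples))
    B = B + np.outer(phis[a], phis[a])   # B_t = B_{t-1} + phi phi^T
    return a, B
```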
\subsection{EKF}
\label{app:EKF}
In this section, we describe the extended Kalman filter
(EKF) formulation in more detail.
Consider the following nonlinear Gaussian state space model:
\begin{align}
{\bm{z}}_t &= {\bm{f}}_{t}({\bm{z}}_{t-1}) + \mathcal{N}({\bm{0}},\myvec{Q}_{t}) \\
{\bm{y}}_t &= {\bm{h}}_t({\bm{z}}_{t}) + \mathcal{N}({\bm{0}},\myvec{R}_t)
\end{align}
where $\vz_t \in \mathbb{R}^{N_z}$ is the hidden state,
$\vy_t \in \mathbb{R}^{N_y}$ is the observation,
${\bm{f}}_t: \mathbb{R}^{N_z} \rightarrow \mathbb{R}^{N_z}$ is the dynamics model,
and
${\bm{h}}_t: \mathbb{R}^{N_z} \rightarrow \mathbb{R}^{N_y}$ is the observation model.
The EKF linearizes the model at each step
by computing the following Jacobian matrices:
\begin{align}
\myvec{F}_{t} &= \frac{\partial {\bm{f}}_t({\bm{z}})}{\partial {\bm{z}}}|_{{\bm{\mu}}_{t-1}} \\
\myvec{H}_{t} &= \frac{\partial {\bm{h}}_t({\bm{z}})}{\partial {\bm{z}}}|_{{\bm{\mu}}_{t|t-1}}
\end{align}
(These terms are easy to compute using standard libraries such as JAX.)
The updates then become
\begin{align}
{\bm{\mu}}_{t|t-1} &= {\bm{f}}({\bm{\mu}}_{t-1}) \\
\myvecsym{\Sigma}_{t|t-1}&= \myvec{F}_{t} \myvecsym{\Sigma}_{t-1} \myvec{F}_{t}^{\mkern-1.5mu\mathsf{T}} + \myvec{Q}_{t} \\
{\bm{e}}_t &= \vy_t - {\bm{h}}({\bm{\mu}}_{t|t-1}) \\
\myvec{S}_t &= \myvec{H}_t \myvecsym{\Sigma}_{t|t-1} \myvec{H}_t^{\mkern-1.5mu\mathsf{T}} + \myvec{R}_t \\
\myvec{K}_t &= \myvecsym{\Sigma}_{t|t-1} \myvec{H}_t^{\mkern-1.5mu\mathsf{T}} \myvec{S}_t^{-1} \\
{\bm{\mu}}_t &= {\bm{\mu}}_{t|t-1} + \myvec{K}_t {\bm{e}}_t \\
\myvecsym{\Sigma}_t &= \myvecsym{\Sigma}_{t|t-1} - \myvec{K}_t \myvec{H}_t \myvecsym{\Sigma}_{t|t-1}
= \myvecsym{\Sigma}_{t|t-1} - \myvec{K}_t \myvec{S}_t \myvec{K}_t^{\mkern-1.5mu\mathsf{T}}
\end{align}
(In the case of Bernoulli bandits,
we can use the exponential family formulation of the EKF
discussed in \citep{Ollivier2018}.)
The cost of the EKF is $O(N_y N_z^2)$,
which can be prohibitive for large state spaces.
In such cases, a natural approximation is to use a block diagonal
approximation.
Let us define the following
Jacobian matrices for block $i$:
\begin{align}
\myvec{F}_{t}^i &= \frac{\partial {\bm{f}}^i_t({\bm{z}})}{\partial {\bm{z}}}|_{{\bm{\mu}}_{t-1}} \\
\myvec{H}_{t}^i &= \frac{\partial {\bm{h}}^i_t({\bm{z}})}{\partial {\bm{z}}}|_{{\bm{\mu}}_{t|t-1}}
\end{align}
We then compute the following updates for each block:
\begin{align}
{\bm{\mu}}_{t|t-1}^i &= {\bm{f}}^i({\bm{\mu}}_{t-1}) \\
\myvecsym{\Sigma}_{t|t-1}^i &= (\myvec{F}_{t}^i)^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t-1}^i \myvec{F}_t^i + \myvec{Q}_{t}^i \\
\myvec{S}_t &= \sum_i (\myvec{H}_t^i)^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t|t-1}^i \myvec{H}_t^i + \myvec{R}_t \\
\myvec{K}_t^i &= \myvecsym{\Sigma}_{t|t-1}^i \myvec{H}_t^i \myvec{S}_t^{-1} \\
{\bm{\mu}}_t^i &= {\bm{\mu}}_{t|t-1}^i + \myvec{K}_t^i {\bm{e}}_t \\
\myvecsym{\Sigma}_t^i &= \myvecsym{\Sigma}_{t|t-1}^i -\myvec{K}_t^i (\myvec{H}_t^i)^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t|t-1}^i
\end{align}
Now we specialize the above equations to the setting
of this paper, where the latent state
is ${\bm{z}}_t={\bm{\theta}}_t$, and the dynamics model ${\bm{f}}_t$ is the identity function.
Thus the state space model becomes
\begin{align}
p({\bm{\theta}}_t|{\bm{\theta}}_{t-1}) &= \mathcal{N}({\bm{\theta}}_t|{\bm{\theta}}_{t-1}, \myvec{Q}_t) \\
p(y_t|{\bm{x}}_t, {\bm{\theta}}_{t}) &= \mathcal{N}(y_t|f^{\text{dnn}}({\bm{x}}_t,{\bm{\theta}}_{t}), \myvec{R}_t)
\end{align}
where ${\bm{x}}_t=(s_t,a_t)$.
We set $\myvec{R}_t = \sigma^2$,
and $\myvec{Q}_t=\epsilon \myvec{I}$, to allow for a small amount
of parameter drift.
The EKF updates become
\begin{align}
\myvecsym{\Sigma}_{t|t-1} &= \myvecsym{\Sigma}_{t-1} + \myvec{Q}_t \\
\myvec{S}_t &= \myvec{H}_t^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t|t-1} \myvec{H}_t + \myvec{R}_t \\
\myvec{K}_t &= \myvecsym{\Sigma}_{t|t-1} \myvec{H}_t \myvec{S}_t^{-1} \\
{\bm{\mu}}_t &= {\bm{\mu}}_{t-1} + \myvec{K}_t {\bm{e}}_t \\
\myvecsym{\Sigma}_t &= \myvecsym{\Sigma}_{t|t-1} - \myvec{K}_t \myvec{H}_t^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t|t-1}
\end{align}
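For a scalar reward, $\myvec{H}_t$ is just the gradient of $f^{\text{dnn}}$ with respect to the parameters, and the update above is a handful of matrix-vector products. A self-contained NumPy sketch follows; it uses a finite-difference Jacobian for self-containedness (in practice an autodiff library such as JAX would be used), and the names and defaults are ours.

```python
import numpy as np

def ekf_step(mu, Sigma, x, y, f, sigma2=0.1, eps=1e-4, h=1e-6):
    """One EKF update for theta with identity dynamics (a sketch).
    f(x, theta) -> scalar prediction. R_t = sigma2, Q_t = eps * I."""
    d = mu.size
    Sigma_pred = Sigma + eps * np.eye(d)          # Sigma_{t|t-1}
    # H_t = d f / d theta at mu, by central finite differences
    H = np.zeros(d)
    for k in range(d):
        e = np.zeros(d)
        e[k] = h
        H[k] = (f(x, mu + e) - f(x, mu - e)) / (2.0 * h)
    err = y - f(x, mu)                            # innovation e_t
    S = H @ Sigma_pred @ H + sigma2               # S_t (a scalar here)
    K = Sigma_pred @ H / S                        # Kalman gain K_t
    mu_new = mu + K * err
    Sigma_new = Sigma_pred - np.outer(K, H @ Sigma_pred)
    return mu_new, Sigma_new
```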
The block diagonal version becomes
\begin{align}
\myvecsym{\Sigma}_{t|t-1}^i &= \myvecsym{\Sigma}_{t-1}^i + \myvec{Q}_{t}^i \\
\myvec{S}_t &= \sum_i (\myvec{H}_t^i)^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t|t-1}^i \myvec{H}_t^i + \myvec{R}_t \\
\myvec{K}_t^i &= \myvecsym{\Sigma}_{t|t-1}^i \myvec{H}_t^i \myvec{S}_t^{-1} \\
{\bm{\mu}}_t^i &= {\bm{\mu}}_{t-1}^i + \myvec{K}_t^i {\bm{e}}_t \\
\myvecsym{\Sigma}_t^i &= \myvecsym{\Sigma}_{t|t-1}^i -\myvec{K}_t^i (\myvec{H}_t^i)^{\mkern-1.5mu\mathsf{T}} \myvecsym{\Sigma}_{t|t-1}^i
\end{align}
This is called the ``decoupled EKF''
\citep{Puskorius1991,Puskorius2003}.
To match the notation in \citep{Puskorius2003},
let us define
$\myvec{P}_t=\myvecsym{\Sigma}_{t|t-1}$,
${\bm{w}}_t = {\bm{\mu}}_{t|t-1}$,
$\myvec{A}_t=\myvec{S}_t^{-1}$,
$\hat{\myvec{H}}_t^{\mkern-1.5mu\mathsf{T}} = \myvec{H}_t$.
(Note that $\myvec{A}_t$ is an $N_o \times N_o$ matrix, and hence a scalar
if $y_t \in \mathbb{R}$.)
\eat{
Then we can rewrite the above as follows:
\begin{align}
\myvec{A}_t &= \left( \myvec{R}_t + \myvec{H}_t^{\mkern-1.5mu\mathsf{T}} \myvec{P}_t \myvec{H}_t\right)^{-1} \\
\myvec{K}_t &= \myvec{P}_t \myvec{H}_t \myvec{A}_t \\
{\bm{w}}_{t+1} &= {\bm{w}}_t + \myvec{K}_t ({\bm{y}}_t - \hat{{\bm{y}}}_t) \\
\myvec{P}_{t+1} &= \myvec{P}_t - \myvec{K}_t \hat{\myvec{H}}_t^{\mkern-1.5mu\mathsf{T}} \myvec{P}_t + \myvec{Q}_t
\end{align}
}
Then we can rewrite the above as follows:
\begin{align}
\myvec{A}_t &= \left( \myvec{R}_t + \sum_i (\myvec{H}_t^i)^{\mkern-1.5mu\mathsf{T}} \myvec{P}_t^i \myvec{H}_t^i\right)^{-1} \\
\myvec{K}_t^i &= \myvec{P}_t^i \myvec{H}_t^i \myvec{A}_t \\
{\bm{w}}_{t+1}^i &= {\bm{w}}_t^i + \myvec{K}_t^i {\bm{e}}_t\\
\myvec{P}_{t+1}^i &= \myvec{P}_t^i - \myvec{K}_t^i (\hat{\myvec{H}}_t^i)^{\mkern-1.5mu\mathsf{T}} \myvec{P}_t^i + \myvec{Q}_t^i
\end{align}
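In code, the decoupled update touches each block independently; only the innovation covariance couples the blocks. A NumPy sketch for a scalar observation (our names; the default process noise is illustrative):

```python
import numpy as np

def decoupled_ekf_step(mus, Sigmas, Hs, err, R=0.1, Qs=None):
    """Block-diagonal ("decoupled") EKF update for a scalar observation
    (a sketch). Block i has mean mus[i] (d_i,), covariance Sigmas[i]
    (d_i, d_i), and gradient block Hs[i] (d_i,); err is the innovation."""
    if Qs is None:
        Qs = [1e-6 * np.eye(m.size) for m in mus]
    Ps = [S + Q for S, Q in zip(Sigmas, Qs)]                 # predict step
    # innovation variance assembled from all blocks (this is 1 / A_t)
    S_t = R + sum(H @ P @ H for H, P in zip(Hs, Ps))
    new_mus, new_Sigmas = [], []
    for mu, P, H in zip(mus, Ps, Hs):
        K = P @ H / S_t                                      # block gain
        new_mus.append(mu + K * err)
        new_Sigmas.append(P - np.outer(K, H @ P))
    return new_mus, new_Sigmas
```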
\subsubsection*{\bibname}}
\newcommand{../figures}{figures}
\input{packages}
\input{macros}
\input{commands}
\usepackage[
style=alphabetic,
citestyle=alphabetic,
natbib=true,
backend=bibtex,
maxcitenames=3, mincitenames=1,
maxbibnames=6, minbibnames=1,
firstinits=true,
backref=true,
hyperref=true,
doi=false, isbn=false, url=false,
arxiv=abs
]{biblatex}
\eat{
\usepackage[
style=apa,
citestyle=apa,
natbib=true,
backend=biber,
backref=false,
hyperref=true,
doi=false, isbn=false, url=false,
arxiv=abs
]{biblatex}
}
\DefineBibliographyStrings{english}{%
backrefpage = {page}
backrefpages = {pages}
}
\addbibresource{bib.bib}
\usepackage{authblk}
\begin{document}
\eat{
\twocolumn[
\aistatstitle{Efficient Online Bayesian Inference for Neural Bandits}
\aistatsauthor{ Gerardo Duran-Martin \And Aleyna Kara \And Kevin Murphy }
\aistatsaddress{ Queen Mary University \And Boğaziçi University \And Google Research }
]
}
\title{Efficient Online Bayesian Inference for Neural Bandits}
\author[1]{Gerardo Duran-Martin}
\affil[1]{Queen Mary University, UK}
\author[2]{Aleyna Kara}
\affil[2]{Boğaziçi University, Turkey}
\author[3]{Kevin Murphy}
\affil[3]{Google Research, USA}
\maketitle
\input{abstract}
\input{intro}
\input{related}
\input{methods}
\input{results}
\input{discussion}
\input{ack}
\newpage
\section{Discussion}
\label{sec:discuss}
\label{sec:discussion}
We have shown that we can perform efficient online
Bayesian inference for large neural networks by applying
the extended Kalman filter to a low dimensional version
of the parameter space.
In future work, we would like to apply the method
to other sequential decision problems,
such as Bayesian optimization and active learning.
We also intend to extend it
to Bernoulli and other GLM bandits \citep{Filippi2010}.
Fortunately, we can generalize the EKF (and hence our method)
to work with the exponential family,
as explained in \citep{Ollivier2018}.
\eat{
Another extension we hope to pursue in the future
is to replace the EKF algorithm with particle filtering,
possibly with an EKF proposal;
this would be similar to \citep{deFreitas00},
but would perform inference in a subspace.
The non-parametric nature of PF could give improved posterior approximations,
which could in turn result in better decision making and lower regret
\citep{Phan2019}.
}
Finally, a note on societal impact.
Our method makes online Bayesian inference for neural networks more tractable,
which could increase their use. We view this as a positive thing, since
Bayesian methods can express uncertainty, and may be less prone to
making confident but wrong decisions \citep{Bhatt2021}.
However, we acknowledge that bandit algorithms are often
used for recommender systems and online advertising, which can have some
unintended harmful societal effects \citep{Milano2020}.
\section{Introduction}
Contextual bandit problems
(see e.g., \citep{Lattimore2019,Slivkins2019})
are a special case of reinforcement learning,
in which the state (context) at each time step is chosen independently,
rather than being dependent on the past history of states and actions.
Despite this limitation,
contextual bandits
are widely used in real-world applications,
such as
recommender systems \citep{Li10linucb,Guo2020bandits},
advertising \citep{McMahan13,Du2021kdd},
healthcare \citep{Greenewald2017,Aziz2021},
etc.
The goal is to maximize the sequence of rewards $y_t$
obtained by picking actions $a_t$ in response
to each input context or state $s_t$.
To do this, the decision making agent
must learn a reward model $\expect{y_t|s_t,a_t,{\bm{\theta}}} = f^{\text{dnn}}(s_t,a_t;{\bm{\theta}})$,
where ${\bm{\theta}}$ are the unknown model parameters.
Unlike supervised learning, the agent does not get to see
the ``correct'' output, but instead only
gets feedback on whether the choice it made was good or bad
(in the form of the reward signal).
If the agent knew ${\bm{\theta}}$, it could pick the optimal
action using
$a_t^* = \operatornamewithlimits{argmax}_{a \in \mymathcal{A}} f^{\text{dnn}}(s_t,a;{\bm{\theta}})$.
However, since ${\bm{\theta}}$ is unknown, the agent must ``explore'',
so it can gather information about the reward function,
before it can ``exploit'' its model.
In the bandit literature,
the two most common solutions to solving the explore-exploit dilemma
are based on the upper confidence bound (UCB) method
(see e.g., \citep{Li10linucb,Kaufmann2012})
and the Thompson Sampling (TS) method (see e.g., \citep{Agrawal2013icml,Russo2018}).
The key bottleneck in both UCB and TS is efficiently computing the posterior
$p({\bm{\theta}}|D_{1:t})$ in an online fashion,
where $D_{1:t}=\{(s_i,a_i,y_i): i=1:t\}$
is all the data seen so far.
This can be done in closed form for linear-Gaussian models,
but for nonlinear models, such as deep neural networks (DNNs), it is computationally infeasible.
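For intuition, in the linear-Gaussian case the TS recipe is: maintain a Gaussian posterior per arm, sample parameters from it, and act greedily on the sample. A minimal self-contained sketch follows; all names and hyperparameters are ours, for illustration only.

```python
import numpy as np

def thompson_bandit(contexts, true_ws, T=300, sigma2=0.1, rng=None):
    """Minimal linear-Gaussian Thompson sampling loop (a sketch).
    Arm a yields reward y = w_a^T s + noise; we keep a conjugate
    Gaussian posterior per arm and act greedily on a posterior sample."""
    rng = rng if rng is not None else np.random.default_rng(0)
    d, n_arms = len(true_ws[0]), len(true_ws)
    Bs = [np.eye(d) for _ in range(n_arms)]      # scaled precisions per arm
    psis = [np.zeros(d) for _ in range(n_arms)]  # sum_j s_j y_j per arm
    rewards = []
    for _ in range(T):
        s = contexts(rng)
        # sample w_a from each arm's posterior and score the context
        samples = []
        for B, psi in zip(Bs, psis):
            mu = np.linalg.solve(B, psi)
            w = rng.multivariate_normal(mu, sigma2 * np.linalg.inv(B))
            samples.append(w @ s)
        a = int(np.argmax(samples))              # greedy w.r.t. the samples
        y = true_ws[a] @ s + np.sqrt(sigma2) * rng.normal()
        Bs[a] += np.outer(s, s)                  # conjugate posterior update
        psis[a] += s * y
        rewards.append(y)
    return np.array(rewards)
```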
In this paper, we propose to use
a version of the extended Kalman filter
to recursively approximate the parameter posterior
$p({\bm{\theta}}|D_{1:t})$ using constant time and memory
(i.e., independent of $T$).
The main novelty of our approach is that
we show how to scale the EKF to large neural networks
by leveraging recent results that show that deep neural networks often
have very few ``degrees of freedom''
(see e.g.,
\citep{Li2018Intrinsic,Izmailov2019,Larsen2021degrees}).
Thus we can compute a low-dimensional
subspace
and perform Bayesian filtering in the subspace
rather than the original parameter space.
We therefore call our method ``Bayesian subspace bandits''.
Although Bayesian inference in DNN subspaces has previously
been explored (see related work in \cref{sec:related}),
it has not been done in an online or bandit setting, as far as we know.
Since we are using approximate inference,
we lose the well-known optimality
of Thompson sampling \citep{Phan2019};
we leave proving regret bounds for our method to future work.
In this paper, we restrict attention to an empirical comparison.
We show that our method works well in practice
on various datasets,
including the ``Deep Bayesian Bandits Showdown'' benchmark
\citep{Riquelme2018},
the MNIST dataset, and a recommender system dataset.
In addition, our method uses much less memory and time
than most other methods.
Our algorithm is not specific to bandits,
and can be applied to any situation that requires efficient online computation
of the posterior. This includes tasks such as lifelong learning,
Bayesian optimization,
active learning, reinforcement learning, etc.\footnote{
These problems are all very closely related.
For example, BayesOpt is a kind of (non-contextual) bandit problem
with an infinite number of arms;
the goal is to identify the action (input to the reward
function $f: \mathbb{R}^D \rightarrow \mathbb{R}$)
that maximizes the output.
Active learning is closely related to BayesOpt, but now the actions correspond to
choosing data points ${\bm{x}} \in \mathbb{R}^D$ that we want to label, and our
objective is to minimize uncertainty about the underlying function $f$,
rather than find the location of its maximum.
}.
However, we leave such extensions to future work.
\chapter[#1]{#1\raisebox{.3\baselineskip}{\normalsize\footnotemark}} \chapterfootnote{#2}}
\newcommand{*}{*}
\newcommand{ {\bf (Unfinished)} }{ {\bf (Unfinished)} }
\newcommand{scikit-learn\xspace}{scikit-learn\xspace}
\newcommand{NumPy\xspace}{NumPy\xspace}
\newcommand{SciPy\xspace}{SciPy\xspace}
\newcommand{JAX\xspace}{JAX\xspace}
\newcommand{\keywordIndex}[1]{#1\index{#1}}
\newcommand{\keyword}[1]{\keywordIndex{#1}}
\newcommand{\keywordSpecial}[2]{{\bf #1}\index{#2}}
\newcommand{\keywordDef}[1]{{\bf #1}\index{#1|textbf}}
\newcommand{\keywordBold}[1]{{\bf #1}}
\newcommand{\partial}{\partial}
\newcommand{H}{H}
\newcommand{W}{W}
\newcommand{\ensuremath{\vR_{i,j}}}{\ensuremath{\myvec{R}_{i,j}}}
\newcommand{\ensuremath{\vR_{i}}}{\ensuremath{\myvec{R}_{i}}}
\newcommand{s}{s}
\newcommand{\vs}{{\bm{s}}}
\newcommand{\textsc{AProx}\xspace}{\textsc{AProx}\xspace}
\newcommand{\textsc{AdaGrad}\xspace}{\textsc{AdaGrad}\xspace}
\newcommand{\textsc{AdaDelta}\xspace}{\textsc{AdaDelta}\xspace}
\newcommand{\textsc{RMSProp}\xspace}{\textsc{RMSProp}\xspace}
\newcommand{\textsc{RPROP}\xspace}{\textsc{RPROP}\xspace}
\newcommand{\textsc{Adam}\xspace}{\textsc{Adam}\xspace}
\newcommand{\textsc{AdamW}\xspace}{\textsc{AdamW}\xspace}
\newcommand{\textsc{Padam}\xspace}{\textsc{Padam}\xspace}
\newcommand{\textsc{Nadam}\xspace}{\textsc{Nadam}\xspace}
\newcommand{\textsc{AMSGrad}\xspace}{\textsc{AMSGrad}\xspace}
\newcommand{\textsc{NAG}\xspace}{\textsc{NAG}\xspace}
\newcommand{\textsc{Yogi}\xspace}{\textsc{Yogi}\xspace}
\newcommand{\params}{\vtheta}
\newcommand{f^{\text{lin}}}{f^{\text{lin}}}
\newcommand{\vT}{\myvec{T}}
\newcommand{\vK}{\myvec{K}}
\newcommand{\calT}{\mymathcal{T}}
\newcommand{\mathrm{MMD}}{\mathrm{MMD}}
\newcommand{\text{MDS}}{\text{MDS}}
\newcommand{\text{PCA}}{\text{PCA}}
\newcommand{\text{kPCA}}{\text{kPCA}}
\newcommand{Nystr{\"o}m\xspace}{Nystr{\"o}m\xspace}
\newcommand{x}{x}
\newcommand{\vx}{{\bm{x}}}
\newcommand{\vv}{{\bm{v}}}
\newcommand{\msg}[2]{m_{#1 \rightarrow #2}}
\newcommand{\vm}{{\bm{m}}}
\newcommand{\msgbottom}[2]{\msg{#1}{#2}^-}
\newcommand{\msgtop}[2]{\msg{#1}{#2}^+}
\newcommand{\zeta}{\zeta}
\newcommand{\Psi}{\Psi}
\newcommand{\ell}{\ell}
\newcommand{\linkfn_{\text{can}}}{\ell_{\text{can}}}
\newcommand{H}{H}
\newcommand{K}{K}
\newcommand{V}{V}
\newcommand{\nunits}[1]{D_{#1}}
\newcommand{N}{N}
\newcommand{N_*}{N_*}
\newcommand{M}{M}
\newcommand{X}{X}
\newcommand{*}{*}
\newcommand{Z}{Z}
\newcommand{T}{T}
\newcommand{\Kmat}[2]{\myvec{K}_{#1,#2}}
\newcommand{\Kmat{\queryset}{\queryset}}{\Kmat{T}{T}}
\newcommand{\Kmat{\queryset}{\induceset}}{\Kmat{T}{Z}}
\newcommand{\Kmat{\induceset}{\queryset}}{\Kmat{Z}{T}}
\newcommand{\Kmat{\trainset}{\trainset}}{\Kmat{X}{X}}
\newcommand{\Kmat{\testset}{\testset}}{\Kmat{*}{*}}
\newcommand{\Kmat{\trainset}{\testset}}{\Kmat{X}{*}}
\newcommand{\Kmat{\testset}{\trainset}}{\Kmat{*}{X}}
\newcommand{\Kmat{\induceset}{\induceset}}{\Kmat{Z}{Z}}
\newcommand{\Kmat{\trainset}{\induceset}}{\Kmat{X}{Z}}
\newcommand{\Kmat{\induceset}{\trainset}}{\Kmat{Z}{X}}
\newcommand{\Kmat{\testset}{\induceset}}{\Kmat{*}{Z}}
\newcommand{\vk_{\testset,\induceset}}{{\bm{k}}_{*,Z}}
\newcommand{\vk_{\induceset,\testset}}{{\bm{k}}_{Z,*}}
\newcommand{\Kmat{\induceset}{\testset}}{\Kmat{Z}{*}}
\newcommand{\Qmat}[2]{\myvec{Q}_{#1,#2}}
\newcommand{\Qmat{\trainset}{\trainset}}{\Qmat{X}{X}}
\newcommand{\tilde{\vQ}_{\trainset,\trainset}}{\tilde{\myvec{Q}}_{X,X}}
\newcommand{\Qmat{\testset}{\testset}}{\Qmat{*}{*}}
\newcommand{\tilde{\vQ}_{\testset,\testset}}{\tilde{\myvec{Q}}_{*,*}}
\newcommand{\Qmat{\trainset}{\testset}}{\Qmat{X}{*}}
\newcommand{\Qmat{\testset}{\trainset}}{\Qmat{*}{X}}
\newcommand{\Qmat{\induceset}{\induceset}}{\Qmat{Z}{Z}}
\newcommand{\Qmat{\trainset}{\induceset}}{\Qmat{X}{Z}}
\newcommand{\Qmat{\induceset}{\trainset}}{\Qmat{Z}{X}}
\newcommand{\Qmat{\testset}{\induceset}}{\Qmat{*}{Z}}
\newcommand{\Qmat{\induceset}{\testset}}{\Qmat{Z}{*}}
\newcommand{\vX_{\trainset}}{\myvec{X}}
\newcommand{\vX_{\testset}}{\myvec{X}_{*}}
\newcommand{\vX_{\queryset}}{\myvec{X}_{T}}
\newcommand{\vx_{\testset}}{{\bm{x}}_{*}}
\newcommand{\vX_{\induceset}}{\myvec{Z}}
\newcommand{\vy_{\trainset}}{{\bm{y}}}
\newcommand{\myvec{y}_*}{\myvec{y}_*}
\newcommand{\vF_{\trainset}}{\myvec{F}_{X}}
\newcommand{\vf_{\trainset}}{{\bm{f}}_{X}}
\newcommand{\vf_{\testset}}{{\bm{f}}_{*}}
\newcommand{f_{\testset}}{f_{*}}
\newcommand{\vf_{\induceset}}{{\bm{f}}_{Z}}
\newcommand{\vF_{\induceset}}{\myvec{F}_{Z}}
\newcommand{\vf_{\queryset}}{{\bm{f}}_{T}}
\newcommand{\vmu_{\trainset}}{{\bm{\mu}}_{X}}
\newcommand{\vmu_{\testset}}{{\bm{\mu}}_{*}}
\newcommand{\mu_{\testset}}{\mu_{*}}
\newcommand{\hat{\vK}_{\trainset,\trainset}}{\hat{\myvec{K}}_{X,X}}
\newcommand{\hat{\vQ}_{\trainset,\trainset}}{\hat{\myvec{Q}}_{X,X}}
\newcommand{p_{\text{keep}}}{p_{\text{keep}}}
\newcommand{q_{\data,\infparams}}{q_{D,\vphi}}
\newcommand{p_{N}}{p_{N}}
\newcommand{\fnoise}{\overline{f}}
\mdfdefinestyle{fearns}{%
linecolor=red,
outerlinewidth=50pt,
roundcorner=20pt,
innertopmargin=20pt,
innerbottommargin=20pt,
innerrightmargin=20pt,
innerleftmargin=20pt,
backgroundcolor=yellow!50!white}
\newenvironment{fearns}
{
\begin{mdframed}[style=fearns]
}
{
\end{mdframed}
}
\mdfdefinestyle{coptcomment}{%
linecolor=blue,
innertopmargin=10pt,
innerbottommargin=10pt,
innerrightmargin=10pt,
innerleftmargin=10pt,
backgroundcolor=green!25!white}
\newenvironment{coptcomment}
{
\begin{mdframed}[style=coptcomment]
}
{
\end{mdframed}
}
\newcommand{\smallissue}[1]{\textcolor{blue}{#1}}
\mdfdefinestyle{alemi}{%
linecolor=blue,
outerlinewidth=50pt,
roundcorner=20pt,
innertopmargin=20pt,
innerbottommargin=20pt,
innerrightmargin=20pt,
innerleftmargin=20pt,
backgroundcolor=blue!30!white}
\newenvironment{alemi}
{
\begin{mdframed}[style=alemi]
}
{
\end{mdframed}
}
\newcommand{COVID-19\xspace}{COVID-19\xspace}
\newcommand{SARS-CoV-2\xspace}{SARS-CoV-2\xspace}
\newcommand{\blacksquare}{\blacksquare}
\newcommand{\square}{\square}
\newtheorem{lemma}{Lemma}
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\newcommand{a}{a}
\newcommand{h}{h}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\va}{{\bm{a}}}
\newcommand{\ytilde}{\tilde{y}}
\def\code#1{\small{\texttt{#1}}}
\newenvironment{myexercises}
{\begin{exercises}}
{\end{exercises}}
\newcommand{{\bf *}}{{\bf *}}
\newcommand{\myexercise}[1]{\exercisenote{#1} }
\newcommand{\exsrc}[1]{(Source: #1.)}
\newcommand{}{}
\newcommand{\mysoln}[1]{\subsection{#1}}
\newcommand{\solnsrc}[1]{Source: #1.}
\newcommand{\points}[1]{}
\newcommand{\mytext}[1]{\scriptsize{\textsc{#1}}}
\newcommand{\mytext{DEPT}}{\mytext{DEPT}}
\newcommand{\mytext{MALE}}{\mytext{MALE}}
\newcommand{\mytext{GENDER}}{\mytext{GENDER}}
\newcommand{\mathbb{H}_{\mathrm{MF}}}{\mathbb{H}_{\mathrm{MF}}}
\newcommand{\mathbb{H}_{\mathrm{Bethe}}}{\mathbb{H}_{\mathrm{Bethe}}}
\newcommand{\mathbb{H}_{\mathrm{TRBP}}}{\mathbb{H}_{\mathrm{TRBP}}}
\newcommand{\mathbb{H}_{\mathrm{Kikuchi}}}{\mathbb{H}_{\mathrm{Kikuchi}}}
\newcommand{\mathbb{H}_{\mathrm{ep}}}{\mathbb{H}_{\mathrm{ep}}}
\newcommand{\mathbb{H}_{\mathrm{Convex}}}{\mathbb{H}_{\mathrm{Convex}}}
\newcommand{\mathbb{H}}{\mathbb{H}}
\newcommand{\calF}{\mymathcal{F}}
\newcommand{\calF_{\mathrm{Bethe}}}{\mymathcal{F}_{\mathrm{Bethe}}}
\newcommand{\calF_{\mathrm{MF}}}{\mymathcal{F}_{\mathrm{MF}}}
\newcommand{\calF_{\mathrm{Exact}}}{\mymathcal{F}_{\mathrm{Exact}}}
\newcommand{\calF_{\mathrm{TRBP}}}{\mymathcal{F}_{\mathrm{TRBP}}}
\newcommand{\calF_{\mathrm{Kikuchi}}}{\mymathcal{F}_{\mathrm{Kikuchi}}}
\newcommand{\calF_{\mathrm{Convex}}}{\mymathcal{F}_{\mathrm{Convex}}}
\newcommand{\calL_{\mathrm{Bethe}}}{\mymathcal{L}_{\mathrm{Bethe}}}
\newcommand{\calL_{\mathrm{MF}}}{\mymathcal{L}_{\mathrm{MF}}}
\newcommand{\calL_{\mathrm{Exact}}}{\mymathcal{L}_{\mathrm{Exact}}}
\newcommand{\calL_{\mathrm{TRBP}}}{\mymathcal{L}_{\mathrm{TRBP}}}
\newcommand{\calL_{\mathrm{Kikuchi}}}{\mymathcal{L}_{\mathrm{Kikuchi}}}
\newcommand{\calL_{\mathrm{Convex}}}{\mymathcal{L}_{\mathrm{Convex}}}
\newcommand{\text{exact}}{\text{exact}}
\newcommand{\text{convex}}{\text{convex}}
\newcommand{\text{concave}}{\text{concave}}
\newcommand{\textsc{GraphEDM}\xspace}{\textsc{GraphEDM}\xspace}
\newcommand{\mathrm{ENC}}{\mathrm{ENC}}
\newcommand{\mathrm{DEC}}{\mathrm{DEC}}
\newcommand{\mathcal{L}_{G,\mathrm{RECON}}}{\mathcal{L}_{G,\mathrm{RECON}}}
\newcommand{\mathcal{L}_\mathrm{SUP}^S}{\mathcal{L}_\mathrm{SUP}^S}
\newcommand{\mathcal{L}_\mathrm{REG}}{\mathcal{L}_\mathrm{REG}}
\newcommand{\vz_{(i)}}{{\bm{z}}_{(i)}}
\newcommand{\bl}[1]{{\color{blue}{#1}}}
\newcommand{\mr}[1]{{\color{cyan}{#1}}}
\newcommand{\sm}[1]{{\color{red}{#1}}}
\newcommand{\km}[1]{{\color{green}{#1}}}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\newcommand{\mathcal{G}(\vz; \vtheta)}{\mathcal{G}({\bm{z}}; {\bm{\theta}})}
\newcommand{\mathcal{D}(\vx; \vphi)}{\mathcal{D}({\bm{x}}; \myvecsym{\phi})}
\newcommand{\mathcal{R}(\vx; \vphi)}{\mathcal{R}({\bm{x}}; \myvecsym{\phi})}
\newcommand{p^\ast\!} %\breve{p}}{p^\ast\!}
\newcommand{\tdist(\vx)}{p^\ast\!} %\breve{p}({\bm{x}})}
\newcommand{q(\vz)}{q({\bm{z}})}
\newcommand{q(\vz')}{q({\bm{z}}')}
\newcommand{q_{\theta}}{q_{\theta}}
\newcommand{q_{\theta}(\vx)}{q_{\theta}({\bm{x}})}
\newcommand{q_{\theta_t}(\vx)}{q_{\theta_t}({\bm{x}})}
\newcommand{\qtheta}{q_{\theta}}
\newcommand{\qthetax}{q_{\theta}(\vx)}
\newcommand{r_{\phi}}{r_{\phi}}
\newcommand{r_{\phi}(\vx)}{r_{\phi}({\bm{x}})}
\newcommand{\blue}[1]{\textcolor{blue}{#1}}
\newcommand{\bb}[1]{\mathbf{#1}}
\newcommand{\bb{S}}{\bb{S}}
\newcommand*{^{\mkern-1.5mu\mathsf{T}}}{^{\mkern-1.5mu\mathsf{T}}}
\newcommand{{\boldsymbol{\theta}}}{{\boldsymbol{\theta}}}
\newcommand{{\boldsymbol{\phi}}}{{\boldsymbol{\phi}}}
\newcommand{p_{\bT}}{p_{{\boldsymbol{\theta}}}}
\newcommand{p_{\bT^*}}{p_{{\boldsymbol{\theta}}^*}}
\newcommand{\mcal}[1]{\mathcal{#1}}
\newcommand{\mbs}[1]{{\boldsymbol{#1}}}
\newcommand{\mrel}[1]{\mathrel{#1}}
\newcommand{p_{\bT'}}{p_{{\boldsymbol{\theta}}'}}
\newcommand{p_{\mathcal{D}}}{p_{\mathcal{D}}}
\renewcommand{\ensuremath{\mathrm{pd}}\xspace}{p_{\mathrm{data}}}
\newcommand{p_{\mathrm{n}}}{p_{\mathrm{n}}}
\newcommand{p_{\mathrm{n,data}}}{p_{\mathrm{n,data}}}
\newcommand{p_{\mathrm{n,\bT}}}{p_{\mathrm{n,{\boldsymbol{\theta}}}}}
\newcommand{p_{\mathrm{n,\bT^*}}}{p_{\mathrm{n,{\boldsymbol{\theta}}^*}}}
\newcommand{E_{\bT}}{E_{{\boldsymbol{\theta}}}}
\newcommand{E_{\bT*}}{E_{{\boldsymbol{\theta}}*}}
\newcommand{Z_{\bT}}{Z_{{\boldsymbol{\theta}}}}
\newcommand{\abs}[1]{\lvert#1\rvert}
\newcommand{\mbf}[1]{\mathbf{#1}}
\newcommand{\bs}[1]{\boldsymbol{#1}}
\newcommand{\mbb}[1]{\mathbb{#1}}
\newcommand{\mathrm{d}}{\mathrm{d}}
\newcommand{\mathrm}{\mathrm}
\def\mathrm{\mathchar'26\mkern-12mu d}{\mathrm{\mathchar'26\mkern-12mu d}}
\newcommand{\mathbf{x}}{\mathbf{x}}
\newcommand{\mathbf{A}}{\mathbf{A}}
\newcommand{\mathbf{B}}{\mathbf{B}}
\newcommand{\mathbf{W}}{\mathbf{W}}
\newcommand{\mathbf{V}}{\mathbf{V}}
\newcommand{\mathbf{M}}{\mathbf{M}}
\newcommand{\mathbf{b}}{\mathbf{b}}
\newcommand{\mathbf{v}}{\mathbf{v}}
\newcommand{\mathbf{z}}{\mathbf{z}}
\newcommand{\mathbf{I}}{\mathbf{I}}
\newcommand{\mathbf{t}}{\mathbf{t}}
\newcommand{\mathbf{u}}{\mathbf{u}}
\newcommand{\mathbf{r}}{\mathbf{r}}
\newcommand{\mathbf{0}}{\mathbf{0}}
\newcommand{{\bs{\epsilon}}}{{\bs{\epsilon}}}
\newcommand{{\boldsymbol{\theta}}}{{\boldsymbol{\theta}}}
\newcommand{{\boldsymbol{\alpha}}}{{\boldsymbol{\alpha}}}
\newcommand{{\boldsymbol{\phi}}}{{\boldsymbol{\phi}}}
\newcommand{{\boldsymbol{\epsilon}}}{{\boldsymbol{\epsilon}}}
\newcommand{\mathbf{y}}{\mathbf{y}}
\newcommand{\mathbf{s}}{\mathbf{s}}
\newcommand{\mathbf{h}}{\mathbf{h}}
\newcommand{.\xspace}{.\xspace}
\def\emph{e.g}.\xspace{\emph{e.g}.\xspace}
\def\emph{E.g}\onedot{\emph{E.g}.\xspace}
\def\emph{i.e}.\xspace{\emph{i.e}.\xspace}
\def\emph{I.e}\onedot{\emph{I.e}.\xspace}
\def\emph{c.f}\onedot{\emph{c.f}.\xspace}
\def\emph{C.f}\onedot{\emph{C.f}.\xspace}
\def\emph{etc}\onedot{\emph{etc}.\xspace}
\defw.r.t.\xspace{w.r.t.\xspace}
\defd.o.f.\xspace{d.o.f.\xspace}
\defa.k.a\onedot{a.k.a.\xspace}
\defi.i.d.\xspace{i.i.d.\xspace}
\def\emph{et al}\onedot{\emph{et al}.\xspace}
\def{\textnormal{$\eta$}}{{\textnormal{$\eta$}}}
\def{\textnormal{b}}{{\textnormal{b}}}
\def{\textnormal{c}}{{\textnormal{c}}}
\def{\textnormal{d}}{{\textnormal{d}}}
\def{\textnormal{e}}{{\textnormal{e}}}
\def{\textnormal{f}}{{\textnormal{f}}}
\def{\textnormal{g}}{{\textnormal{g}}}
\def{\textnormal{h}}{{\textnormal{h}}}
\def{\textnormal{i}}{{\textnormal{i}}}
\def{\textnormal{j}}{{\textnormal{j}}}
\def{\textnormal{k}}{{\textnormal{k}}}
\def{\textnormal{l}}{{\textnormal{l}}}
\def{\textnormal{n}}{{\textnormal{n}}}
\def{\textnormal{o}}{{\textnormal{o}}}
\def{\textnormal{p}}{{\textnormal{p}}}
\def{\textnormal{q}}{{\textnormal{q}}}
\def{\textnormal{r}}{{\textnormal{r}}}
\def{\textnormal{s}}{{\textnormal{s}}}
\def{\textnormal{t}}{{\textnormal{t}}}
\def{\textnormal{u}}{{\textnormal{u}}}
\def{\textnormal{v}}{{\textnormal{v}}}
\def{\textnormal{w}}{{\textnormal{w}}}
\def{\textnormal{x}}{{\textnormal{x}}}
\def{\textnormal{y}}{{\textnormal{y}}}
\def{\textnormal{z}}{{\textnormal{z}}}
\def{\mathbf{\epsilon}}{{\mathbf{\epsilon}}}
\def{\mathbf{\theta}}{{\mathbf{\theta}}}
\def{\mathbf{a}}{{\mathbf{a}}}
\def{\mathbf{b}}{{\mathbf{b}}}
\def{\mathbf{c}}{{\mathbf{c}}}
\def{\mathbf{d}}{{\mathbf{d}}}
\def{\mathbf{e}}{{\mathbf{e}}}
\def{\mathbf{f}}{{\mathbf{f}}}
\def{\mathbf{g}}{{\mathbf{g}}}
\def{\mathbf{h}}{{\mathbf{h}}}
\def{\mathbf{j}}{{\mathbf{j}}}
\def{\mathbf{k}}{{\mathbf{k}}}
\def{\mathbf{l}}{{\mathbf{l}}}
\def{\mathbf{m}}{{\mathbf{m}}}
\def{\mathbf{n}}{{\mathbf{n}}}
\def{\mathbf{o}}{{\mathbf{o}}}
\def{\mathbf{p}}{{\mathbf{p}}}
\def{\mathbf{q}}{{\mathbf{q}}}
\def{\mathbf{r}}{{\mathbf{r}}}
\def{\mathbf{s}}{{\mathbf{s}}}
\def{\mathbf{t}}{{\mathbf{t}}}
\def{\mathbf{u}}{{\mathbf{u}}}
\def{\mathbf{v}}{{\mathbf{v}}}
\def{\mathbf{w}}{{\mathbf{w}}}
\def{\mathbf{x}}{{\mathbf{x}}}
\def{\mathbf{y}}{{\mathbf{y}}}
\def{\mathbf{z}}{{\mathbf{z}}}
\def{\bm{I}}{{\bm{I}}}
\def{\bm{0}}{{\bm{0}}}
\def{\bm{1}}{{\bm{1}}}
\def{\bm{\mu}}{{\bm{\mu}}}
\def{\bm{\theta}}{{\bm{\theta}}}
\def{\bm{a}}{{\bm{a}}}
\def{\bm{b}}{{\bm{b}}}
\def{\bm{c}}{{\bm{c}}}
\def{\bm{d}}{{\bm{d}}}
\def{\bm{e}}{{\bm{e}}}
\def{\bm{f}}{{\bm{f}}}
\def{\bm{g}}{{\bm{g}}}
\def{\bm{h}}{{\bm{h}}}
\def{\bm{i}}{{\bm{i}}}
\def{\bm{j}}{{\bm{j}}}
\def{\bm{k}}{{\bm{k}}}
\def{\bm{l}}{{\bm{l}}}
\def{\bm{m}}{{\bm{m}}}
\def{\bm{n}}{{\bm{n}}}
\def{\bm{o}}{{\bm{o}}}
\def{\bm{p}}{{\bm{p}}}
\def{\bm{q}}{{\bm{q}}}
\def{\bm{r}}{{\bm{r}}}
\def{\bm{s}}{{\bm{s}}}
\def{\bm{t}}{{\bm{t}}}
\def{\bm{u}}{{\bm{u}}}
\def{\bm{v}}{{\bm{v}}}
\def{\bm{w}}{{\bm{w}}}
\def{\bm{x}}{{\bm{x}}}
\def{\bm{y}}{{\bm{y}}}
\def{\bm{z}}{{\bm{z}}}
\section{Methods}
\label{sec:methods}
In this section, we discuss various methods for tackling
bandit problems, including our proposed new method.
\subsection{Algorithmic framework}
\begin{algorithm}
\caption{Online-Eval(Agent, Env, $T$, $\tau$)}
\label{algo:generic}
\footnotesize
$D_{\tau} = \text{Environment.Warmup}(\tau)$\;
${\bm{b}}_{\tau} = \text{Agent.InitBelief}(D_{\tau})$ \;
$R = 0$ // cumulative reward \;
\For{$t=(\tau+1):T$}{
$s_t = \text{Environment.GetState}(t)$ \;
$a_{t} = \text{Agent.ChooseAction}({\bm{b}}_{t-1}, s_t)$ \;
$y_t = \text{Environment.GetReward}(s_t,a_t)$ \;
$R += y_t$ \;
$D_t = (s_t, a_t, y_t)$ \;
${\bm{b}}_{t} = \text{Agent.UpdateBelief}({\bm{b}}_{t-1}, D_t)$\;
}
Return $R$
\end{algorithm}
In \cref{algo:generic},
we give the pseudocode for a way to estimate the expected
reward for a bandit policy (agent),
given access to an environment or simulator.
In the case of a Thompson sampling agent,
the action selection is usually implemented
by first sampling a parameter vector from the posterior (belief state),
$\tilde{\vtheta}_{t} \sim p(\vtheta_{t}|D_{1:t-1})$,
and then predicting the reward for each action and greedily picking the best,
$a_t = \operatornamewithlimits{argmax}_{a \in \mymathcal{A}}
\expect{y| s_t,a, \tilde{\vtheta}_t}$.
In the case of a UCB agent, the action is chosen by first computing
the posterior predicted mean and variance, and then picking the
action with the highest optimistic estimate of reward:
\begin{align}
p_{t|t-1}(y|s, a) &\mathrel{\ensurestackMath{\stackon[2pt]{=}{\scriptstyle\Delta}}}
\int p(y|s,a, \vtheta) p(\vtheta|D_{1:t-1}) d\vtheta \\
\mu_{a} &= \expectQ{y|s_t,a}{p_{t|t-1}} \\
\sigma_a &= \sqrt{\varQ{y|s_t,a}{p_{t|t-1}}} \\
a_t &= \operatornamewithlimits{argmax}_{a \in \mymathcal{A}} \mu_a + \alpha \sigma_a
\end{align}
where $\alpha>0$ is a tuning parameter that controls the degree of exploration.
In this paper, we focus on Thompson sampling,
but our methods can be extended to UCB in a straightforward way.
Since the prior on the parameters is usually uninformative,
the initial actions are effectively random.
Consequently we let the agent have a ``warmup period'',
in which we systematically try each action $N_w$ times, in a round-robin fashion,
for a total of $\tau=N_a \times N_w$ steps.
We then use this warmup data
to initialize the belief state to get an informative prior.
If we have a long warmup period,
then we will have a better initial estimate,
but we may incur high regret during this period,
since we are choosing actions ``blindly''.
Thus we can view $\tau$ as a hyperparameter of the algorithm.
The optimal value
will depend on the expected lifetime $T$ of the agent
(if $T$ is large, we can more easily amortize the cost of a long warmup period).
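The loop in \cref{algo:generic} can be sketched concretely as follows, for a Gaussian linear bandit with a Thompson-sampling agent. The environment, noise scale, and warmup schedule here are illustrative assumptions, not the exact benchmark setup used in our experiments.

```python
# Sketch of Online-Eval (Algorithm 1) with a Thompson-sampling agent
# on a toy Gaussian linear bandit. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_a, N_s = 3, 2                       # num. actions, state dimension
W_true = rng.normal(size=(N_a, N_s))  # true per-arm reward weights

def get_state(t):
    return rng.normal(size=N_s)

def get_reward(s, a):
    return W_true[a] @ s + 0.1 * rng.normal()

def online_eval(T, tau):
    mu = [np.zeros(N_s) for _ in range(N_a)]   # per-arm posterior means
    Sigma = [np.eye(N_s) for _ in range(N_a)]  # per-arm posterior covs
    sigma2 = 0.1 ** 2
    R = 0.0
    for t in range(T):
        s = get_state(t)
        if t < tau:
            a = t % N_a                        # round-robin warmup
        else:
            # Thompson sampling: draw weights, act greedily.
            w_tilde = [rng.multivariate_normal(mu[a], Sigma[a])
                       for a in range(N_a)]
            a = int(np.argmax([w @ s for w in w_tilde]))
        y = get_reward(s, a)
        R += y
        # Belief update: recursive Bayesian linear regression on arm a.
        S = Sigma[a]
        K = S @ s / (s @ S @ s + sigma2)
        mu[a] = mu[a] + K * (y - mu[a] @ s)
        Sigma[a] = S - np.outer(K, s @ S)
    return R

print(online_eval(T=500, tau=15))
```

Note that the warmup samples are also folded into the belief update, so the post-warmup prior is informative, as described above.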
\subsection{Modeling assumptions}
\begin{figure*}
\centering
\footnotesize
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{arxiv-figures/mlp-multi-head}
\caption{ }
\label{fig:mlp-multi-head}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{arxiv-figures/mlp-concat-input}
\caption{ }
\label{fig:mlp-concat}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{arxiv-figures/mlp-repeat-input}
\caption{ }
\label{fig:mlp-repeat}
\end{subfigure}
\caption{
Illustration of some common MLP architectures
used in bandit problems.
${\bm{s}}$ represents the state (context) vector,
${\bm{a}}$ represents the action vector,
${\bm{y}}$ represents the reward vector (for each possible action),
and $z_i^l$ is the $i$'th hidden node in layer $l$.
(a) The input is ${\bm{s}}$,
and there are $A$ output ``heads'', $y_1,\ldots,y_A$, one per action.
(b) The input is a concatenation of ${\bm{s}}$ and ${\bm{a}}$;
the output is the predicted reward for this $({\bm{s}},{\bm{a}})$ combination.
(c) The input is a block structured vector,
where we insert ${\bm{s}}$ into the $a$'th block (when evaluating action $a$),
and the remaining input blocks are zero.
}
\label{fig:MLP}
\end{figure*}
We will assume a Gaussian bandit setting,
in which the observation model for the reward
is a Gaussian
with a fixed or inferred observation variance:
$p(y_t|s_t,a_t) = \mathcal{N}(y_t|f(s_t,a_t;\vtheta_t),\sigma^2)$.
(We discuss extensions to the Bernoulli bandit case in \cref{sec:discuss}.)
Many current bandit algorithms assume the reward function
is a linear model applied to a set of learned features.
That is, it
has the form
$f(s,a;\vtheta) = {\bm{w}}_{a}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\phi}(s;\myvec{V})$,
where $\myvecsym{\phi}(s;\myvec{V}) \in \mathbb{R}^{N_z}$ is the hidden state computed by a feature
extractor, $\myvec{V} \in \mathbb{R}^{D_b}$
are the parameters of this feature extractor ``body'',
and $\myvec{W} \in \mathbb{R}^{N_z \times N_a}$ is the final linear layer,
with one output ``head'' per action.
For example, in \cref{fig:mlp-multi-head},
we show a 2 layer model
where $\myvecsym{\phi}(s;\myvec{V}) = \ensuremath{\mathrm{ReLU}}\xspace(\myvec{V}_2 \; \ensuremath{\mathrm{ReLU}}\xspace(\myvec{V}_1 s))$
is the feature vector,
and $\myvec{V}_1 \in \mathbb{R}^{N_h^{(1)} \times N_s}$
and $\myvec{V}_2 \in \mathbb{R}^{N_h^{(2)} \times N_h^{(1)}}$
are the first and second layer weights.
(We ignore the bias terms for simplicity.)
Thus $N_z=N_h^{(2)}$ is the size of the feature vector
that is passed to the final linear layer.
If the feature vector is fixed (i.e., is not learned),
so $\myvecsym{\phi}({\bm{s}})={\bm{s}}$,
we get a linear model
of the form
$f(s,a;{\bm{w}}) = {\bm{w}}_{a}^{\mkern-1.5mu\mathsf{T}} s$.
An alternative model structure is to
concatenate the state feature vector, $\myvecsym{\phi}(s_t)$,
with the action feature vector, $\myvecsym{\phi}(a_t)$,
to get an input of the form
${\bm{x}}_t=(\myvecsym{\phi}(s_t),\myvecsym{\phi}(a_t))$.
This is shown in \cref{fig:mlp-concat}.
This can be useful if we have many possible actions;
in this case, we can represent arms in terms of their features instead
of their indices, just as we represent states in terms of their features.
In this formulation, the linear output layer returns
the predicted reward for the specified $({\bm{s}},{\bm{a}})$ input combination,
and we require $N_a$ forwards passes to evaluate the reward vector
for each possible action.
Instead of concatenating the state and action vectors,
we can compute their outer product and then flatten the result,
to get ${\bm{x}}_t = \text{flatten}(\myvecsym{\phi}(s_t) \myvecsym{\phi}(a_t)^{\mkern-1.5mu\mathsf{T}})$.
This can model interaction effects,
as proposed in
\citep{Li10linucb}.
If $\myvecsym{\phi}(a_t)$ is a one-hot encoding,
we get the block-structured input
${\bm{x}}_t = ({\bm{0}}, \cdots, {\bm{0}}, \myvecsym{\phi}(s_t), {\bm{0}}, \cdots, {\bm{0}})$,
where we insert the state feature vector into the
block corresponding to the chosen action
(see \cref{fig:mlp-repeat}).
This approach is used by recent NTK methods.
If we assume $\myvecsym{\phi}({\bm{s}})={\bm{s}}$, so the state features are fixed,
and we assume that the MLP has no hidden layers,
then this model becomes equivalent to the linear model,
since ${\bm{w}}^{\mkern-1.5mu\mathsf{T}} {\bm{x}}_t = {\bm{w}}_a^{\mkern-1.5mu\mathsf{T}} s_t$.
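The input encodings of \cref{fig:MLP} can be sketched as follows, taking $\myvecsym{\phi}({\bm{s}})={\bm{s}}$ and a one-hot action encoding (illustrative assumptions); the last lines check the equivalence between the block-structured input and the per-arm linear model noted above.

```python
# Sketch of the concatenated and block-structured input encodings.
import numpy as np

N_s, N_a = 4, 3
s = np.arange(1.0, N_s + 1)           # state (context) vector
a = 1                                  # chosen action index
phi_a = np.eye(N_a)[a]                 # one-hot action features

# (b) concatenate state and action features: input in R^{N_s + N_a}
x_concat = np.concatenate([s, phi_a])

# (c) flattened outer product, here in action-major order so that the
# one-hot action inserts s into the a'th block of a R^{N_a * N_s} input.
x_block = np.outer(phi_a, s).flatten()
print(x_block)  # zeros except block a, which holds s

# A linear model on x_block recovers the per-arm linear model w_a^T s:
w = np.random.default_rng(0).normal(size=N_a * N_s)
W = w.reshape(N_a, N_s)                # per-arm weight rows w_a
assert np.allclose(w @ x_block, W[a] @ s)
```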
\subsection{Existing methods}
In this section, we briefly describe existing inference methods that we will compare to.
More details on all methods can be found in the Supplementary Information.
These methods differ in the kind of belief state
they use to represent uncertainty about the model parameters,
and in their mechanism for updating this belief state.
See \cref{tab:methods} for a summary.
\begin{table*}
\centering
\footnotesize
\begin{tabular}{llll }
Method & Belief state & Memory & Time \\ \hline
Linear & $({\bm{\mu}}_{t,a},\myvecsym{\Sigma}_{t,a})$
& $O(N_a N_z^2)$
& $O(T (C_f + N_a N_x^3))$
\\
Neural-Greedy & ${\bm{\theta}}_t=(\myvec{V}_t,\myvec{W}_t)$
& $O(D_b + N_a N_z + T N_x)$
&$O(T' T N_e C_f)$
\\
Neural-Linear & $(\myvec{V}_t, {\bm{\mu}}_{t,a},\myvecsym{\Sigma}_{t,a}, D_{1:t})$
& $O(D_b + N_a N_z^2 + T N_x)$
& $O(T' T N_e C_f + T N_a N_z^3)$
\\
LiM2 & $(\myvec{V}_t, {\bm{\mu}}_{t,a}, \myvecsym{\Sigma}_{t,a}, D_{t-M:t})$
& $O(D_b + N_a N_z^2 + M N_x)$
& $O(T' M N_e N_p(C_f + N_z^3) + T N_a N_z^3)$
\\
Neural-Thompson & $({\bm{\theta}}_t, \myvecsym{\Sigma}_{t}, D_{1:t})$
& $O(D + D^2 + T N_x)$
& $O(T (T N_e C_f + D^3))$
\\
EKF & $({\bm{\mu}}_t,\myvecsym{\Sigma}_t)$
& $O(D^2)$
& $O(T (C_f + D^3))$
\\
EKF-Subspace & $({\bm{\mu}}_t,\myvecsym{\Sigma}_t,{\bm{\theta}}_*,\myvec{A})$
& $O(\nparamsSub^2 + D \nparamsSub)$
& $O(T (C_f + \nparamsSub^3 + D \nparamsSub))$
\end{tabular}
\caption{Summary of the methods for Bayesian inference
considered in this paper.
Notation:
$T$: num steps taken by the agent in the environment;
$T_u$: update frequency for SGD;
$T' =T / T_u$: total num. times that we invoke SGD;
$N_e$: num. epochs over the training data for each run of SGD;
$C_f$: cost to evaluate the gradient of the network on one example;
$N_a$: num. actions;
$N_x$: size of input feature vector for state and action;
$N_z$: num. features in penultimate (feature) layer;
$D_b$: num. parameters in the body (feature extractor);
$D_h = N_a N_z$: num. parameters in the final linear layer;
$D=D_b + D_h$: total num. parameters;
$\nparamsSub$: size of subspace;
$M$: size of memory buffer;
}
\label{tab:methods}
\end{table*}
\paragraph{Linear method}
\label{sec:linear}
The most common approach to bandit problems is to assume a linear model for
the expected reward,
$f(s,a;\vtheta) = {\bm{w}}_{a}^{\mkern-1.5mu\mathsf{T}} s$.
If we use a Gaussian prior, and assume a Gaussian likelihood,
then we can
represent the belief state as a Gaussian,
${\bm{b}}_t = \{({\bm{\mu}}_{t,a}, \myvecsym{\Sigma}_{t,a}): a=1:N_a\}$.
This can be efficiently
updated online using the recursive least squares algorithm,
which is a special case of the Kalman filter
(see \cref{app:linearBandit} for details).
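A single recursive least squares update for one arm can be sketched as below; the prior and noise variance are illustrative assumptions. The final check confirms that the sequential updates reproduce the batch Bayesian linear regression posterior, as expected for a linear Gaussian model.

```python
# One-step recursive least squares (Kalman) update for a single arm,
# verified against batch Bayesian linear regression.
import numpy as np

def rls_update(mu, Sigma, s, y, sigma2):
    """Posterior update for w ~ N(mu, Sigma) given y = w @ s + noise."""
    S = s @ Sigma @ s + sigma2             # innovation variance
    K = Sigma @ s / S                      # Kalman gain
    mu_new = mu + K * (y - mu @ s)
    Sigma_new = Sigma - np.outer(K, s @ Sigma)
    return mu_new, Sigma_new

rng = np.random.default_rng(1)
d, n, sigma2 = 3, 20, 0.25
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
Y = X @ w_true + np.sqrt(sigma2) * rng.normal(size=n)

# Sequential updates, starting from the prior N(0, I)...
mu, Sigma = np.zeros(d), np.eye(d)
for s, y in zip(X, Y):
    mu, Sigma = rls_update(mu, Sigma, s, y, sigma2)

# ...agree with the batch posterior N(mu_b, Sigma_b).
Sigma_b = np.linalg.inv(np.eye(d) + X.T @ X / sigma2)
mu_b = Sigma_b @ X.T @ Y / sigma2
assert np.allclose(mu, mu_b) and np.allclose(Sigma, Sigma_b)
```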
\paragraph{Neural linear method}
\label{sec:neuralLinear}
In \citep{Riquelme2018},
they proposed a method called
``neural linear'', which they showed outperforms many more sophisticated approaches,
such as variational inference,
on their bandit showdown benchmark.
It assumes that the reward model has the form
$f(s,a;\vtheta) = {\bm{w}}_{a}^{\mkern-1.5mu\mathsf{T}} \myvecsym{\phi}(s;\myvec{V})$,
where $\myvecsym{\phi}(s;\myvec{V}) \in \mathbb{R}^{N_z}$ is the hidden state computed by a feature
extractor
(see \cref{fig:mlp-multi-head} for an illustration).
The neural linear method computes
a point estimate of $\myvec{V}$ by using SGD,
and uses Bayesian linear regression to update the posterior
over each ${\bm{w}}_a$, and optionally $\sigma^2$.
If we just update $\myvec{V}$ at each step using $D_t$,
we run the risk of
``catastrophic forgetting'' (see \cref{sec:related}).
The standard solution to this
is to store all the past data,
and to re-run (minibatch) SGD on all the data at each step.
Thus the belief state is represented as
${\bm{b}}_t = ({\bm{\theta}}_t, D_{1:t})$.
See \cref{app:neuralLinear} for details.
The time cost is $O(T^2 N_e C_f)$,
where $N_e$ is the number of epochs (passes over the data) at each step,
and $C_f$ is the cost of a single forwards-backwards pass
through the network (needed to compute the objective and its gradient).\footnote{
The reason for the quadratic cost is that each epoch
passes over $O(T)$ examples, even if we use minibatching.
}
Since it is typically too expensive to run SGD on each step,
we can just perform updating every $T_u$ steps.
The total time then becomes
$O(T' T N_e C_f)$,
where $T'=T/T_u$ is the total number of times we invoke SGD.
The memory cost is
$O(D + T N_x)$,
where $N_x$ is the size of
each input example, ${\bm{x}}_t=(s_t, a_t)$.
If we limit the memory to the last $M$ observations
(also called a ``replay buffer''),
the memory reduces to $O(D + M N_x)$,
and the time reduces to
$O(T' M N_e C_f)$.
However, naively limiting the memory in this way can hurt (statistical) performance,
as we will see.
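The essence of the neural linear update can be sketched as follows. Here the feature body is a fixed random ReLU network, an illustrative stand-in for the periodically SGD-refitted body described above; only the per-arm Bayesian linear regression on the head is shown.

```python
# Minimal neural-linear sketch: a fixed MLP body phi(s; V) provides
# features; Bayesian linear regression is run per arm on top.
import numpy as np

rng = np.random.default_rng(2)
N_s, N_h, N_z, N_a = 5, 16, 8, 3
V1 = rng.normal(size=(N_h, N_s))
V2 = rng.normal(size=(N_z, N_h))

def phi(s):                        # 2-layer ReLU feature extractor
    return np.maximum(0, V2 @ np.maximum(0, V1 @ s))

sigma2 = 0.1
mu = np.zeros((N_a, N_z))                      # per-arm head means
Sigma = np.stack([np.eye(N_z)] * N_a)          # per-arm head covariances

def update(a, s, y):               # BLR update on the head of arm a
    z = phi(s)
    S = z @ Sigma[a] @ z + sigma2
    K = Sigma[a] @ z / S
    mu[a] += K * (y - mu[a] @ z)
    Sigma[a] -= np.outer(K, z @ Sigma[a])

def thompson_action(s):            # sample each head, act greedily
    z = phi(s)
    draws = [rng.multivariate_normal(mu[a], Sigma[a]) @ z
             for a in range(N_a)]
    return int(np.argmax(draws))

s = rng.normal(size=N_s)
a = thompson_action(s)
update(a, s, y=1.0)
print(a)
```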
\paragraph{LiM2}
\label{sec:LIM}
In \citep{Nabati2021},
they propose a method called ``LiM2'', which stands for
``Limited Memory Neural-Linear with Likelihood Matching''.
This is an extension of the neural
linear method
designed to solve the ``catastrophic forgetting''
that occurs when using a fixed memory buffer.
The basic idea is to approximate the covariance
of the old features in the memory buffer before
replacing them with the new features,
computed after updating the network parameters.
This old covariance can be used as a prior
during the Bayesian linear regression step.
Computing the updated prior covariance requires
solving a semi-definite program (SDP) after each SGD step.
In practice, the SDP can be solved using
an inner loop of projected gradient descent (PGD),
which involves solving an eigendecomposition at each step.
This takes $O(T' M N_e N_p(C_f + N_z^3))$ time,
where $N_p$ is the number of PGD steps per SGD step.
See \cref{app:LIM2} for details.
\paragraph{NTK methods}
\label{sec:neuralTS}
In \citep{neuralTS}, they propose a method called
``Neural Thompson Sampling'',
and in \citep{neuralUCB},
they propose a related method called
``neural UCB''. Both methods are based on approximating
the MLP with a neural tangent kernel or NTK \citep{NTK}.
Specifically, the feature vector at time $t$ is defined
to be
$\myvecsym{\phi}_t(s,a) = (1/\sqrt{N_h}) \nabla_{{\bm{\theta}}} f^{\text{dnn}}(s,a)|_{{\bm{\theta}}_{t-1}}$,
where $N_h$ is the width of each hidden layer,
and the gradient is evaluated at the most recent parameter estimate.
They use a linear Gaussian model on top of these features.
The network parameters are re-estimated at each step based on all the past data,
and then the method
effectively performs Bayesian linear regression on the output layer
(see \cref{app:neuralTS} for details).
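The NTK feature map $\myvecsym{\phi}_t(s,a)$ can be sketched as follows on a tiny MLP; the gradient is taken by finite differences for brevity (the cited methods use exact gradients), and the architecture is an illustrative assumption.

```python
# Sketch of the NTK feature map phi(s, a) = (1/sqrt(N_h)) * df/dtheta,
# computed by central finite differences on a tiny one-hidden-layer MLP.
import numpy as np

N_x, N_h = 3, 4
rng = np.random.default_rng(3)
theta = rng.normal(size=N_h * N_x + N_h)   # [W (N_h x N_x), w_out (N_h)]

def f(x, th):
    W = th[:N_h * N_x].reshape(N_h, N_x)
    w = th[N_h * N_x:]
    return w @ np.tanh(W @ x)

def ntk_features(x, th, eps=1e-6):
    g = np.zeros_like(th)
    for i in range(th.size):               # numeric gradient w.r.t. theta
        tp = th.copy(); tp[i] += eps
        tm = th.copy(); tm[i] -= eps
        g[i] = (f(x, tp) - f(x, tm)) / (2 * eps)
    return g / np.sqrt(N_h)

x = rng.normal(size=N_x)
phi = ntk_features(x, theta)
# Sanity check: for the output layer, df/dw_out is the hidden activation.
W = theta[:N_h * N_x].reshape(N_h, N_x)
assert np.allclose(phi[N_h * N_x:] * np.sqrt(N_h), np.tanh(W @ x), atol=1e-5)
```

A linear Gaussian model on these features can then be updated exactly as in the linear method.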
\subsection{Our method: Subspace EKF}
\label{sec:subspace}
\label{sec:EKF}
A natural alternative to just modeling uncertainty
in the final layer weights is to ``be Bayesian'' about {\em all} the network parameters.
Since our model is nonlinear,
we must use approximate Bayesian inference.
In this paper we choose to use
the Extended Kalman Filter (EKF),
which is a popular
deterministic inference scheme for nonlinear state-space models
based on linearizing the model
(see \cref{app:EKF} for details).
It was first applied to inferring the parameters of an MLP in
\citep{Singhal1988},
although it has not been applied to bandit problems, as far as we know.
In more detail, we define the latent variable
to be the unknown parameters ${\bm{\theta}}_t$.
The (non-stationary) observation model is given by
$p_t(y_t|{\bm{\theta}}_t) =
\mathcal{N}(y_t|f(s_t,a_t;{\bm{\theta}}_t),\sigma^2)$,
where $s_t$ and $a_t$ are inputs to the model,
and the dynamics model for the parameters is given by
$p({\bm{\theta}}_t|{\bm{\theta}}_{t-1}) = \mathcal{N}({\bm{\theta}}_t|{\bm{\theta}}_{t-1},\tau^2 \myvec{I})$.
We can set $\tau^2 = 0$ to encode the assumption that the
parameters of the reward function are constant over time.
However in practice we use a small non-zero value for $\tau$, for numerical stability.
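A single EKF predict-update step for this parameter-estimation model can be sketched as follows; the Jacobian is taken numerically for brevity, and the toy check exploits the fact that for a linear reward function the EKF reduces to recursive least squares. Dimensions and noise values are illustrative.

```python
# One EKF step for the parameter state-space model: predict with
# Q = tau^2 I, then linearize f around the current mean and update.
import numpy as np

def ekf_step(mu, Sigma, f, y, sigma2, tau2, eps=1e-6):
    Sigma = Sigma + tau2 * np.eye(mu.size)      # predict step
    H = np.zeros(mu.size)                       # Jacobian df/dtheta at mu
    for i in range(mu.size):
        dp = mu.copy(); dp[i] += eps
        dm = mu.copy(); dm[i] -= eps
        H[i] = (f(dp) - f(dm)) / (2 * eps)
    S = H @ Sigma @ H + sigma2                  # innovation variance
    K = Sigma @ H / S                           # Kalman gain
    mu_new = mu + K * (y - f(mu))
    Sigma_new = Sigma - np.outer(K, H @ Sigma)
    return mu_new, Sigma_new

# Toy check: for a linear f the EKF step matches the RLS update.
rng = np.random.default_rng(4)
s = rng.normal(size=3)
f = lambda th: th @ s
mu, Sigma = np.zeros(3), np.eye(3)
mu, Sigma = ekf_step(mu, Sigma, f, y=1.0, sigma2=0.1, tau2=0.0)
print(mu)
```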
The belief state of an EKF has the form
${\bm{b}}_t = ({\bm{\mu}}_t, \myvecsym{\Sigma}_t)$.
This takes $O(D^2)$ space
and $O(T D^3)$ time to compute.
Modern neural networks often have millions of parameters,
which makes direct application of the EKF intractable.
We can reduce the memory from $O(D^2)$ to $O(D)$ and the time
from $O(T D^3)$
to $O(T D^2)$ by using a diagonal approximation to $\myvecsym{\Sigma}_t$.
However, this ignores correlations between the parameters,
which is important for good performance (as we show empirically in \cref{sec:results}).
We can improve the approximation by using a block structured approximation,
with one block per layer of the MLP, but this still ignores correlations
between layers.
In this paper, we explore a different approach to scaling the EKF
to large neural networks.
Our key insight is to exploit the fact that
the DNN parameters
are not independent ``degrees of freedom''.
Indeed,
\citep{Li2018Intrinsic}
showed empirically that
we can replace the original neural network weights
${\bm{\theta}} \in \mathbb{R}^{D}$ with
a lower dimensional version, ${\bm{z}} \in \mathbb{R}^{\nparamsSub}$,
by defining the affine mapping
${\bm{\theta}}({\bm{z}}) = \myvec{A} {\bm{z}} + {\bm{\theta}}_*$,
and then optimizing the low-dimensional
parameters ${\bm{z}}$.
Here $\myvec{A} \in \mathbb{R}^{D \times \nparamsSub}$ is a fixed but random Gaussian matrix
with columns normalized to 1,
and ${\bm{\theta}}_* \in \mathbb{R}^{D}$ is a random initial guess of the parameters
(which we call an ``offset'').
In \citep{Li2018Intrinsic},
they show that optimizing in the ${\bm{z}}$ subspace gives good
results on standard classification and RL benchmarks,
even when $\nparamsSub \ll D$,
provided that $\nparamsSub > \nparamsSub_{\min}$,
where $\nparamsSub_{\min}$ is a critical threshold.
In \citep{Larsen2021degrees}, they provide a theoretical
explanation for why such a threshold exists,
based on geometric properties of the high dimensional loss landscape.
Instead of using a random offset ${\bm{\theta}}_*$,
we can optimize it by performing SGD in the original
${\bm{\theta}}$ space during a warmup period.
Similarly, instead of using a random basis matrix $\myvec{A}$,
we can optimize it
by applying SVD
to the iterates of SGD during the warmup period,
as proposed in \citep{Izmailov2019,Larsen2021degrees}.
(If we wish, we can just keep a subset of the iterates, since consecutive
samples are correlated.)
These two changes reduce the dimensionality of the subspace $\nparamsSub$
that
we need to use in order to get good performance.
(We can use cross-validation on the data from the warmup phase
to find a good value for $\nparamsSub$.)
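The subspace construction described above can be sketched as follows; a random walk stands in for the warmup SGD trajectory (an illustrative assumption), the offset is the final iterate, and the basis spans the top-$\nparamsSub$ directions of the centered iterates.

```python
# Sketch of building the subspace (A, theta_*) from warmup iterates
# via SVD, following the idea of optimizing the offset and basis.
import numpy as np

rng = np.random.default_rng(5)
D, n_iters, d = 50, 30, 5
# Stand-in for the SGD trajectory during warmup (a random walk here).
iterates = np.cumsum(rng.normal(size=(n_iters, D)), axis=0)

theta_star = iterates[-1]                      # offset: last SGD iterate
centered = iterates - iterates.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
A = Vt[:d].T                                   # D x d orthonormal basis

assert A.shape == (D, d)
assert np.allclose(A.T @ A, np.eye(d))         # columns are orthonormal
theta = A @ np.zeros(d) + theta_star           # z = 0 recovers theta_*
assert np.allclose(theta, theta_star)
```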
\begin{figure}
\centering
\includegraphics[height=2in]{arxiv-figures/subspace-bandit-ssm}
\caption{
Graphical model for the subspace bandit.
}
\label{fig:SSM}
\end{figure}
Once we have computed the subspace, we can perform
Bayesian inference for the embedded parameters ${\bm{z}} \in \mathbb{R}^{\nparamsSub}$
instead of the original parameters ${\bm{\theta}} \in \mathbb{R}^D$.
We do this by applying the EKF to the
a state-space model with a (non-stationary) observation model
of the form
$p_t(y_t|{\bm{z}}_t) =
\mathcal{N}(y_t|f(s_t,a_t;\myvec{A} {\bm{z}}_t + {\bm{\theta}}_*),\sigma^2)$,
and a deterministic transition model of the form
$p({\bm{z}}_t|{\bm{z}}_{t-1}) = \mathcal{N}({\bm{z}}_t|{\bm{z}}_{t-1}, \tau^2 \myvec{I})$.
This is illustrated as a graphical model in
\cref{fig:SSM}.
\eat{
Our overall policy is a two-stage method,
in which we first choose actions uniformly for a few steps (as is standard),
and then we switch to Thompson sampling,
combined with our subspace EKF method.
}
The overall algorithm is summarized in \cref{algo:subspaceEKF}.
(If we use a random subspace, we can skip the warmup phase,
but results are worse, as we show in \cref{sec:results}.)
The algorithm takes $O(\nparamsSub^3)$ time per step.
Empirically we find that we can reduce
models with $D \sim 10^6$ down to
$\nparamsSub \sim 10^2$ while getting the same (or sometimes better)
performance, as we show in \cref{sec:results}.
We can further reduce the time to $O(\nparamsSub)$
by using a diagonal covariance matrix, with little change to the performance,
as we show in \cref{sec:results}.
The time cost of the warmup phase is dominated by SVD.
If we have $\nwarmup$ samples,
the time complexity for exact SVD is $O(\min(\nwarmup^2 D, D^2 \nwarmup))$.
However, if
we use randomized SVD \citep{Halko2011}
this reduces the time to
$O(\nwarmup D \log \nparamsSub + (\nwarmup+D) \nparamsSub^2)$.
The memory cost is
$O(\nparamsSub^2 + D \nparamsSub)$,
since we need to store the belief state,
${\bm{b}}_t = ({\bm{\mu}}_t, \myvecsym{\Sigma}_t)$,
as well as the offset ${\bm{\theta}}_*$
and the $D \times \nparamsSub$ basis matrix $\myvec{A}$.
We have successfully scaled this to models with $\sim 1M$ parameters,
but going beyond this may require the use of a sparse random
orthogonal matrix to represent $\myvec{A}$ \citep{Choromanski2017}.
We leave this to future work.
Note that our method can be applied to any kind of DNN,
not just MLPs. The low dimensional vector ${\bm{z}}$
depends on all of the parameters in the model.
By contrast, the neural linear and LiM2
methods assume that the model has a linear final layer,
and they only capture parameter uncertainty in this final layer.
Thus these methods cannot be combined with the subspace trick.
\eat{
We find that we can capture 99\% of the variance
using $\nparamsSub \sim 5$ dimensions, but we get better end-to-end performance
using a value of $\nparamsSub \sim 200$.
(The estimate of the dimensionality
based on the warmup data is so low
because the initial iterates only explore a small portion
of parameter space, near to the random initialization.)
}
\begin{algorithm}
\caption{Neural Subspace Bandits}
\label{algo:subspaceEKF}
\footnotesize
$D_{\tau} = \text{Environment.Warmup}(\tau)$\;
${\bm{\theta}}_{1:\tau} = \text{SGD}(D_{\tau})$ \;
${\bm{\theta}}_* = {\bm{\theta}}_{\tau}$ \;
$\myvec{A} = \text{SVD}({\bm{\theta}}_{1:\tau})$\;
$({\bm{\mu}}_{\tau}, \myvecsym{\Sigma}_{\tau}) = \text{EKF}({\bm{\mu}}_{0},\myvecsym{\Sigma}_{0}, D_{1:\tau})$\;
\For{$t=(\tau+1):T$}{
$s_t = \text{Environment.GetState}(t)$ \;
$\tilde{{\bm{z}}}_t \sim \mathcal{N}({\bm{\mu}}_t, \myvecsym{\Sigma}_t)$ \\
$a_t = \operatornamewithlimits{argmax}_a f(s_t,a;\myvec{A} \tilde{{\bm{z}}}_t + {\bm{\theta}}_*)$ \;
$y_t = \text{Environment.GetReward}(s_t,a_t)$ \;
$D_t = (s_t, a_t, y_t)$ \;
$({\bm{\mu}}_{t}, \myvecsym{\Sigma}_{t}) = \text{EKF}({\bm{\mu}}_{t-1},\myvecsym{\Sigma}_{t-1}, D_t)$\;
}
\end{algorithm}
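The action-selection step of \cref{algo:subspaceEKF} can be sketched as follows: sample ${\bm{z}}$ from the subspace belief, map it to full parameters via $\myvec{A} {\bm{z}} + {\bm{\theta}}_*$, and act greedily. The reward network and all dimensions are illustrative stand-ins.

```python
# One subspace Thompson-sampling step: sample z, lift to theta, act.
import numpy as np

rng = np.random.default_rng(6)
D, d, N_a, N_s = 40, 4, 3, 5
A = np.linalg.qr(rng.normal(size=(D, d)))[0]   # D x d orthonormal basis
theta_star = rng.normal(size=D)                # offset from warmup
mu_z, Sigma_z = np.zeros(d), 0.1 * np.eye(d)   # subspace belief state

def f(s, a, theta):                            # toy reward network
    W = theta[:N_a * N_s].reshape(N_a, N_s)
    return W[a] @ s

s = rng.normal(size=N_s)
z = rng.multivariate_normal(mu_z, Sigma_z)     # sample from the belief
theta = A @ z + theta_star                     # lift to full parameters
a = int(np.argmax([f(s, a, theta) for a in range(N_a)]))
print(a)
```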
\section{Related work}
\label{sec:related}
In this section, we briefly review related work.
We divide the prior work into several groups:
Bayesian neural networks,
neural net subspaces,
and neural contextual bandits.
Most work on Bayesian inference for neural networks
has focused on the offline (batch) setting.
Common approaches include
the Laplace approximation
\citep{MacKay1992, MacKay95,Daxberger2021laplace};
Hamiltonian MCMC
\citep{neal1995bayesian,Izmailov2021icml};
variational inference,
such as the ``Bayes by backprop'' method of
\citep{Blundell2015},
and the
``variational online Gauss-Newton''
method of \citep{Osawa2019nips};
expectation propagation,
such as the
``probabilistic backpropagation'' method of
\citep{PBP};
and many others.
(For more details and references,
see e.g., \citep{Polson2017,Wilson2020BDL,Wilson2020prob,Khan2020tutorial}.)
There are several techniques for online or sequential Bayesian inference
for neural networks.
\citep{Ritter2018online} propose an online version of the Laplace
approximation, \citep{Nguyen2018vcl}
propose an online version of variational inference,
and \citep{Ghosh2016} propose
to use assumed density filtering (an online version of expectation
propagation).
However, in \citep{Riquelme2018}, they showed that these methods
do not work very well for bandit problems.
In this paper, we build on older work, specifically
\citep{Singhal1988,deFreitas00ekf},
which used the extended Kalman filter (EKF)
to perform approximate online inference for DNNs.
We combine this with subspace methods to scale to high dimensions,
as we discuss below.
There are several techniques for scaling Bayesian inference to neural
networks with many parameters.
A simple approach is to use variational inference with a diagonal
Gaussian posterior, but this ignores important correlations between
the weights.
It is also possible to use low-rank factorizations of the posterior
covariance matrix.
In \citep{Daxberger2021subnetwork},
they propose to use a MAP estimate for some parameters
and a Laplace approximation for others.
However, their computation of the MAP estimate relies on standard offline SGD (stochastic gradient descent),
whereas we perform online Bayesian inference without using SGD.
\eat{
they propose to partition the weights ${\bm{\theta}} \in \mathbb{R}^D$
into a high dimensional set, ${\bm{\theta}}_1 \in \mathbb{R}^{D-d}$,
and a low dimensional set, ${\bm{\theta}}_2 \in \mathbb{R}^{d}$.
To find this partition, they compute a diagonal Laplace approximation
to $p({\bm{\theta}}|D)$, and then select the $d$ elements of ${\bm{\theta}}$
with highest posterior marginal variance.
They then compute a full covariance Laplace approximation
over ${\bm{\theta}}_2$, treating ${\bm{\theta}}_1$ as point estimate.
This gives the following posterior approximation:
$p({\bm{\theta}}|D) \approx \delta({\bm{\theta}}_1 - \hat{{\bm{\theta}}}_1)
\mathcal{N}({\bm{\theta}}_2 | \hat{{\bm{\theta}}}_2, \myvecsym{\Sigma}_2)$,
where $\hat{{\bm{\theta}}}=(\hat{{\bm{\theta}}}_1,\hat{{\bm{\theta}}}_2) \in \mathbb{R}^D$ is the usual MAP estimate,
and $\myvecsym{\Sigma}_2$ is the inverse of the Hessian for ${\bm{\theta}}_2$ at the posterior mode.
}
In \citep{Izmailov2019}, they compute
a linear subspace of dimension $d$
by applying PCA to the last $L$ iterates
of stochastic weight averaging
\citep{Izmailov2018};
they then perform slice sampling in this low-dimensional subspace.
In this paper, we also leverage subspace inference,
but we do so in the online setting, which is
necessary when solving bandit problems.
The literature on contextual bandits is vast
(see e.g., \citep{Lattimore2019,Slivkins2019}).
Here we just discuss recent work which utilizes DNNs
to model the reward function,
combined with Thompson sampling as the policy for choosing the action.
In \citep{Riquelme2018}, they evaluated
many different approximate inference methods for Bayesian neural networks
on a set of benchmark contextual bandit problems;
they called this the ``Deep Bayesian Bandits Showdown''.
The best performing method
in their showdown
is what they call the ``neural linear'' method,
which we discuss in \cref{sec:neuralLinear}.
Unfortunately the neural linear method is not a fully online algorithm,
since it needs to keep all the past data
to avoid the problem of ``catastrophic forgetting''
\citep{Robins1995,French99,Kirkpatrick2017}.
This means that the memory complexity is $O(T)$,
and the computational complexity can be as large as $O(T^2)$.
This makes the method impractical for applications
where the data is high dimensional,
and/or the agent is running for a long time.
In \citep{Nabati2021}, they develop an online version of the neural linear
method, which they call ``LiM2'', short for
``Limited Memory Neural-Linear with Likelihood Matching''.
We discuss this in more detail in \cref{sec:LIM}.
More recently, several methods based on neural tangent kernels (NTK)
have been developed \citep{NTK},
including
neural Thompson sampling \citep{neuralTS}
and neural UCB \citep{neuralUCB}.
We discuss these methods in more detail in \cref{sec:neuralTS}.
Although Neural-TS and Neural-UCB in principle
achieve a regret of $O(\sqrt{T})$, in practice there are some
disadvantages.
First, these algorithms
perform multiple gradient steps, based on all the past data,
at each step of the algorithm.
Thus these are full memory algorithms that take $O(T)$ space
and $O(T^2)$ time.
Second,
it can be shown \citep{Allen-Zhu2019,Ghorbani2020}
that NTKs are less data efficient learners than (finite width)
hierarchical DNNs, both in theory and in practice.
Indeed we will show that our approach, which uses
constant memory and finite-width DNNs,
performs significantly better in practice.
\eat{
\begin{itemize}
\item \citep{Daxberger2021subnetwork}
``Bayesian Deep Learning via Subnetwork Inference''.
\item \citep{Izmailov2019}
``Subspace Inference for Bayesian Deep Learning''. PCA on iterates
after SGD converges.
\item \citep{Nguyen2018vcl}
``Variational Continual Learning''.
\item \citep{Ritter2018online}
``Online Structured Laplace Approximations for Overcoming
Catastrophic Forgetting''.
\item \citep{Daxberger2021laplace}.
``Laplace Redux--Effortless Bayesian Deep Learning''.
\item \citep{Ghosh2016} ADF for DNNs.
\item \citep{Urteaga2017}, they also use mixture models
to model the reward function for Thompson sampling,
but they do not use neural networks (so they cannot handle
high dimensional input contexts),
and they use variational inference rather than EKF.
\item \citep{Sezener2020}.
"Online learning in contextual bandits using Gated Linear
Networks".
\end{itemize}
}
\section{Results}
\label{sec:results}
\eat{
https://github.com/sauxpa/neural_exploration
https://github.com/fidelity/mabwiser
https://raw.githubusercontent.com/fidelity/mabwiser/master/examples/lints_reproducibility/movielens_responses.csv
https://www.kaggle.com/prajitdatta/movielens-100k-dataset
100,000 ratings (1-5) from 943 users on 1682 movies.
* Each user has rated at least 20 movies.
* Simple demographic info for the users (age, gender, occupation, zip)
}
In this section, we present empirical results
in which we evaluate the performance (reward) and speed (time)
of our method compared to other methods on various bandit problems.
We also study the effects of various hyper-parameters of our algorithm,
such as how we choose the subspace.
\subsection{Tabular datasets}
\label{sec:tabular}
To compare ourselves to prior works, we consider a subset
of the datasets used in the ``Deep Bayesian Bandits Showdown''
\citep{Riquelme2018}.
These are small tabular datasets, where the goal
is to predict the class label given the features.\footnote{
The datasets are from the UCI ML repository
\protect\url{https://archive.ics.uci.edu/ml/datasets}.
Statlog (shuttle) has 9 features, 7 classes.
Covertype has 54 features, 7 classes.
Adult has 89 features, 2 classes.
We use $T=5000$ samples for all datasets.
} %
We turn this into a bandit problem by defining
the actions to be the class labels,
and the reward is 1 if the correct label is predicted,
and is 0 otherwise.
Thus the cumulative reward is the number of correct classifications,
and the regret is the number of incorrect classifications.
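As a concrete sketch, this classification-to-bandit conversion can be written as a short loop (the `policy` callable and the random sampling of contexts with replacement are illustrative assumptions, not the exact evaluation protocol):

```python
import numpy as np

def run_classification_bandit(X, y, policy, T=5000, seed=0):
    """Turn a classification dataset into a contextual bandit problem:
    contexts are feature vectors, actions are class labels, and the
    reward is 1 iff the chosen label is correct (bandit feedback only)."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=T)  # sample T contexts with replacement
    total_reward = 0
    for i in idx:
        action = policy(X[i])              # the agent picks a class label
        total_reward += int(action == y[i])
    return total_reward                    # regret = T - total_reward
```

The cumulative reward is then the number of correct classifications over the $T$ steps, matching the regret definition above.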
\begin{figure*}
\centering
\includegraphics[height=2in]{arxiv-figures/tabular_reward}
\caption{
Reward for various methods on 3 tabular datasets.
The maximum possible reward for each dataset is 5000.
}
\label{fig:tabular-reward}
\end{figure*}
Following prior work,
we use the multi-headed MLP in \cref{fig:mlp-multi-head},
with one hidden layer with $N_h=50$ units
and \ensuremath{\mathrm{ReLU}}\xspace activations.
(The Neural-TS results are based on the multi-input model
in \cref{fig:mlp-repeat}.)
We use $N_w=20$ ``pulls'' per arm during the warmup phase
and run for $T=5000$ steps.
We run 10 random trials and report the mean reward,
together with the standard deviation.
We compare the following 11 methods:
EKF in a learned (SVD) subspace (with full or diagonal covariance),
EKF in a random subspace (with full or diagonal covariance),
EKF in the original parameter space (with full or diagonal covariance),
Linear,
Neural-Linear (with unlimited or limited memory),
LiM2,
and Neural-TS.
For the 6 EKF methods, we use our own code.\footnote{
Our code is available (in JAX) at
\url{https://github.com/probml/bandits}.
}
For LiM2 and Neural-TS, we use the original code from the authors.\footnote{
LiM2 is available (in TF1) at
\url{
https://github.com/ofirnabati/Neural-Linear-Bandits-with-Likelihood-Matching}.
Neural-TS is available (in PyTorch)
at
\url{https://github.com/ZeroWeight/NeuralTS}.
} %
For Linear and Neural-Linear methods, we reproduced the original code
from the authors in our own codebase.
All the hyperparameters are the same as in
the original papers/code
(namely \citep{Nabati2021} for Linear, Neural-Linear, and LiM2,
and \citep{neuralTS} for Neural-TS).
We show the average reward for each method on each dataset
in \cref{fig:tabular-reward}.
(We use $\nparamsSub=200$ for all experiments,
which we found to work well.)
On the Adult dataset, all methods have similar performance,
showing that this is an easy problem.
On the Covertype dataset, we find that the best method
is EKF in a learned (SVD) subspace with full covariance (light blue bar).
This is the only method to beat the linear baseline (purple).
On the Shuttle (Statlog) dataset,
we see that all the EKF subspace variants work well,
and match the accuracy of LiM2 while being much faster.
(We discuss speed in \cref{sec:time}.)
We see that EKF in the original parameter space performs worse,
especially when we use a diagonal approximation (red).
We also see that the limited memory version of neural linear (light orange)
is worse than the unlimited memory version (dark orange).
However, we also see that the differences between most
methods are rather small, and often within the error bars.
We also noticed this with other examples from the Bandit Showdown benchmark
(results not shown).
We therefore believe this benchmark is too simple to be a reliable way of
measuring performance differences of neural bandit
algorithms (despite its popularity in the literature).
In the sections below, we consider more challenging benchmarks,
where the relative performance differences are clearer.
\subsection{Recommender systems}
\label{sec:recsys}
\label{sec:movie}
\label{sec:movies}
\begin{figure}
\centering
\includegraphics[height=2in]{arxiv-figures/movielens_reward}
\caption{
Reward for various methods on the Movielens dataset.
}
\label{fig:movielens-reward}
\end{figure}
One of the main applications of bandits is to recommender systems
(see e.g., \citep{Li10linucb,Guo2020bandits}).
Unfortunately, evaluating bandit policies in such systems
requires running a live experiment,
unless we have a simulator or we use
off-policy evaluation methods such as those
in \citep{Li11}.
In this section, we build a simple simulator
by applying SVD to the MovieLens-100k dataset,
following the example in the TF-Agents library.\footnote{
See \url{https://blog.tensorflow.org/2021/07/using-tensorflow-agents-bandits-library-for-recommendations.html}.
}
In more detail,
we start with the MovieLens-100k dataset,
which has 100,000 ratings on a scale of 1--5 from 943 users on 1682 movies.
This defines a sparse $943 \times 1682$ ratings matrix,
where 0s correspond to missing entries.
We extract a subset of this matrix corresponding to the first 20 movies
to get a $943 \times 20$ matrix $\myvec{X}$.
We then compute the SVD of this matrix,
$\myvec{X} = \myvec{U} \myvec{S} \myvec{V}^{\mkern-1.5mu\mathsf{T}}$,
and compute a dense low-rank approximation to it
$\hat{\myvec{X}} = \myvec{U}_K \myvec{S}_K \myvec{V}_K^{\mkern-1.5mu\mathsf{T}}$.
(This is a standard approach to
matrix imputation, see e.g., \citep{Srebro03,Bell2007}).
We treat each user $i$ as a context,
represented by ${\bm{u}}_i$,
and treat each movie $j$ as an action;
the reward for taking action $j$
in context $i$ is $X_{ij} \in \mathbb{R}$.
We follow the TF-Agents example
and use $K=20$, so the context has 20 features,
and there are also 20 actions (movies).
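The simulator construction can be sketched in a few lines of numpy (random ratings stand in for the MovieLens-100k submatrix here, and the `pull` helper is an illustrative name; note that with only 20 movies, $K=20$ equals the full rank, so the reconstruction is exact in this sketch):

```python
import numpy as np

# Random ratings standing in for the MovieLens-100k submatrix
# (943 users x 20 movies, 0 = missing entry).
rng = np.random.default_rng(0)
X = rng.integers(0, 6, size=(943, 20)).astype(float)

K = 20                                   # context dimension, as in TF-Agents
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_hat = (U[:, :K] * s[:K]) @ Vt[:K, :]   # dense low-rank approximation

def pull(i, j):
    """Context for user i and reward for recommending movie j."""
    context = U[i, :K] * s[:K]           # row i of U_K S_K
    return context, X_hat[i, j]
```

A bandit algorithm can then be evaluated by repeatedly sampling a user $i$, choosing a movie $j$, and receiving reward $\hat{X}_{ij}$.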
Having created this simulator, we can use it to evaluate various
bandit algorithms.
We use MLPs with 1 or 2 hidden layers, with 50 hidden units per layer.
Since the LiM2 and Neural-TS code
was not designed for this environment,
we restrict ourselves to the 9 methods
we have implemented ourselves.
We show the results in \cref{fig:movielens-reward}.
On this dataset we see that the EKF subspace methods
perform the best (by a large margin), followed by linear,
and then neural-linear, and finally EKF in the
original space (diagonal approximation).
We also see that the deeper model (MLP2) performs
worse than the shallower model (MLP1)
when using the neural linear approximation;
we attribute this to overfitting, due to not being Bayesian about
the parameters of the feature extractor.
By contrast, our fully Bayesian approach is robust to using
overparameterized models, even in the small sample setting.
\subsection{MNIST}
\label{sec:MNIST}
\begin{figure}
\centering
\includegraphics[height=2in]{arxiv-figures/mnist_reward}
\caption{
Reward for various methods on MNIST.
The maximum possible reward is 5000.
}
\label{fig:mnist-reward}
\end{figure}
So far we have only considered low dimensional problems.
To check the scalability of our method, we applied it to MNIST,
which has 784 input features and 10 classes (actions).
In addition to a baseline
linear model,
we consider three different kinds of deep neural network:
an MLP with 50 hidden units and 10 linear outputs (MLP1, with
$D=39,760$ parameters),
an MLP with two layers of 200 hidden units each and 10 linear outputs (MLP2
with $D=48,420$ parameters),
and a small convolutional neural network (CNN) known as LeNet5
\citep{LeCun98} with $D=61,706$ parameters.
\eat{
MLP 50-A size = (39760,)
MLP 500-500-A size = (648010,)
MLP 200-200-A 48420
Lenet5 size = (61706,)
}
Not surprisingly, we find that the CNN works better than MLP2,
which works better than MLP1 (see \cref{fig:mnist-reward}).
Furthermore, for any given model,
we see that our EKF-subspace method
outperforms the widely used neural-linear method,
even though the latter has unlimited memory
(and therefore potentially takes $O(T^2)$ time).
For this experiment, we use a subspace
dimensionality of $\nparamsSub=470$
(chosen using a validation set).
With this size of subspace, there is not a big difference
between using an SVD subspace and a random subspace.
However, using a full covariance in the subspace works better than a diagonal covariance
(compare blue bars with the green bars).
We see that all subspace methods work better than the neural linear baseline.
In the original parameter space, a full covariance is intractable,
and EKF with a diagonal approximation (red bar) works very poorly.
\subsection{Varying the subspace}
\label{sec:subspaceResults}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[height=1.75in]{arxiv-figures/adult_sub}
\caption{ }
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[height=1.75in]{arxiv-figures/covertype_sub}
\caption{ }
\end{subfigure}
\caption{
Reward vs dimensionality of the subspace
on (a) Adult, (b) Covertype.
Blue estimates the subspace using SVD, orange uses a random subspace.
}
\label{fig:dim-reward}
\end{figure*}
\eat{
\begin{figure*}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{arxiv-figures/adult_sub}
\caption{ }
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{arxiv-figures/covertype_sub}
\caption{ }
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{arxiv-figures/shuttle_sub}
\caption{ }
\end{subfigure}
\caption{
Reward vs dimensionality of the subspace
on (a) Adult, (b) Covertype, (c) Statlog.
Blue estimates the subspace using SVD, orange uses a random subspace.
}
\label{fig:dim-reward}
\end{figure*}
}
A critical component of our approach is
how we estimate the parameter subspace
matrix $\myvec{A} \in \mathbb{R}^{D \times d}$.
As we explained in \cref{sec:subspace}, we have two different approaches
for computing this:
randomly, or by applying SVD
to the parameter iterates computed by gradient descent during the warmup phase.
We show the performance vs $d$ for these two approaches
in \cref{fig:dim-reward} for a one-layer MLP
with $D \sim 40k$ parameters
on some tabular datasets.
We see two main trends:
SVD is usually much better than random, especially in low dimensions;
and performance usually increases with $\nparamsSub$, and then
either plateaus or even drops.
The drop in performance with increasing dimensionality is odd,
but is consistent with the results in
\citep{Larsen2021degrees}, who noticed exactly the same effect.
We leave investigating the causes of this to future work.
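A minimal sketch of the two subspace constructions compared above (assuming the warmup iterates are stacked into an $(L, D)$ array; `make_subspace` is an illustrative helper name):

```python
import numpy as np

def make_subspace(iterates, d, method="svd", seed=0):
    """Build a projection matrix A in R^{D x d} from warmup SGD iterates.

    method="svd": top-d right singular vectors of the centered iterates
    (i.e., PCA directions); requires d <= min(L, D).
    method="rnd": a random orthonormal basis, independent of the iterates."""
    L, D = iterates.shape
    if method == "svd":
        centered = iterates - iterates.mean(axis=0)
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        A = Vt[:d].T                 # columns are orthonormal PCA directions
    else:
        rng = np.random.default_rng(seed)
        A, _ = np.linalg.qr(rng.standard_normal((D, d)))
    return A

# Inference then runs over z in R^d, with parameters theta = theta_bar + A @ z.
```

Both variants return a matrix with orthonormal columns; only the SVD variant is adapted to the directions actually explored during warmup, which is consistent with its advantage at low $d$.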
\subsection{Time and space complexity}
\label{sec:time}
\begin{figure*}
\centering
\includegraphics[height=2in]{arxiv-figures/movielens_time_nolog}
\caption{
Running time (CPU seconds) for 5000 steps using various methods
on MovieLens.
}
\label{fig:timeMovies}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[height=2in]{arxiv-figures/mnist_time_log}
\caption{
Running time (CPU seconds) for 5000 steps using various methods
on MNIST. Note the vertical axis is logarithmic.
}
\label{fig:timeMNIST}
\end{figure*}
\eat{
\begin{figure*}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[height=1.5in]{../figures/statlog-time-vertical3.pdf}
\caption{ }
\label{fig:tabular-time}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[height=1.2in]{../figures/mnist_time_log_grouped.pdf}
\caption{ }
\label{fig:mnist-time}
\end{subfigure}
\caption{
Running time (econds) for 5000 steps using various methods.
(a)
Statlog data with 1 layer MLP (except for linear).
(b) MNIST data with different architectures.
Note the vertical scale is logarithmic.
}
\label{fig:time}
\end{figure*}
}
One aspect of bandit algorithms that has been overlooked in the literature
is their time and space complexity, which is important in many practical applications,
like recommender systems or robotic systems,
that may run indefinitely (and hence need bounded memory)
and need a fast response time.
We give the asymptotic complexity of each method
in \cref{tab:methods}.
In \cref{fig:timeMovies}, we show the empirical
wall clock time for each method when applied to the MovieLens dataset.
We see the following trends:
Neural-linear methods (orange) are the slowest,
with the limited memory version usually being slightly faster
than the unlimited memory version, as expected.
The EKF subspace methods
are the second slowest, with SVD slightly slower than RND,
and full covariance (blue) slower than diagonal (green).
Finally, the fastest method is diagonal EKF in the
original parameter space;
however, the performance (expected reward) of this method is poor.
It is interesting to note that our subspace models are faster than the linear baseline;
this is because we only have to invert a $d \times d$ matrix,
instead of inverting $N_a$
matrices, each of size $N_z \times N_z$.
In \cref{fig:timeMNIST}, we show the empirical
wall clock time for each method when applied to the MNIST dataset.
The relative performance trends (when viewed on a log scale)
are similar to the MovieLens case.
However, the linear baseline is much slower than most other methods,
since it works with the 784-dimensional input features,
whereas the neural methods work with lower dimensional latent features.
We also see that the neural linear method is quite slow,
especially when applied to CNNs,
and even more so in the unlimited memory setting.
(We could not apply LiM2 to MNIST since the code
is designed for the tabular datasets in the showdown benchmark.)
\eat{
Note that we show the running times for different implementations
of the linear model.
The original formulation of Bayesian updating
for the linear model proposed in
\citep{Riquelme2018}, and used in all subsequent
work, involves an unnecessary matrix inversion
to estimate the noise variance.
In the appendix we discuss a generalized version of the Kalman
filter that can avoid this operation,
and which runs much faster while giving identical results.
}
\eat{
In \cref{fig:mnist-time} we show the time
for some of the methods applied to MNIST, using MLP1, MLP2, and LeNet.
(All methods were run on a single TPU.)
The total time for 5000 steps is only a few seconds,
since the experiments are small scale,
but there are some clear trends,
which are robust across problems we have tried.
The slowest algorithms are
(full covariance)
EKF in the original parameter space
(which requires inverting an $D \times D$
matrix on each step),
and Lim2 (which requires repeated eigenvalue decomposition
of an $N_z \times N_z$ matrix on each step).\footnote{
We used the authors original TF1 code at
\url{https://github.com/ofirnabati/Neural-Linear-Bandits-with-Likelihood-Matching}.
}
(Indeed, both of these methods are too slow to apply to MNIST.)
The second slowest group of algorithms are the neural linear
methods, since they need to repeatedly invoke SGD over the stored data.
Finally we see that the EKF subspace methods are all very fast.
}
In addition to time constraints, memory is also a concern
for long-running systems. Most online neural bandit methods store
the entire past history of observations,
to avoid catastrophic forgetting.
If we limit SGD updates of the feature extractor
to a window of the last $M=100$ observations,
performance drops (see e.g., \cref{fig:tabular-reward}).
The Lim2 method attempts to solve this,
but is very slow, as we have seen.
Our subspace EKF method is both fast and memory efficient.
\eat{
\subsection{Off-policy evaluation on the Yahoo! dataset}
\label{sec:ads}
TBD.
}
\section{Introduction}\label{ch:introduction}
Increasing research and development efforts aim to produce new types of scalable
two-terminal nanodevices suitable for storing and processing information in both
traditional and neuromorphic computing architectures
\cite{burr_neuromorphic_2017, li_resistive_2017, sangwan_neuromorphic_2020}.
Emerging devices are often based on incompletely understood mechanisms, and may
exhibit strong non-linearity, negative differential resistance (NDR),
oscillations, stochasticity, and memory effects. In assessing the electrical
capabilities of resistive switching devices such as
ReRAM\cite{waser_redox-based_2009}, it is important to consider not only the
device material properties but also the effects of feedback, runaway, excess
electrical stresses, and the general role of the driving circuitry on
measurement data.
Electrical measurements of patterned devices are inevitably carried out in the
presence of resistance in series with the active material volume of the cell.
This series resistance, commonly of unknown value
\cite{hardtdegen_internal_2016, ibanez_non-uniform_2021}, may originate from a
combination of the electrode leads, inactive layers of the material stack, or
the triode region of a series FET current limiter. Internal and external series
resistance adds current-voltage feedback to the system that affects stability
and influences the operational behavior in important ways.
Modification of switching speeds, threshold voltage/currents, and the range of
achievable resistance states have all been observed and discussed theoretically
\cite{fantini_intrinsic_2012, hennen_switching_2019, gonzalez_current_2020,
maldonado_experimental_2021, hardtdegen_improved_2018, strachan_state_2013}.
A series resistance is often intentionally placed to play
the necessary role of an energy limiting mechanism, where its value can mean the difference between a functioning and
non-functioning device. As an experimental technique, it is useful
to be able to place different resistance values in series with the device under
test (DUT) and examine the effect on switching processes. Here, the linearity of
the resistive load is a convenient property for mathematical modelling; The
circuit response of the simple two element series configuration (2R) is easily
predictable through load line analysis in the ideal case, and is also
straightforward to treat analytically in the presence of commonly encountered
parasitics.
\begin{figure}
\maxsizebox{\columnwidth}{!}{\includegraphics[scale=1]{./figures/RC_circuit.pdf}}
\caption{A simple circuit configuration for device characterization uses a waveform generator and an external resistance in series with the DUT. In practice, the effect of the parasitic capacitance in parallel with the device requires careful attention.}
\label{fig:RC_circuit}
\end{figure}
Another advantage of the 2R configuration is ease of implementation relative to
integration of active FET devices on a test chip, with the latter requiring
substantial fabrication cycle time. However, integrating calibrated series
resistances on-chip is inflexible because each cell is attached to a single static
resistance value that cannot be changed or removed. Scenarios often arise that
give good reason to alter or remove the series resistance \textit{in situ}.
Notably, devices possessing steady states with S-type or N-type NDR each have
different criteria for stable characterization, and both types are
present in the SET and RESET processes of ReRAM,
respectively\cite{fantini_intrinsic_2012}. This imposes different requirements
for the series resistance value even within a single switching
cycle.
Where an adjustable series resistance is required, it must be implemented
externally to the wafer. The main practical challenge associated with this is
that parasitic capacitance $C_\mathrm{p}$ at the node shared with the DUT is highly
detrimental and difficult to avoid (Fig. \ref{fig:RC_circuit}). This stray
capacitance slows down the dynamic response of the circuit, degrading the
ability to control and to measure the voltage and current
experienced by the active cell volume versus time. Coupled with rapid
conductance transitions of the DUT, harmful overshoot transients are generated
that strongly impact the observed switching behavior and can cause irreversible
damage \cite{kinoshita_reduction_2008, lu_elimination_2012,
sharma_electronic_2014, meng_temperature_2020}.
While discrete through-hole resistors are a common external solution, their use entails manually switching between resistance values as required. However, the stochastic nature of resistive switching cells is such that they benefit greatly from a statistical treatment using automated measurements with programmable parameters. In this work we present an external circuit design providing an adjustable linear series resistance for flexible wafer-level device characterization. The circuit, based on a digital potentiometer (digipot) chip, is remotely programmable over USB across 528 resistance levels. Importantly, the voltage signal at the low-capacitance DUT node is directly amplified for synchronous measurement with the DUT current at a bandwidth over 200~MHz. We demonstrate the circuit operation for automated characterization of NDR devices and for cycling of bipolar ReRAM cells with high-speed voltage sweeps.
\section{Design} \label{ch:designPrinciples}
Applying Kirchhoff's current law, the dynamical equation governing the time evolution of the device voltage in the circuit of Fig.~\ref{fig:RC_circuit} is
\begin{align}\label{eq:diff_eq}
C_\mathrm{p} \frac{dV_\mathrm{d}(t)}{dt} &= \frac{V_\mathrm{in}(t) - V_\mathrm{d}(t)}{R_\mathrm{s}} - I_\mathrm{d}(t,~\ldots),
\end{align}
where $t$ is time and $I_\mathrm{d}$ in general depends on $V_\mathrm{d}$ and other internal state variables of the DUT. Possible steady state solutions lie on the $V_\mathrm{d}$-nullcline,
\begin{align}\label{eq:load_line}
V_\mathrm{d} = V_\mathrm{in} - I_\mathrm{d} R_\mathrm{s},
\end{align}
also known as the load line. For fast conductance switching events that are common in the targeted material systems, transient deviations from the load line occur as seen in the simplified situation of Fig.~\ref{fig:overshoot_sim}. During such transients, the excess energy delivered to the DUT due to capacitive discharge is significant and can strongly influence the end result of the switching process.
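A forward-Euler integration of Eq.~\ref{eq:diff_eq} reproduces this qualitative behavior. In the sketch below the DUT is modeled as a resistor whose value decays exponentially from HRS to LRS; the HRS/LRS values, $V_\mathrm{in}$, and the 1~ns time constant follow the caption of Fig.~\ref{fig:overshoot_sim}, while $R_\mathrm{s}=10~\mathrm{k}\upOmega$ and the step size are assumptions for illustration:

```python
import numpy as np

def simulate_transition(Cp, Rs=10e3, Vin=2.0, R_hrs=50e3, R_lrs=2e3,
                        tau=1e-9, dt=1e-11, t_end=50e-9):
    """Forward-Euler integration of Cp*dVd/dt = (Vin - Vd)/Rs - Id,
    with the DUT modeled as a time-varying resistor Rd(t)."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    Rd = R_lrs + (R_hrs - R_lrs) * np.exp(-t / tau)   # HRS -> LRS transition
    Vd = np.empty(n)
    Vd[0] = Vin * R_hrs / (Rs + R_hrs)                # start on the load line
    for k in range(n - 1):
        Id_k = Vd[k] / Rd[k]                          # instantaneous DUT current
        Vd[k + 1] = Vd[k] + dt * ((Vin - Vd[k]) / Rs - Id_k) / Cp
    Id = Vd / Rd
    return t, Vd, Id
```

During the transition, the slow discharge of $C_\mathrm{p}$ holds $V_\mathrm{d}$ above its load-line value while $R_\mathrm{d}$ collapses, so the current briefly overshoots its final steady-state value; smaller $C_\mathrm{p}$ shortens and reduces this overshoot.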
While the potential for overshooting transients is unavoidable in the context of
a passive feedback arrangement,
it is important that they are controlled to the extent possible and accurately
measured when they occur. The only way that overshoots can be reduced in the
discussed configuration is by minimizing the value of $C_\mathrm{p}$. Practically this
means that a coaxial cable, acting approximately as a parasitic capacitance of
100~pF/m, cannot be used to connect $R_\mathrm{s}$ to the DUT. The series resistance
should rather be placed as close as possible to the DUT, with the components
carefully selected and the printed circuit board (PCB) layout designed for low
contribution to the total $C_\mathrm{p}$.
High fidelity current measurements can be achieved by amplification of the
voltage across a ground referenced shunt termination following transmission over
a coaxial line. Using this type of current measurement,
positioning the DUT (rather than $R_\mathrm{s}$) adjacent to the shunt is generally
preferred because it avoids low pass filtering of the $I_\mathrm{d}$ signal, allowing
measurement of $I_\mathrm{d}$ at a high bandwidth that is independent of the resistance
state of the device. With prior knowledge of $R_\mathrm{s}$, Eq.~\ref{eq:load_line} is
often used to calculate $V_\mathrm{d}$ from measurements of $I_\mathrm{d}$ and
$V_\mathrm{in}$, but there are several drawbacks
associated with this method. One is the inaccuracy that comes from neglecting
the capacitive currents of the left-hand side of Eq.~\ref{eq:diff_eq}. Another
problem is measurement noise introduced by the $I_\mathrm{d} R_\mathrm{s}$ term, as the small $I_\mathrm{d}$
signal with high relative error is multiplied by a potentially large $R_\mathrm{s}$
value. It is therefore advantageous to directly amplify the voltage at the DUT
electrode rather than attempt to calculate it from other measured signals.
Following from these considerations, the basic intended configuration of
external instruments and the designed circuit can be seen in
Fig.~\ref{fig:measurement_setup}. If sufficient resolution is not obtained by
sampling the current with a bare oscilloscope input, additional voltage
amplification should be placed at the termination, where the use of several output
stages is beneficial for dynamic range. Note that the length of the coaxial
lines for DUT voltage and current sampling should be matched so that
post-processing is not needed for signal synchronization.
\begin{figure}
\includegraphics[width=1\columnwidth]{figures/overshoot_sim_HRS_50000_LRS_2000_V_2_subplot_backwards.pdf}
\caption{Simulations (using Eq.~\ref{eq:diff_eq}) of $I_\mathrm{d},V_\mathrm{d}$ transients following a rapid resistance transition of the DUT with $V_\mathrm{in} = 2$~V and different values of $C_\mathrm{p}$. Subplot \textbf{(A)} shows $I_\mathrm{d}$ vs. $t$ while \textbf{(B)} shows $I_\mathrm{d}$ vs. $V_\mathrm{d}$ of the same simulations. The DUT resistance value is assumed to change exponentially in time from a high resistance state (HRS) of $50~\mathrm{k}\upOmega$ to a low resistance state (LRS) of $2~\mathrm{k}\upOmega$ with time constant 1~ns. During and following the transition, the device is subjected to excess currents relative to the load line, an effect which is reduced by using lower $C_\mathrm{p}$ values.}
\label{fig:overshoot_sim}
\end{figure}
\begin{figure}[h]
\centering
\maxsizebox{\columnwidth}{!}{\includegraphics[scale=1]{./figures/setup.pdf}}
\caption{Schematic depiction of the overall measurement setup. An arbitrary
waveform generator (AWG) produces the driving signal $V_\mathrm{in}(t)$, and the
resulting current is sampled after the right electrode via the 50~$\upOmega$
shunt of the oscilloscope input. A second oscilloscope channel simultaneously captures the amplified voltage at the left electrode. A ground jumper provides a low inductance return path and reduces RF interference. All instruments are under computer control.
}
\label{fig:measurement_setup}
\end{figure}
A commercial integrated circuit, the DS1808 digipot from Maxim Integrated, was
chosen as the central component to control the series resistance, $R_\mathrm{s}$.
Internally it contains two separate potentiometers, each consisting of a chain
of 32 resistors whose junctions can be connected to a "wiper" output via a set
of CMOS transmission gates (analog switches). For each potentiometer, there are
32 available resistance settings spaced logarithmically (piecewise) from
approximately $300~\upOmega $ to $45~\mathrm{k\upOmega}$. According to the
published specifications, the DS1808 has a maximum parasitic capacitance of
10~pF and a maximum voltage range of $\pm12$~V\cite{DS1808}.
To increase the coverage of $R_\mathrm{s}$ values, the PCB is routed in a way that allows
connection of both potentiometers either in series or in parallel by connecting
or opening solder jumper pads. While a connection to a single potentiometer
remains a possibility, the number of unique settings increases to 528, spanning $600~\upOmega$ to $90~\mathrm{k}\upOmega$ for the series combination and $150~\upOmega$ to $22.5~\mathrm{k}\upOmega$ for the parallel combination. Because the digipots do not provide a low resistance setting below 300~$\upOmega$, a reed switch was also included on the PCB to allow shorting the input directly to the output.
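The figure of 528 settings is the number of unordered pairs of the 32 tap values ($32 \cdot 33/2 = 528$), since the two potentiometers are identical and both the series and parallel combinations are symmetric in their arguments. A quick numerical check (the logarithmic tap grid below is an assumed stand-in for the exact DS1808 tap resistances):

```python
import numpy as np

# Hypothetical tap values: a logarithmic grid from 300 ohm to 45 kohm,
# standing in for the 32 (piecewise-logarithmic) DS1808 tap resistances.
taps = np.geomspace(300, 45e3, 32)

# Unordered pairs (i <= j): series sums and parallel combinations.
series = {round(a + b, 6) for i, a in enumerate(taps) for b in taps[i:]}
parallel = {round(a * b / (a + b), 6) for i, a in enumerate(taps) for b in taps[i:]}

print(len(series), min(series), max(series))  # expect 528 values, 600 .. 90000 ohm
```

The series range $2 \times 300~\upOmega$ to $2 \times 45~\mathrm{k}\upOmega$ and the parallel range $150~\upOmega$ to $22.5~\mathrm{k}\upOmega$ follow directly from the endpoint taps.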
For amplification of the output voltage, a THS3091 current-feedback operational amplifier from Texas Instruments was used in a non-inverting configuration. This device features low distortion, low noise, a bandwidth of 210 MHz, and a slew rate of $\mathrm{7300 V /\upmu s}$ while adding only 0.1~pF parasitic capacitance\cite{THS3091}.
All on-board settings are controlled via an Atmega32u4 microcontroller programmed as a USB serial interface to the PC. Control of the $R_\mathrm{s}$ value is accessible using any programming language able to open a serial COM connection and send a simple command composed of three integer values corresponding to the wiper positions and the state of the bypass relay. The total time from issuing a serial command to $R_\mathrm{s}$ reaching a new value is limited by USB / I$^2$C communication, and is typically less than 300~$\upmu$s. The overall circuit design is visualized in the block diagram of Fig.~\ref{fig:completeSchematic}, and a corresponding fabricated PCB is pictured in Fig.~\ref{fig:probingsetup}.
\begin{figure}[h]
\centering
\maxsizebox{\columnwidth}{!}{\includegraphics[scale=1]{./figures/schematic2.pdf}}
\caption{Simplified schematic of the digipot measurement circuit. An Atmega32u4 microcontroller USB-serial interface communicates to the DS1808 digipot via an I$^2$C bus. A SPDT reed relay can be actuated in order to bypass the digipot and make a direct connection between input and output. The voltage at the output is amplified by a THS3091 non-inverting follower.}
\label{fig:completeSchematic}
\end{figure}
\begin{figure}[h]
\setlength{\fboxsep}{0pt}%
\setlength{\fboxrule}{1pt}%
\fbox{\includegraphics[width=.8\columnwidth]{figures/PXL_20210829_154822990_scaled.jpg}}
\caption{A photograph of the probing PCB contacting a test chip. A non-coaxial BeCu probe tip is soldered directly to the output of the main PCB (red), which uses SMA connectors for additional input and output signals. An elevated PCB (blue) contains the microcontroller USB interface (Adafruit ItsyBitsy 32u4). A square PCB module (green) functions as a low noise dual voltage regulator providing $\pm$ 12~V to the system. The right probe is directly connected to a 50~$\upOmega$ oscilloscope input.}
\label{fig:probingsetup}
\end{figure}
\section{Measurements}
For quasistatic measurements of classical NDR materials using a series resistance, saddle-node bifurcations can occur that separate the NDR characteristic into stable and unstable regions. The range of the unstable region is determined by the value of the series resistor, with the bifurcations occurring where the derivative of the NDR curve voltage with respect to current crosses $-R_\mathrm{s}$. While sweeping voltage, sudden current jumps are observed for sufficiently low values of $R_\mathrm{s}$ in S-type NDR (Fig.~\ref{fig:NDR2}A) and for sufficiently high values of $R_\mathrm{s}$ in N-type NDR (Fig.~\ref{fig:NDR2}B). Thus, an adaptable $R_\mathrm{s}$ value allows control of the conditions under which each of these characteristic curves, which contain important information, can be measured.
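The bifurcation condition can be illustrated numerically with a synthetic S-type characteristic (arbitrary units; the curve and threshold values below are illustrative, not measured device data):

```python
# Synthetic S-type NDR characteristic (illustrative, arbitrary units):
# V(I) = I^3 - 3 I^2 + 2 I has a negative-slope (NDR) branch, since
# dV/dI = 3 I^2 - 6 I + 2 reaches a minimum of -1 at I = 1.

def dV_dI(i):
    return 3 * i**2 - 6 * i + 2

def has_unstable_region(Rs, n=2001):
    """True if any bias point on 0 <= I <= 2 satisfies dV/dI < -Rs,
    i.e. lies between the saddle-node bifurcations for this Rs."""
    return any(dV_dI(2 * k / (n - 1)) < -Rs for k in range(n))

print(has_unstable_region(0.0))  # True: current jumps occur without Rs
print(has_unstable_region(2.0))  # False: Rs above |min dV/dI| stabilizes
```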
\begin{figure}[h]
\centering
\includegraphics[]{figures/NDR_load_line_jumps_S_and_N.pdf}
\caption{Voltage sweeping measurements of NDR devices using different resistance settings. \textbf{(A)} 90$\times$500$\times$500~nm S-type VCrOx device \cite{hennen_forming-free_2018}, stabilized for $R_\mathrm{s} > 400~\upOmega$. \textbf{(B)} N-type Ga-As tunnel diode 3\foreignlanguage{russian}{И}306E, stabilized for $R_\mathrm{s} = 0~\upOmega$.}
\label{fig:NDR2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[]{figures/pulsed_oscillations_digipot_oscillation_Rs_1083_2500mV_pulse_backwards.pdf}
\caption{Oscillations (57 MHz) occurring in a 30$\times$250$\times$250~nm \mbox{S-type} VCrOx NDR device\cite{hennen_forming-free_2018} following a square voltage pulse \mbox{$V_\mathrm{in} = 0~V \rightarrow 2.5~V$} using $R_\mathrm{s} = 1083~\upOmega$. With the line color mapped to time of measurement, \textbf{(A)} shows $I_d$ vs $t$ of the transient, and \textbf{(B)} shows the trajectory of the same data on the $I_\mathrm{d},V_\mathrm{d}$ plane.}
\label{fig:osc}
\end{figure}
Where the material mechanism of NDR is dynamic and reversible, the presence of
$C_\mathrm{p}$ makes the measurement circuit prone to transient oscillations, and stable
oscillatory limit cycles can also occur. Useful in these cases, the presented
circuit is able to capture high speed transients and accurately project them
onto the device $I_\mathrm{d},V_\mathrm{d}$ plane (Fig.~\ref{fig:osc}). These data can be used for
device modelling and circuit simulation, both relevant, for example, to
ongoing investigations of coupled oscillatory devices in neuromorphic systems.
In ReRAM devices, NDR behavior is mediated by a combination of Joule heating and
migration of point defects in the oxide material that locally increase its
conductivity\cite{waser_redox-based_2009}. Altering the $R_\mathrm{s}$ value allows these
transitions to be probed in different ways, as seen in the example measurements
of Fig.~\ref{fig:reram_measurement}. In analogy to the NDR measurements of
Fig.~\ref{fig:NDR2}, a fixed value of $R_\mathrm{s}$ can result in sudden and unstable
transitions for one or both of the SET or RESET processes. By switching the
value of $R_\mathrm{s}$ during the measurement (Fig.~\ref{fig:reram_measurement}C) it is
observed that runaway load line transitions can be suppressed by appropriate
selection of the external feedback.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{figures/ReRAM_vs_Rseries_subplot.pdf}
\caption{Cycling measurements of a 100 nm ReRAM
device\cite{hennen_current-limiting_2021} using bipolar triangular voltage
sweeps with 1~ms duration. Each subplot contains 20 consecutive switching cycles differentiated by color. The value of added series resistance is indicated by the dashed lines with gradient -1/$R_\mathrm{s}$. Transition behavior differs considerably when using \textbf{(A)} 2.4~k$\upOmega$, \textbf{(B)} 0~$\upOmega$, and \textbf{(C)} 11~k$\upOmega$ for positive polarity and 0~$\upOmega$ for negative polarity.}
\label{fig:reram_measurement}
\end{figure}
\section{Conclusion}
When performing electrical measurements of resistive switching systems, the use
of well understood circuitry is critical for realistic evaluation. With isolated
devices vulnerable to runaway transitions, a series resistance circuit provides
a simple means for control and tractable analysis of switching processes. In
this context, parasitic capacitance is an important factor, and the values of
both $R_\mathrm{s}$ and $C_\mathrm{p}$ lead to different switching outcomes in general. We have
presented a circuit design for synchronized measurement of switching
trajectories at high speed while using a programmable linear series resistance
with low parasitic capacitance. Using this circuit, possible implications of the
physical processes that accompany runaway transitions can be conveniently
investigated, yielding insights into how optimal control can eventually be
achieved.
\section*{Data Availability}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\bigskip
\section{Introduction}
\label{sec:Introduction}
Each site, or peer, of a distributed system
has exclusive ownership of its contents
and its own policy for data sharing.
For collaborative work between peers,
a peer may expect partner peers to receive some of its
data and ask them to return the updated data,
or it may ask partner peers to provide their data
for use with its local data.
This kind of data sharing is common in
real-world systems.
Although data sharing without updates
is simple,
collaborative data sharing with update
propagation of shared data
poses significant problems due to concurrent updates
of different instances of the same data.
Which updates should be allowed, and how updates should
be propagated to all the related peers, are the typical
issues to be solved.
We have been discussing
``What should be shared'' in collaborative data
sharing,
but not so much
``How it should be shared''.
Concerning the ``what'',
a seminal work on
{\em Collaborative Data Sharing}
~\cite{Ives2005,Karvounarakis2013}
brought several issues upon the specification of
data to be shared.
An approach based on
the view-updating technique
with {\em Bidirectional Transformation}
\cite{Lenses,Bohannon:08,Hidaka:10,HuMT08,HSST11}
has been proved promissing.
Among others,
the {\em Dejima} 1.0 architecture
\cite{dejima-theory:2019,SFDI-Asano2020}
and the {\em BCDS Agent}~\cite{Takeichi21JSSST}
based on Bidirectional Transformation
reveal the effectiveness of using
bidirectional transformation for peers to
control the accessibility of local data.
The basic scheme of these ideas is based
on {\em state-based} semantics: the
data to be shared is compared and
exchanged between peers.
Although this is straightforward in a sense,
it has several drawbacks:
the size of the messages
for data exchange tends to grow, and
conflicts may occur due to
concurrent updates.
Seen from the other side,
sharing data between peers
is very similar to
synchronizing distributed replicas;
the two are almost equivalent apart from their
original intentions.
Our problem to be solved
is thus how to synchronize distributed replicas in
serverless
distributed systems.
There are various kinds of
{\em Conflict-free Replicated Data Types}
(CRDTs)~\cite{Shapiro2011CRDTs}.
The CRDT approach restricts the
operations that may act on replicated data;
the {\em Grow-Only-Set} (G-Set) CRDT, for example, allows only
insertion into the set.
In contrast, based on operational semantics,
we propose another scheme for conflict-free
data sharing using {\em effectful}
operations that enable us to insert and delete
elements as we like~\cite{CCSS2021:Takeichi}.
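For contrast, the restriction that makes a G-Set conflict-free can be sketched in a few lines (illustrative code, not tied to any particular CRDT implementation):

```python
class GSet:
    """Minimal Grow-Only-Set CRDT sketch: insertion only, no deletion.

    Because the only permitted update is insertion, concurrent replicas
    can always be merged by set union, with no conflicts by construction.
    """
    def __init__(self):
        self.items = set()

    def insert(self, x):          # the only permitted update
        self.items.add(x)

    def merge(self, other):       # join: commutative, associative, idempotent
        self.items |= other.items

a, b = GSet(), GSet()
a.insert(1); b.insert(2); b.insert(1)   # concurrent updates
a.merge(b); b.merge(a)                  # merge in either order
print(a.items == b.items == {1, 2})     # True: replicas converge
```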
In this paper, we explore a novel scheme
for collaborative data sharing
based on an {\em operational} approach.
The semantics of collaborative data sharing
is redefined in terms of operations performed on each
peer's local data.
As a natural course, peers then exchange operations
with each other for effective data sharing.
Our {\em Operation-based Collaborative Data Sharing}
(OCDS)
can solve the problem concerning possible
conflicts between concurrent operations
by conflict-free synchronization for eventual
consistency.
Moreover, it accepts a wider class of operations than
CRDTs do; this is the most remarkable feature of our OCDS
compared with CRDTs.
\section{Operations
and Transformations in Data Sharing}
\label{sec:OprationTransformation}
The {\em Dejima} architecture mentioned in the previous
section configures peers with local data called
{\em Base Table} and several additional
{\em Dejima Tables}.
The shared data is located both in the Dejima Tables in
peers $P$ and $Q$ as illustrated in
Fig.~\ref{fig:Dejima}.
{\em Bidirectional Transformation} is employed to
convert data between the Base Table
and the Dejima Table. It controls what to
provide and what to accept for data sharing.
\begin{figure}[htb]
\centering
\includegraphics
[width=0.75\linewidth]{Figure/DejimaArchitecture.pdf}
\vspace*{0pt}
\caption{Dejima Architecture for
Collaborative Data Sharing}
\label{fig:Dejima}
\end{figure}
As described above, the Dejima architecture relies on
{\em state-based} semantics.
We will give another definition of {\em collaborative
data sharing} based on operations
on the local data.
We assume that the configuration of peers
is the same as the Dejima
architecture without the Dejima Table which
corresponds to
the state of shared data.
\subsubsection*{Updating Operation}
Peer $P$ has local data $D_P$ of (structured) type
$\mathcal{D}_P$ with operations on $\mathcal{D}_P$.
The operation takes postfix form
$\odot_P p::\mathcal{D}_P \rightarrow \mathcal{D}_P$
where $p \in{\cup_{D_P\in \mathcal{D_P}}}D_P$,
which maps $D_P$ to $D_P \odot_P p \in \mathcal{D}_P$,
i.e., $D_P \mapsto D_P \odot_P p$.
From operational point of view,
``$\odot_P p$ updates $D_P$ into $D_P \odot_P p$''.
Here, $p$ is not necessarily a single element, but may be composed of several elements in
${\cup_{D_P\in \mathcal{D_P}}}D_P$.
But, for simplicity, we write them as a single $p$.
Operation $\odot_P p$ is derived from operator $\odot_P$
in $P$ indexed by $p$.
Operator $\odot_P$ stands for a generic symbol for
$\oplus$, $\ominus$, $\otimes$, $\oslash$, etc.
For convenience, we use a postfix identity operation
``$!$'' which does not change $D_P$, i.e., $D_P!=D_P$
for any $D_P \in \mathcal{D}_P$.
\subsubsection*{Transformation Function}
In addition to these operations, transformation functions
$\langle get_P^q, put_P^q \rangle$ are provided,
where
$get_P^q::\mathcal{D}_P \rightarrow \mathcal{D}$ and
$put_P^q::\mathcal{D} \rightarrow \mathcal{D}_P$
for some $\mathcal{D}$.
As a refinement, $put_P^q$ may use $\mathcal{D}_P$
along with $\mathcal{D}$, as
$put_P^q:: \mathcal{D}_P \times
\mathcal{D} \rightarrow \mathcal{D}_P$.
The reason why $\mathcal{D}_P$
appears in the domain
will become clear shortly.
The same holds for the partner peer $Q$ with
$\mathcal{D}_Q$ and
$\langle get_Q^p, put_Q^p \rangle$,
and the $\mathcal{D}$
appearing in the definitions in $Q$ is the same as the $\mathcal{D}$
in $P$.
Thus, $\mathcal{D}$ ``combines'' $P$ and $Q$
as a connector for data exchange.
Suffixes for identifying the peer, e.g., $P$
in $\mathcal{D}_P$, $D_P$, $\odot_P$, ...
are omitted when they are clear from the context.
\subsubsection*{Properties of Transformation Functions}
Given data $D_P \in \mathcal{D}_P$ in $P$,
$get_P^q(D_P)$ gives some $D \in \mathcal{D}$.
Then $Q$ shares data
$D_Q=put_Q^p(D)\in \mathcal{D}_Q$,
which corresponds to $D_P$ in $P$.
And reciprocally, $P$ shares
$D_P'=put_P^q(D')=put_P^q(get_Q^p(D_Q'))\in \mathcal{D}_P$
which corresponds to $D_Q'$ in $Q$.
Intuitively, we may understand that
part of $D_P$ and part of $D_Q$ are shared
each other.
If it happened to be $D=D'$,
it is natural to assume that $D_P=D_P'$
and $D_Q=D_Q'$ hold with the above equalities,
so
\begin{equation*}
(put_P^q\cdot get_Q^p)\cdot(put_Q^p\cdot get_P^q)=
(put_Q^p\cdot get_P^q)\cdot(put_P^q\cdot get_Q^p)=
id
\end{equation*}
hold.
Then, what should we require for
$get$ and
$put$
in each peer?
Considering that
$\langle get_P^q, put_P^q \rangle$ and
$\langle get_Q^p, put_Q^p \rangle$
are prepared independently in $P$ and $Q$,
it is reasonable to ask for
\begin{equation*}
\begin{split}
put_P^q \cdot get_P^q &= get_P^q \cdot put_P^q = id\\
put_Q^p \cdot get_Q^p &= get_Q^p \cdot put_Q^p = id
\end{split}
\end{equation*}
This is what we call the ``Round-tripping'' property of
{\em well-behaved} bidirectional transformation.
And we require our $get$ and $put$ to satisfy
this property.
To define a well-behaved bidirectional transformation
$\langle get_P^q, put_P^q \rangle$,
taking $\mathcal{D}_P$ along with $\mathcal{D}$
as the domain of
$put_P^q$
is of great help.
For this reason, we sometimes define it as
$put_P^q:: \mathcal{D}_P \times
\mathcal{D} \rightarrow \mathcal{D}_P$.
From our operational viewpoint, this $put_P^q$
updates the current instance $D_P$
of mutable data $\mathcal{D}_P$ with
$D\in \mathcal{D}$
to produce a new instance
$D_P' \in \mathcal{D}_P$.
This is natural and reasonable
in that we may use the current data when updating
mutable data.
\section{Operation-based Collaborative Data Sharing}
\label{sec:Op-basedCCDS}
A local operation $\odot_P p$
affects a few elements of the structured data $D_P$
at a time, and therefore $D_P \odot_P p$
is a new instance $D_P'\in \mathcal{D}_P$
which is almost the same as $D_P$
except for some differing elements.
A simple example of $\mathcal{D}_P$ is the set with
standard operations ``{\sf insert} an element $p$''
(written as $\cup\{p\}$) and
``{\sf delete} an element $p$''($\setminus\{p\}$).
As for collaborative data sharing between $P$ and $Q$,
a straightforward method for synchronization
would be to exchange $D_P$ and $D_Q$ through $D$
with transformation by $get$s and $put$s
at the gateways of $P$ and $Q$.
This approach is called ``state-based'' because
the state of the data is wholly concerned
in discussion.
Although the state-based approach to collaborative
data sharing is most common,
it is not suitable for {\em conflict-free}
strategies that aim to do something gradually
in $P$ and $Q$ for the shared part of $D_P$ and
the part of $D_Q$ to arrive at the same state
eventually.
The conflict-free approach liberates us from the
necessity of
global locks for exclusive
access to the whole distributed data to avoid
conflicts between concurrent updates.
This is particularly useful in distributed systems
with no coordination by any peers such as P2P-configured
or composed of highly independent peers.
While the {\em Conflict-free Replicated Data Type}
(CRDT) restricts operations so that the data in each
peer can be easily merged,
our conflict-free approach allows a wider class of
operations that are common to general data structures.
Recently, a novel scheme for
{\em Conflict-free Collaborative Set Sharing}
\cite{CCSS2021:Takeichi} is proposed
using operations performed so far instead of
directly merging the current data.
Although this concentrates on the set data,
it can be extended to our data sharing where
transformations lie between peers' local data.
\subsection{Homomorphic Data Structures for Data Sharing}
\label{sec:HomDataStructure}
If
$D_Q=put_Q^p(get_P^q(D_P))$
and
$D_P=put_P^q(get_Q^p(D_Q))$
hold,
we say that ``$D_P$ and $D_Q$ are {\em consistent}''
and write this as $D_P\sim D_Q$.
In other words, consistent $D_P$ and $D_Q$ have
corresponding parts which are shared each other
through intermediate data $D$ between them.
Assuming that $D_P\sim D_Q$,
then what happens when operation
$\odot_P p$ is performed on $D_P$ to
produce $D_P\odot_P p$?
\begin{itemize}
\item If $get_P^q(D_P\odot_P p)$ gives some
$D'$ which is to be transformed next by $put_Q^p$, and
\begin{itemize}
\item If $put_Q^p(D')$ gives
some $D_Q'\in \mathcal{D}_Q$,
then $D_P\odot_P p \sim D_Q'$.
\item Otherwise, $D_P\odot_P p$
has no corresponding instance in $\mathcal{D}_Q$.
\end{itemize}
\item Otherwise, $D_P\odot_P p$
has no corresponding instance in $\mathcal{D}_Q$.
\end{itemize}
Since $\odot_P p$ changes some elements of $D_P$,
we expect that $D'$ and $D_Q'$
likewise change only some elements, as $D'=D \odot x$
and $D_Q'=D_Q \odot_Q q$ with operations $\odot x$ and $\odot_Q q$.
In most of our data sharing applications,
$D_P$, $D_Q$ and intermediate data $D$
are {\em homomorphic} each other in that the above
conditions are satisfied.
In this respect, our transformation functions $get$
and $put$
partly provide {\em homomorphism}.
The simplest example would be the case where all the
related data structures are sets or SQL tables, etc.
In general, these are not necessarily the same but are
homomorphic.
We also need homomorphism on the operations themselves
for our operation-based data sharing.
\subsubsection*{Homomorphic Data Structures with
Operations}
Data type
$\langle\mathcal{A},\circledcirc_A\rangle$
is closed with respect to operations $\circledcirc_A a$
for any $a\in \cup_{A\in\mathcal{A}}A$,
where operator symbol
$\circledcirc_A :: (\mathcal{A},\cup_{A\in\mathcal{A}}A)
\rightarrow \mathcal{A}$
represents any operators in $\mathcal{A}$.
We simply write here $\circledcirc_A$ for the set of
operators in $\mathcal{A}$ and use the same symbol for
one of them as a generic operator in an overloaded manner.
The operation
$\circledcirc_A a :: \mathcal{A} \rightarrow \mathcal{A}$
is postfixed to the operand
$A \in \mathcal{A}$ to produce
$A'=A \circledcirc_A a \in \mathcal{A}$.
This models the {\em mutable} state data $A$
with operations $\circledcirc_A a$ on $A$ using some
element $a$.
\subsubsection*{Definition of Homomorphic Data Types}
Data types
$\langle\mathcal{A},\circledcirc_A\rangle$
and
$\langle\mathcal{B},\circledcirc_B\rangle$
are {\em homomorphic} if there exist
$h::\mathcal{A} \rightarrow \mathcal{B}$
and overloaded
$h::\circledcirc_A \rightarrow \circledcirc_B$
satisfying
\begin{equation*}
\begin{split}
&\forall A\in \mathcal{A}.
\exists B \in \mathcal{B}. B=h (A)\\
&\forall A\in \mathcal{A}. \forall a\in
\cup_{A\in\mathcal{A}}A.
\exists B \in \mathcal{B}.\exists b \in
\cup_{B\in\mathcal{B}}B.
B \circledcirc_B b =
h(A \circledcirc_A a )
\end{split}
\end{equation*}
We assume that every data type
$\langle\mathcal{A},\circledcirc_A\rangle$
has an identity operation ``$!$'' which does
not affect the state of data. That is, for any
$A\in\mathcal{A}$, $A!=A$ holds.
In short,
for operation-based collaboration, we require
operations to be exchanged between
homomorphic data types so that an operation on a peer
corresponds to an operation on
the partner peer.
\subsubsection*{Examples of Homomorphic Data Types}
Previous works on state-based data sharing
with transformation
\cite{dejima-theory:2019,SFDI-Asano2020,Takeichi21JSSST}
exclusively deal with SQL databases as local data.
Specifically,
if the intermediate data $D$ is defined
as the view of the SQL
table $D_P$ of the local data,
it is obvious that $D_P$ and $D$ are homomorphic
because $D$ is produced by selection and projection of
$D_P$.
The Dejima architecture allows so-called SPJU
(Select-Project-Join-Union) queries by the SQL's
{\sf SELECT-FROM-WHERE-UNION} construct
for the view. But, since it is not clear what
operations
are permitted on (multiple) SQL tables of $D_P$,
more work is needed to ensure
that $D_P$ and $D$ are
homomorphic. We leave this for the future.
As a demonstration of the independence of implementation
of the local data $D_P$ from the intermediate data $D$
of our operation-based data sharing,
consider the case that $D$ is a set, i.e.,
no duplicates in aggregation, and $D_P$ implements
set by the binary search tree.
In this case, we can easily give a homomorphism mapping
from $D_P$ to $D$;
alternatively, it can be grounded in the data abstraction
mechanism.
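A sketch of this homomorphism (with an ad hoc tuple representation of the binary search tree; names and representation are illustrative only):

```python
# Homomorphism h from a binary-search-tree implementation of a set
# (the local data D_P) to the abstract set (the intermediate data D).
# A tree is None or a tuple (key, left, right).

def bst_insert(tree, x):
    if tree is None:
        return (x, None, None)
    key, left, right = tree
    if x < key:
        return (key, bst_insert(left, x), right)
    if x > key:
        return (key, left, bst_insert(right, x))
    return tree  # already present: no duplicates

def h(tree):
    """Map a BST to the set of its keys (the homomorphism)."""
    if tree is None:
        return frozenset()
    key, left, right = tree
    return h(left) | {key} | h(right)

# Tree insertion corresponds to set insertion under h:
#   h(bst_insert(T, x)) == h(T) | {x}
t = None
for x in [5, 2, 8, 2]:
    t = bst_insert(t, x)
print(h(t) == frozenset({2, 5, 8}))  # True
```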
As for the relationship of homomorphic data types
with state machines, see the Appendix.
\subsection{Transformation of Operations}
\label{sec:TransOp}
For homomorphic data structures
$\langle\mathcal{D}_P,\odot_P\rangle$,
$\langle\mathcal{D},\odot\rangle$
and
$\langle\mathcal{D}_Q,\odot_Q\rangle$,
operations are transformed according to
the homomorphisms given by $get$ and $put$.
We write
\begin{verse}
$\odot_P {}_P^q\!{\rightarrowtail} \odot x$,
if $get_P^q(D_P\odot_P p)$ gives $D \odot x$.\\
$\odot x ~{\looparrowright}_Q^p\odot_Q q$,
if $put_Q^p(D\odot x)$ gives $D_Q \odot_Q q$
\end{verse}
Our Operation-based Collaborative Data Sharing wholly
sends and receives operations instead of data as shown
in the diagram of Fig.~\ref{fig:Op-basedCDS}.
\begin{figure}[htb]
\centering
\includegraphics
[width=0.75\linewidth]{Figure/Op-basedCDS.pdf}
\vspace*{0pt}
\caption{Operation-based Collaborative Data Sharing}
\label{fig:Op-basedCDS}
\end{figure}
In this way, peers
communicate
updating operations to and from each other with
necessary transformation at the gateway of the peer.
\section{Architecture for
Collaborative Data Sharing}
\label{sec:Architecture}
Peers of our Collaborative Data Sharing
run concurrently and they transmit their updates
asynchronously to and from each other.
Thus, the {\em payload} of the communication message
between peers is the updating operation
from a peer to its partner peer.
The peer as the {\em client} sends local operations
to the partner peers,
and as the {\em server} receives remote operations
from the partners and then perform necessary operations
to reflect them on the local data.
In these processes, each peer works as follows.
Peer $P$ asynchronously receives local operations
from the user and remote operations from
the partner peers.
These operations are to be performed on $D_P$
and
are stored in the queue for serialized access to $D_P$.
So far, we have explained our scheme
solely with peers $P$ and $Q$.
In general, however, each peer $P$ has
multiple partner peers connected
in the system.
We therefore need to clarify how $P$ should
deal with all of its partner peers.
In peer $P$, every update on data $D_P$
is propagated to all the partner peers
$k=\cdots, Q, \cdots$
through the outgoing communication ports prepared
for each peer $k$ after it is transformed by
${}_P^k\!{\rightarrowtail}$.
And remote operations are received
asynchronously from the partner peers
$k=\cdots, Q, \cdots$ through the incoming ports, each
prepared for peer $k$, and transformed by
${\looparrowright}_P^k$.
As the local and remote operations arrive asynchronously,
the peer needs to provide queues for them to perform
the operations on
the local data.
In addition to this,
we follow the scheme of {\em Conflict-free Collaborative
Set Sharing} described in \cite{CCSS2021:Takeichi}
for conflict-free synchronization.
We call
the implementation of the peer as described above
by the name ``OCDS Agent''.
The OCDS Agent is developed to
achieve conflict-free synchronization
of the local data using internal queues
for serialization of asynchronous access
to the local data and asynchronous
transmission of operations as
illustrated in Fig.~\ref{fig:OCDSAgent}.
\begin{figure}[htb]
\centering
\includegraphics
[width=0.75\linewidth]{Figure/OCDSAgent.pdf}
\vspace*{0pt}
\caption{OCDS Agent for
Operation-based Collaborative Data Sharing}
\label{fig:OCDSAgent}
\end{figure}
\section{An Example of Operation-based
Collaborative Data Sharing}
\label{sec:Example}
\subsubsection*{Sharing Double and Triple Numbers}
Let $D_P$ be a set of integers with operations
``{\sf insert} an element $p$''
($\cup\{p\}$) and
``{\sf delete} an element $p$''($\setminus\{p\}$).
The bidirectional transformation defined in $P$ is
\begin{equation*}
\begin{split}
get_P^q(D_P)&=\{p~|~p\%2=0, ~p \in D_P\}\\
put_P^q(D_P,D)&=D_P \setminus
get_P^q(D_P) \cup \{x~|~x\%2=0, ~x \in D\}
\end{split}
\end{equation*}
where $\%$ represents the modulo operation.
Functions $get_P^q$ and $put_P^q$ define the mapping for
view-updating in the state-based approach:
$get_P^q$ produces the view $D=get_P^q(D_P)$, and
$put_P^q$ reflects an update $D'$ of $D$ onto the
source as $D_P'=put_P^q(D_P,D')$.
We can see that this bidirectional transformation
$\langle get_P^q, put_P^q \rangle$
satisfies the round-tripping property and so
is well-behaved.
Similarly, defined in $Q$:
\begin{equation*}
\begin{split}
get_Q^p(D_Q)&=\{q~|~q\%3=0, ~q \in D_Q\}\\
put_Q^p(D_Q,D)&=D_Q \setminus
get_Q^p(D_Q) \cup \{x~|~x\%3=0, ~x \in D\}.
\end{split}
\end{equation*}
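These definitions translate directly into executable form; the following sketch (Python standing in for the abstract notation) checks the round-tripping property and the initial shared state used in the story that follows:

```python
def get_P(D_P):            # P offers its double (even) numbers
    return {p for p in D_P if p % 2 == 0}

def put_P(D_P, D):         # P accepts only double numbers from D
    return (D_P - get_P(D_P)) | {x for x in D if x % 2 == 0}

def get_Q(D_Q):            # Q offers its triple numbers
    return {q for q in D_Q if q % 3 == 0}

def put_Q(D_Q, D):         # Q accepts only triple numbers from D
    return (D_Q - get_Q(D_Q)) | {x for x in D if x % 3 == 0}

D_P, D_Q = {1, 2, 3, 4}, {2, 3, 4, 9}

# Round-tripping: putting back the unmodified view restores the source,
# and getting after a put recovers the accepted part of the view.
assert put_P(D_P, get_P(D_P)) == D_P
assert get_P(put_P(D_P, {6})) == {6}

# Initial shared data D = get_P(D_P) & get_Q(D_Q) is empty, so D_P ~ D_Q.
print(get_P(D_P) & get_Q(D_Q))  # set()
```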
Then, we use these for data sharing in a way that the
intermediate data $D$ represents shared data
consisting of elements in both $get_P^q(D_P)$
and $get_Q^p(D_Q)$, i.e.,
$D=get_P^q(D_P)\cap get_Q^p(D_Q)$.
In brief, $D$ contains sextuple numbers,
i.e., numbers divisible by 6, common to $D_P$ and $D_Q$.
We can confirm by the state-based semantics
that local updates in $P$ and $Q$ are faithfully
reflected in both $D_P$ and $D_Q$ through
the Dejima $D$ if this condition holds.
Now, we are going to our operation-based sharing.
Recall that $get_P^q$ tells us that
$P$ is willing to share double numbers with $Q$,
and that $get_Q^p$ tells us that $Q$ is willing
to share triple numbers with $P$.
However, the $put$ functions tell us that
$P$ will accept only double numbers, and
$Q$ will accept only triple numbers from the
common intermediate data $D$.
Here is a short story:
\begin{enumerate}
\item Start from $D_P=\{1,2,3,4\}$
and $D_Q=\{2,3,4,9\}$.
They are consistent, i.e., $D_P\sim D_Q$ since $D=\{\}$.
\item Network connection fails.
\item Concurrently, $P$ does $\cup\{6\}$
and $Q$ does $\setminus \{4\}$.
\item Connection restored, and synchronization
processes start
in $P$ and $Q$ independently.
\end{enumerate}
Then, what happens in synchronization processes?
These operations are in fact {\em effectful}\footnote{
The concept of the ``effectful'' operation
is described in \cite{CCSS2021:Takeichi}.}
in that $\cup \{6\}$ is applied to $D_P$
which does not contain $6$,
and $\setminus \{4\}$ is applied to $D_Q$ which
does contain $4$.
In Step 3, $P$'s local data becomes
$D_P'=D_P\cup\{6\}=\{1,2,3,4,6\}$,
and $Q$'s local data becomes
$D_Q'=D_Q\setminus \{4\}=\{2,3,9\}$.
Synchronization proceeds as
\begin{itemize}
\item Since
$\cup \{6\}{}_P^q\!{\rightarrowtail}\cup
\{6\}{\looparrowright}_Q^p$, $6$ is added to
$D_Q'$ to produce
$D_Q''=D_Q'\cup \{6\}= \{2,3,6,9\}$.
\item On the other direction,
since $\setminus\{4\}$ in $Q$ cannot be passed
to ${}_Q^p\!{\rightarrowtail}$ because
$get_Q^p$ rejects $4$,
this operation does not arrive at $P$.
\end{itemize}
Thus, these synchronization processes concurrently
done in $P$ and $Q$ lead $P$'s data and $Q$'s data
to the consistent state,
i.e., $D_P'\sim D_Q''$ with $D=\{6\}$.
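The first story can be replayed operationally; in this sketch, an operation crosses the gateway only if its element passes $get$ on the sender and $put$ on the receiver, which for this pair of peers amounts to divisibility by 6 (the simulation and its names are illustrative):

```python
def shared(x):
    # An element crosses the P <-> Q gateway only if P's get (evens)
    # and Q's put (triples) both let it through: multiples of 6.
    return x % 2 == 0 and x % 3 == 0

def apply_remote(D, ops):
    """Apply the operations that survive the gateway filter to D."""
    for kind, x in ops:
        if shared(x):
            D = D | {x} if kind == "ins" else D - {x}
    return D

# Steps 1-3: concurrent effectful updates during the network failure.
D_P = {1, 2, 3, 4} | {6}        # P does "insert 6"
D_Q = {2, 3, 4, 9} - {4}        # Q does "delete 4"

# Step 4: both peers synchronize independently.
D_Q = apply_remote(D_Q, [("ins", 6)])   # P's op reaches Q
D_P = apply_remote(D_P, [("del", 4)])   # Q's op is blocked (4 not shared)

print(sorted(D_P), sorted(D_Q))  # [1, 2, 3, 4, 6] [2, 3, 6, 9]
```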
Here is another story:
In Step 3 above, what happens if
``$Q$ does $\setminus \{6\}$''
instead of
``$Q$ does $\setminus \{4\}$''?
Note that these operations are not effectful because
$\setminus \{6\}$ here
is applied to $D_Q$ which does not contain $6$.
During the period of network failure,
$P$'s local data becomes
$D_P'=D_P\cup\{6\}=\{1,2,3,4,6\}$ as before,
and $Q$'s local data remains as it was, because
$D_Q'=D_Q\setminus\{6\}=\{2,3,4,9\}$.
Synchronization proceeds as follows
after the network connection
is restored.
\begin{itemize}
\item $\cup \{6\}{}_P^q\!{\rightarrowtail}\cup
\{6\}{\looparrowright}_Q^p$
causes changes
$D_Q''=D_Q'\cup \{6\}= \{2,3,6,9\}$
as the previous case.
\item And since
$\setminus \{6\}{}_Q^p\!{\rightarrowtail}
\setminus \{6\}{\looparrowright}_P^q\setminus \{6\}$,
$P$ may produce a new state
$D_P''=D_P'\setminus\{6\}=\{1,2,3,4\}$.
\end{itemize}
If the synchronization in $P$ proceeds as above,
$P$ loses $6$ which was added in Step 3,
while it is added to $Q$'s local data by $Q$'s
synchronization.
This breaks the consistency of
$D_P''$ and $D_Q'$.
From these examples,
we observe that effectful set operations
in concurrent updates are essential
for conflict-free synchronization.
They effectively avoid insertion/deletion
conflicts in synchronization.
This is an extension of the scheme
for data sharing described in \cite{CCSS2021:Takeichi}.
Here, we used transformations
$get$ and $put$ at the gateways of the peers.
\section{Remarks}
\label{sec:Remarks}
We can employ our OCDS Agent to configure
serverless distributed systems while
ensuring eventual consistency of peers' local data.
We may consider this an alternative to
the Dejima-style data sharing~\cite{dejima-theory:2019},
which has been
implemented to ensure global strong
consistency by locking along the path of the update
propagation.
Our conflict-free approach allows peers to
leave and join at any time, and it can tolerate
network failure and restoration.
A remarkable feature of our OCDS
is that it enables each peer to control
what to provide and
what to accept when sharing with other peers.
This contrasts clearly with other conflict-free
data sharing or data synchronization of replicated
data such as CRDTs.
\begin{comment}
\bigskip
\noindent
{\bf Acknowledgements}
This work is partially supported by the Japan Society
for the Promotion of Science (JSPS) Grant-in-Aid for
Scientific Research (S) No.~17H06099
``Bidirectional Information
Systems for Collaborative, Updatable, Interoperable,
and Trusted Sharing''.
The author would like to thank Zhenjiang Hu for
conducting this project,
and also the members of the project for
discussion.
\end{comment}
\bibliographystyle{abbrv}
\section{Introduction}
Community detection is the task of dividing a network --- typically one
which is large --- into many smaller groups of nodes that have a similar
contribution to the overall network structure. With such a division, we
can better summarize the large-scale structure of a network by
describing how these groups are connected, instead of each individual
node. This simplified description can be used to digest an otherwise
intractable representation of a large system, providing insight into its
most important patterns, how it relates to its function, and the
underlying mechanisms responsible for its formation.
Because of its important role in network science, community detection
has attracted substantial attention from researchers, specially in the
last 20 years, culminating in an abundant literature (see
Refs.~\cite{fortunato_community_2010,fortunato_community_2016} for a
review). This field has developed significantly from its early days,
especially over the last 10 years, during which the focus has been
shifting towards methods that are based on statistical inference (see
e.g. Refs.~\cite{moore_computer_2017,abbe_community_2017,peixoto_bayesian_2019}).
Despite this shift in the state-of-the-art, there remains a significant
gap between the best practices and the adopted practices in the use of
community detection for the analysis of network data. It is still the
case that some of the earliest methods proposed remain in widespread
use, despite the many serious shortcomings that have been uncovered
over the years. Most of these problems have been addressed with more
recent methods, which have also contributed to a much deeper theoretical
understanding of the problem of community
detection~\cite{decelle_asymptotic_2011,zdeborova_statistical_2016,moore_computer_2017,abbe_community_2017}.
Nevertheless, some misconceptions remain and are still promoted. Here we
address some of the more salient ones, in an effort to dispel
them. These misconceptions are not uniformly shared, and those who pay
close attention to the literature will likely find few surprises
here. However, it is possible that many researchers employing community
detection are simply unaware of the issues with the methods being used.
Perhaps even more commonly, there are those who are in fact aware of
them, but not of their actual solutions, or the fact that some supposed
countermeasures are ineffective.
Throughout the following we will avoid providing black box recipes to be
followed, and instead try as much as possible to frame the issues within
a theoretical framework, such that the criticisms and solutions can be
justified in a principled manner.
We will set the stage by making a fundamental distinction between
``descriptive'' and ``inferential'' community detection approaches. As
others have emphasized before~\cite{schaub_many_2017}, community
detection can be performed with many goals in mind, and this will
dictate which methods are most appropriate. We will provide a simple
``litmus test'' that can be used to determine which overall approach is
more adequate, based on whether our goal is to seek inferential
interpretations. We will then move to a more focused critique of the
method that is arguably the most widely employed
--- modularity maximization. This method has an emblematic character,
since it contains all possible pitfalls of using descriptive methods for
inferential aims. We will then follow with a discussion of myths,
pitfalls, and half-truths that obstruct a more effective analysis of
community structure in networks.
(We will not give a thorough technical introduction to inferential
community detection methods, which can be obtained instead in
Ref.~\cite{peixoto_bayesian_2019}. For a practical guide on how to use
various inferential methods, readers are referred to the detailed
HOWTO\footnote{Available at
\url{https://graph-tool.skewed.de/static/doc/demos/inference/inference.html}.}
available as part of the \texttt{graph-tool} Python
library~\cite{peixoto_graph-tool_2014}.)
\section{Descriptive vs. inferential community detection}
At a very fundamental level, community detection methods can be divided
into two main categories: ``descriptive'' and ``inferential.''
\textbf{Descriptive methods} attempt to find communities according to
some context-dependent notion of a good division of the network into
groups. These notions are based on the patterns that can be identified
in the network via an exhaustive algorithm, but without taking into
consideration the possible rules that were used to create them. These
patterns are used only to \emph{describe} the network, not to explain
it. Usually, these approaches do not articulate precisely what
constitutes community structure to begin with, and focus instead only on
how to detect them. For this kind of method, concepts of statistical
significance, parsimony and generalizability are usually not invoked.
\textbf{Inferential methods}, on the other hand, start with an explicit
definition of what constitutes community structure, via a generative
model for the network. This model describes how a \emph{latent}
(i.e. not observed) partition of the nodes would affect the placement of
the edges. The inference consists in reversing this procedure to
determine which node partitions are more likely to have been responsible
for the observed network. The result of this is a ``fit'' of a model to
data, which can be used as a tentative explanation of how it came to
be. The concepts of statistical significance, parsimony and
generalizability arise naturally and can be quantitatively assessed in
this context.
Descriptive community detection methods are by far the most numerous,
and those that are in most widespread use. However, this contrasts with
the current state-of-the-art, which is composed in large part of
inferential approaches. Here we point out the major differences between
them and discuss how to decide which is more appropriate, and also why
one should in general favor the inferential varieties whenever the
objective is to derive generative interpretations from data.
\subsection{Describing vs. explaining}
We begin by observing that descriptive clustering approaches are the
methods of choice in certain contexts. For instance, such approaches
arise naturally when the objective is to divide a network into two or
more parts as a means to solve a variety of optimization
problems. Arguably, the most classic example of this is the design of
Very Large Scale Integrated Circuits (VLSI)~\cite{baker_cmos_2010}. The
task is to combine up to billions of transistors into a single
physical microprocessor chip. Transistors that connect to each other
must be placed together to take less space, consume less power, reduce
latency, and reduce the risk of cross-talk with other nearby
connections. To achieve this, the initial stage of a VLSI process
involves the partitioning of the circuit into many smaller modules with
few connections between them, in a manner that enables their efficient
spatial placement, i.e. by positioning the transistors in each module
close together and those in different modules farther apart.
Another notable example is parallel task scheduling, a problem that
appears in computer science and operations research. The objective is to
distribute processes (i.e. programs, or tasks in general) between
different processors, so they can run at the same time. Since processes
depend on the partial results of other processes, this forms a
dependency network, which then needs to be divided such that the number
of dependencies across processors is minimized. The optimal division is
the one where all tasks are able to finish in the shortest time
possible.
Both examples above, and others, have motivated a large literature on
``graph partitioning'' dating back to the
70s~\cite{kernighan_graph_1969,kernighan_efficient_1970,bichot_graph_2013},
which covers a family of problems that play an important role in
computer science and algorithmic complexity theory.
Although reminiscent of graph partitioning, and sharing with it many
algorithmic similarities, community detection is used more broadly with
a different
goal~\cite{fortunato_community_2010,fortunato_community_2016}. Namely,
the objective is to perform \emph{data analysis}, where one wants to
extract scientific understanding from empirical observations. The
communities identified are usually directly used for representation
and/or interpretation of the data, rather than as a mere device to solve
a particular optimization problem. In this context, a merely descriptive
approach will fail to give us meaningful insight into the data, and
can be misleading, as we will discuss in the following.
\begin{figure}[t!]
\begin{tabular}{cc}
{\larger Description} & {\larger Explanation} \\
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=.49\textwidth]{figs/face.png}};
\node[green] (eyes) at (3.2,4.7){};
\node[green] (eyest) at (1.2,6.8){\textbf{Eye}};
\draw[green,ultra thick,-to] (eyest) -> (eyes);
\node[green] (nose) at (4.3,4.3){};
\node[green] (noset) at (6.2,5.8){\textbf{Nose}};
\draw[green,ultra thick,-to] (noset) -> (nose);
\node[green] (mouth) at (3.8,3.5){};
\node[green] (moutht) at (1.2,1.8){\textbf{Mouth}};
\draw[green,ultra thick,-to] (moutht) -> (mouth);
\end{tikzpicture}
&
\includegraphics[width=.49\textwidth]{figs/mountain.png}\\
A face & A mountain\\
\includegraphics[width=.49\textwidth]{figs/descriptive.pdf} &
\includegraphics[width=.49\textwidth]{figs/observed_random.pdf} \\
A network with 13 communities &
\begin{minipage}{.49\textwidth}
A random network with a prescribed degree sequence, and no
community structure.
\end{minipage}
\end{tabular}
\caption{Difference between descriptive and inferential approaches to
data analysis. As an analogy, on the top row we see two
representations of the \emph{Cydonia Mensae} region on Mars. On the
top left is a descriptive account of what we see in the picture,
namely a face. On the top right is an inferential representation of
what lies behind it, namely a mountain. (We show a more recent image
of the same region with a higher resolution to represent an
inferential interpretation of the figure on the left.) More
concretely, on the bottom row we see two representations of the same
network. On the bottom left we see a descriptive division into 13
assortative communities. On the bottom right we see an inferential
representation as a degree-constrained random network, with no
communities, since this is a more likely model of how this network was
formed (see Fig.~\ref{fig:descriptive}). \label{fig:infvsdesc}}
\end{figure}
We illustrate the difference between descriptive and inferential
approaches in Fig.~\ref{fig:infvsdesc}. We first make an analogy with
the famous ``face'' seen on images of the \emph{Cydonia Mensae} region
of the planet Mars. A merely descriptive account of the image can be
made by identifying the facial features seen, which most people
immediately recognize. However, an inferential description of the same
image would seek instead to \emph{explain} what is being seen. The
process of explanation must invariably involve at its core an
application of the law of parsimony, or \textbf{Occam's razor}. This
principle predicates that when considering two hypotheses compatible
with an observation, the simplest one must prevail. Employing this logic
results in the conclusion that what we are seeing is in fact a regular
mountain, without denying that it looks like a face in that picture, but
just accidentally. In other words, the ``facial'' description is not
useful as an explanation, as it emerges out of random features rather
than exposing any underlying mechanism.
Going out of the analogy and back to the problem of community detection,
in the bottom of Fig.~\ref{fig:infvsdesc} we see a descriptive and an
inferential account of an example network. The descriptive one is a
division of the nodes into 13 assortative communities, which would be
identified with many descriptive community detection methods available in
the literature. Indeed, we can inspect visually that these groups form
assortative communities,\footnote{See Sec.~\ref{sec:obvious} for
possible pitfalls with relying on visual inspections.} and most people
would agree that these communities are really there, according to most
definitions in use: these are groups of nodes with many more internal
edges than external ones. However, an inferential account of the same
network would reveal something else altogether. Specifically, it would
explain this network as the outcome of a process where the edges are
placed at random, without the existence of any communities. The
communities that we see in the bottom left of Fig.~\ref{fig:infvsdesc} are just a
byproduct of this random process, and therefore carry no explanatory
power. In fact, this is exactly how the network in this example was
generated,
i.e. by choosing a specific degree sequence and connecting the edges
uniformly at random.
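This generative process can be reproduced with a minimal Python sketch (a toy illustration using the general-purpose \texttt{networkx} library rather than any method advocated here, with the same degree sequence as in Fig.~\ref{fig:descriptive}; the choice of the greedy modularity algorithm as the descriptive method is ours):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Same degree sequence as in the figure: 13 nodes of degree 20
# and 230 nodes of degree 1.
degrees = [20] * 13 + [1] * 230

# Pair the stubs uniformly at random (configuration model), then simplify.
G = nx.configuration_model(degrees, seed=42)
G = nx.Graph(G)                              # collapse parallel edges
G.remove_edges_from(nx.selfloop_edges(G))

# A descriptive, modularity-based method still reports many communities,
# even though the edges were placed without any community structure.
communities = greedy_modularity_communities(G)
print(len(communities))
```

Despite the maximally random placement of the edges, such a descriptive method will typically report a number of apparent communities comparable to the number of hubs.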
\begin{figure}[t]
\begin{tabular}{cc}
\multicolumn{2}{c}{(a) Generative process (random stub matching)}\\
\multicolumn{2}{c}{\smaller $13$ nodes with degree $20$ and $230$ nodes with degree $1$}\\
\multicolumn{2}{c}{\includegraphics[width=\textwidth]{figs/generation-stubs.pdf}} \\
\multicolumn{2}{c}{\smaller Stubs paired uniformly at random}\\
\multicolumn{2}{c}{\includegraphics[width=\textwidth]{figs/generation-placement.pdf}} \\
(b) Observed network & (c) New sample \\
\includegraphics[width=.49\textwidth]{figs/descriptive.pdf} &
\includegraphics[width=.49\textwidth]{figs/descriptive_shuffled.pdf}\\
\end{tabular}
\caption{Descriptive community detection finds a partition of the
network according to an arbitrary criterion that bears in general no
relation to the rules that were used to generate it. In (a) is shown
the generative model we consider, where first a degree sequence is
given to the nodes (forming ``stubs'', or ``half-edges'') which then
are paired uniformly at random, forming a graph. In (b) is shown a
realization of this model. The node colors show the partition found
with virtually any descriptive community detection method. In
(c) is shown another network sampled from the same model, together
with the same partition found in (b), which is completely uncorrelated
with the new apparent communities seen, since they are the mere
byproduct of the random placement of the edges. An inferential
approach would find only a single community in both (b) and (c), since
no partition of the nodes is relevant for the underlying generative
model.\label{fig:descriptive}}
\end{figure}
In Fig.~\ref{fig:descriptive}(a) we illustrate in more detail how the
network in Fig.~\ref{fig:infvsdesc} was generated: The degrees of the
nodes are fixed, forming ``stubs'' or ``half-edges,'' which are then
paired uniformly at random forming the edges of the
network.\footnote{This uniform pairing will typically also result in the
occurrence of pairs of nodes of degree one connected together in their
own connected component. We consider an instance of the process where
this does not happen for visual clarity in the figure, but without
sacrificing its main message.} In Fig.~\ref{fig:descriptive}(b), like in
Fig.~\ref{fig:infvsdesc}, the node colors show the partition found with
descriptive community detection methods. However, this network division
carries no explanatory power beyond what is contained in the degree
sequence of the network, since it is generated otherwise uniformly at
random. This becomes evident in Fig.~\ref{fig:descriptive}(c), where we
show another network sampled from the same generative process,
i.e. another random pairing, but partitioned according to the same
division as in Fig.~\ref{fig:descriptive}(b). Since the nodes are paired
uniformly at random, constrained only by their degree, this will create
new apparent ``communities'' that are always uncorrelated with one
another. Like the ``face'' on Mars, they can be seen and described, but
they cannot (plausibly) explain.
We emphasize that the communities found in Fig.~\ref{fig:descriptive}(b)
are indeed really there from a descriptive point of view, and they can
in fact be useful for a variety of tasks. For example, the \emph{cut}
given by the partition, i.e. the number of edges that go between
different groups, is only 13, which means that we need only to remove
this number of edges to break the network into (in this case) 13 smaller
components. Depending on context, this kind of information can be used
to prevent a widespread epidemic, hinder undesired communication, or, as
we have already discussed, distribute tasks among processors and design
a microchip. However, what these communities \emph{cannot} be used for
is to \emph{explain} the data. In particular, a conclusion that would be
completely incorrect is that the nodes that belong to the same group
would have a larger probability of being connected between
themselves. As shown in Fig.~\ref{fig:descriptive}(a), this is clearly not
the case, as the observed ``communities'' arise by pure chance, without
any preference between the nodes.
\subsection{To infer or to describe? A litmus test}\label{sec:infer}
Given the above differences, and the fact that both inferential and
descriptive approaches have their uses depending on context, we are left
with the question: Which approach is more appropriate for a given task
at hand? In order to help answer this question, for any given
context, it is useful to consider the following ``litmus test'':\\
\tikzstyle{background rectangle}=[thin,draw=black]
\begin{savenotes}
\noindent\begin{tikzpicture}[show background rectangle]
\node[align=justify, text width=.92\linewidth, inner sep=1em]{
Q: ``Would the usefulness of our conclusions change if we learn, after
obtaining the communities, that the network being analyzed is
maximally random?''\\
If the answer is ``yes,'' then an inferential approach is needed.\\
If the answer is ``no,'' then an inferential approach is not required.
};
\node[xshift=3ex, yshift=-0.7ex, overlay, fill=white, draw=white, above
right] at (current bounding box.north west) {
\textit{\textbf{Litmus test: to infer or to describe?}}
};
\end{tikzpicture}
\end{savenotes}
If the answer to the above question is ``yes,'' then an inferential
approach is warranted, since the conclusions depend on an interpretation
of how the data were generated. Otherwise, a purely descriptive approach
may be appropriate since considerations about generative processes are
not relevant.
It is important to understand that the relevant question in this context
is not whether the network being analyzed is \emph{actually} maximally
random,\footnote{``Maximally random'' here means that, conditioned on
some global or local constraints, like the number of edges or the node
degrees, the placement of the edges is done uniformly at random. In
other words, the network is sampled from a maximum-entropy model
constrained in a manner unrelated to community structure, such that
whatever communities we may ascribe to the nodes could have played no
role in the placement of the edges.} since this is rarely the case for
empirical
networks. Instead, considering this hypothetical scenario serves as a
test to evaluate whether our task requires us to separate actual
latent community structure (i.e. the structure responsible for the
network formation) from apparent structure that arises completely out
of random fluctuations, and hence carries no explanatory
power. Furthermore, most empirical networks, even if not maximally
random, are, like most interesting data, better explained by a mixture
of structure and randomness, and a method that cannot tell the two
apart cannot be used for inferential purposes.
Returning to the VLSI and task scheduling examples we considered in the
previous section, it is clear that the answer to the litmus test above
would be ``no,'' since it hardly matters how the network was generated
and how we should interpret the partition found, as long as the
integrated circuit can be manufactured and function efficiently, or the
tasks finish in the minimal time. Interpretation and explanations are
simply not the primary goals in these cases.\footnote{Although this is
certainly true at a first instance, we can also argue that properly
understanding \emph{why} a certain partition was possible in the first
place would be useful for reproducibility and to aid the design of
future instances of the problem. For these purposes, an inferential
approach would be more appropriate.}
However, it is safe to say that in network data analyses very often the
answer to the above question would be ``yes.'' Typically,
community detection methods are used to try to understand the overall
large-scale network structure, determine the prevalent mixing patterns,
make simplifications and generalizations, all in a manner that relies on
statements about what lies behind the data, e.g. whether nodes were more
or less likely to be connected to begin with. A majority of conclusions
reached would be severely undermined if one were to discover that the
underlying network is in fact completely random. This means that these
analyses are at a grave peril when using purely descriptive methods,
since they are likely to be \emph{overfitting} the data ---
i.e. confusing randomness with underlying generative structure.\footnote{
We emphasize that the concept of overfitting is intrinsically tied with
an inferential goal, i.e. one that involves interpretations about an
underlying distribution of probability relating to the network
structure. The partitioning of a graph with the objective of producing
an efficient chip design cannot overfit, because it does not elicit
an inferential interpretation. Therefore, whenever we mention that a
method overfits, we refer only to the situation where it is being
employed with an inferential goal, and that it incorporates a level of
detail that cannot be justified by the statistical evidence available in
the data.}
\subsection{Inferring, explaining, and compressing}\label{sec:inference}
Inferential approaches to community detection (see
Ref.~\cite{peixoto_bayesian_2019} for a detailed introduction) are
designed to provide explanations for network data in a principled
manner. They are based on the formulation of generative models that
include the notion of community structure in the rules of how the edges
are placed. More formally, they are based on the definition of a
likelihood $P(\bm{A}|\bm{b})$ for the network $\bm{A}$ conditioned on a partition
$\bm{b}$, which describes how the network could have been generated,
and the inference is obtained via the posterior distribution,
according to Bayes' rule, i.e.
\begin{equation}\label{eq:bayes}
P(\bm{b}|\bm{A}) = \frac{P(\bm{A}|\bm{b})P(\bm{b})}{P(\bm{A})},
\end{equation}
where $P(\bm{b})$ is the prior probability for a partition $\bm{b}$. The
inference procedure consists in sampling from or maximizing this
distribution, which yields the most likely division(s) of the network
into groups, according to the statistical evidence available in the data
(see Fig.~\ref{fig:dcsbm}(c)).
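As a worked toy example of Eq.~\ref{eq:bayes} (with assumed parameters, and a much simpler planted-partition likelihood instead of the DC-SBM discussed below), consider a triangle on three nodes, with a uniform prior over its five possible partitions:

```python
# Toy illustration of Bayes' rule over partitions (assumed parameters;
# a simple planted-partition likelihood, not the DC-SBM itself).
edges = {(0, 1), (0, 2), (1, 2)}   # observed network A: a triangle
partitions = [                     # all 5 partitions b of three nodes
    (0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (0, 1, 2),
]
p_in, p_out = 0.9, 0.1             # within/between-group edge probabilities

def likelihood(b):
    """P(A|b): independent Bernoulli probability for each node pair."""
    L = 1.0
    for i in range(3):
        for j in range(i + 1, 3):
            p = p_in if b[i] == b[j] else p_out
            L *= p if (i, j) in edges else 1 - p
    return L

prior = 1 / len(partitions)                                  # uniform P(b)
evidence = sum(likelihood(b) * prior for b in partitions)    # P(A)
posterior = {b: likelihood(b) * prior / evidence for b in partitions}

best = max(posterior, key=posterior.get)    # most likely division
print(best, round(posterior[best], 3))
```

For the fully connected triangle the single-group partition dominates the posterior: the statistical evidence does not support any finer division.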
Overwhelmingly, the models used to infer communities are variations of
the stochastic block model (SBM)~\cite{holland_stochastic_1983}, where
in addition to the node partition, it takes the probability of edges
being placed between the different groups as an additional set of
parameters. A particularly expressive variation is the degree-corrected
SBM (DC-SBM)~\cite{karrer_stochastic_2011}, with a marginal likelihood
given by~\cite{peixoto_nonparametric_2017}
\begin{equation}\label{eq:dcsbm-marginal}
P(\bm{A}|\bm{b}) = \sum_{\bm{e}, \bm k}P(\bm{A}|\bm{k},\bm{e},\bm{b})P(\bm{k}|\bm{e},\bm{b})P(\bm{e}|\bm{b}),
\end{equation}
where $\bm{e}=\{e_{rs}\}$ is a matrix with elements $e_{rs}$ specifying how
many edges go between groups $r$ and $s$, and $\bm k=\{k_i\}$ are the
degrees of the nodes. Therefore, this model specifies that, conditioned
on a partition $\bm{b}$, first the edge counts $\bm{e}$ are sampled from a
prior distribution $P(\bm{e}|\bm{b})$, followed by the degrees from the prior
$P(\bm{k}|\bm{e},\bm{b})$, and finally the network is wired together according
to the probability $P(\bm{A}|\bm{k},\bm{e},\bm{b})$, which respects the constraints
given by $\bm k$, $\bm{e}$, and $\bm{b}$. See Fig.~\ref{fig:dcsbm}(a) for an
illustration of this process.
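The final wiring step $P(\bm{A}|\bm{k},\bm{e},\bm{b})$ can be sketched as group-wise stub matching. The following is a simplified toy implementation (not the actual sampler of any library, and taking $\bm{e}$ and $\bm k$ as fixed, mutually consistent inputs rather than sampling them from their priors):

```python
import random

def sample_dcsbm_edges(b, e, k, seed=0):
    """Toy sketch of the wiring step: given a partition b, edge counts
    e[r][s] between groups r and s (e[r][r] counts within-group edges)
    and degrees k, pair half-edges ("stubs") uniformly at random while
    respecting the group-level constraints."""
    rng = random.Random(seed)
    stubs = {}                       # group -> list of stubs (node repeats)
    for i, r in enumerate(b):
        stubs.setdefault(r, []).extend([i] * k[i])
    for r in stubs:
        rng.shuffle(stubs[r])
    edges = []
    B = len(stubs)
    for r in range(B):
        for s in range(r, B):
            for _ in range(e[r][s]):
                edges.append((stubs[r].pop(), stubs[s].pop()))
    return edges

# Two groups of three degree-2 nodes; two edges inside each group and
# two edges between the groups (a consistent choice of b, e and k).
print(sample_dcsbm_edges([0, 0, 0, 1, 1, 1], [[2, 2], [2, 2]], [2] * 6))
```

By construction, every sampled network matches the prescribed degrees and group-level edge counts exactly.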
\begin{figure}[t]
\begin{tabular}[t]{cccc}
\multicolumn{4}{c}{(a) Generative process}\\[.5em]
\includegraphics[width=.25\textwidth]{figs/example-partition.pdf}&
\includegraphics[width=.25\textwidth]{figs/example-bg.pdf}&
\includegraphics[width=.25\textwidth]{figs/example-degs.pdf}&
\includegraphics[width=.25\textwidth]{figs/example-done.pdf}\\
\smaller Node partition, $P(\bm{b})$ &
\smaller Edges between groups, $P(\bm{e}|\bm{b})$ &
\smaller Degrees, $P(\bm{k}|\bm{e},\bm{b})$ &
\smaller Network, $P(\bm{A}|\bm{k},\bm{e},\bm{b})$
\end{tabular}
\vspace{1em}
\begin{tabular}[c]{ccccc}
\multicolumn{5}{c}{(b) Inference procedure}\\[.5em]
\multirow[t]{2}{*}[-4.5em]{\includegraphics[width=.25\textwidth]{figs/example-observed.pdf}}&
\includegraphics[width=.16\textwidth]{figs/example-post-0.pdf}&
\includegraphics[width=.16\textwidth]{figs/example-post-1.pdf}&
\includegraphics[width=.16\textwidth]{figs/example-post-2.pdf}&
\multirow[t]{2}{*}[-4.5em]{\includegraphics[width=.25\textwidth]{figs/example-inf.pdf}}\\
&
\includegraphics[width=.16\textwidth]{figs/example-post-0.pdf}&
\includegraphics[width=.16\textwidth]{figs/example-post-1.pdf}&
\includegraphics[width=.16\textwidth]{figs/example-post-2.pdf}&
\\
\smaller Observed network $\bm{A}$ &
\multicolumn{3}{c}{\smaller Posterior distribution $P(\bm{b}|\bm{A})$} &
\smaller Marginal probabilities
\end{tabular}
\caption{Inferential community detection considers a generative
process (a), where the unobserved model parameters are sampled from
prior distributions. In the case of the DC-SBM, these are the priors
for the partition $P(\bm{b})$, the number of edges between groups
$P(\bm{e}|\bm{b})$, and the node degrees, $P(\bm{k}|\bm{e},\bm{b})$. Finally, the
network itself is sampled from its model, $P(\bm{A}|\bm{k},\bm{e},\bm{b})$. The
inference procedure (b) consists in inverting the generative process
given an observed network $\bm{A}$, corresponding to a posterior
distribution $P(\bm{b}|\bm{A})$, which then can be summarized by a marginal
probability that a node belongs to a given group (represented as pie
charts on the nodes).\label{fig:dcsbm}}
\end{figure}
This model formulation includes maximally random networks as special
cases
--- indeed the model we considered in Fig.~\ref{fig:descriptive}
corresponds exactly to the DC-SBM with a single group. Together with the
Bayesian approach, the use of this model will inherently favor a more
parsimonious account of the data, whenever it does not warrant a more
complex description
--- amounting to a formal implementation of Occam's razor. This is best
seen by making a formal connection with information theory, and noticing
that we can write the numerator of Eq.~\ref{eq:bayes} as
\begin{equation}
P(\bm{A}|\bm{b})P(\bm{b}) = 2^{-\Sigma(\bm{A},\bm{b})},
\end{equation}
where the quantity $\Sigma(\bm{A},\bm{b})$ is known as the \emph{description
length}~\cite{rissanen_modeling_1978,grunwald_minimum_2007,rissanen_information_2010}
of the network. It is computed as\footnote{Note that the sum in
Eq.~\ref{eq:dcsbm-marginal} reduces to a single term, since only one is non-zero
given a fixed network $\bm{A}$.}
\begin{equation}\label{eq:dl_dcsbm}
\Sigma(\bm{A},\bm{b}) = \underset{\mathcal{D}(\bm{A}|\bm{k},\bm{e},\bm{b})}{\underbrace{-\log_2P(\bm{A}|\bm{k},\bm{e},\bm{b})}}\,
\underset{\mathcal{M}(\bm{k},\bm{e},\bm{b})}{\underbrace{-\log_2P(\bm{k}|\bm{e},\bm{b}) - \log_2 P(\bm{e}|\bm{b}) - \log_2P(\bm{b})}}.
\end{equation}
The second set of terms $\mathcal{M}(\bm{k},\bm{e},\bm{b})$ in the above
equation quantifies the amount of information in bits necessary to
encode the parameters of the model.\footnote{If a value $x$ occurs with
probability $P(x)$, this means that in order to transmit it in a
communication channel we need to answer at least $-\log_2P(x)$ yes-or-no
questions to decode its value exactly. Therefore we need to answer one
yes-or-no question for a value with $P(x)=1/2$, zero questions for
$P(x)=1$, and $\log_2N$ questions for uniformly distributed values with
$P(x)=1/N$. This value is called ``information content,'' and
essentially measures the degree of ``surprise'' when encountering a
value sampled from a distribution. See
Ref.~\cite{mackay_information_2003} for a thorough but accessible
introduction to information theory and its relation to inference.} The
first term $\mathcal{D}(\bm{A}|\bm{k},\bm{e},\bm{b})$ determines how many bits are
necessary to encode the network itself, once the model parameters are
known. This means that if Bob wants to communicate to Alice the
structure of a network $\bm{A}$, he first needs to transmit
$\mathcal{M}(\bm{k},\bm{e},\bm{b})$ bits of information to describe the
parameters $\bm{b}$, $\bm{e}$, and $\bm{k}$, and then finally transmit the
remaining $\mathcal{D}(\bm{A}|\bm{k},\bm{e},\bm{b})$ bits to describe the network
itself. Then, Alice will be able to understand the message by first
decoding the parameters $(\bm{k},\bm{e},\bm{b})$ from the first part of the
message, and using that knowledge to obtain the network $\bm{A}$ from the
second part, without any errors.
What the above connection shows is that there is a formal equivalence
between \emph{inferring} the communities of a network and
\emph{compressing} it. This happens because finding the most likely
partition $\bm{b}$ from the posterior $P(\bm{b}|\bm{A})$ is equivalent to
minimizing the description length $\Sigma(\bm{A},\bm{b})$ used by Bob to
transmit a message to Alice containing the whole network.
Data compression amounts to a formal implementation of Occam's razor
because it penalizes models that are too complicated: if we want to
describe a network using many communities, then the model part of the
description length $\mathcal{M}(\bm{k},\bm{e},\bm{b})$ will be large, and Bob
will need many bits to transmit the model parameters to Alice. However,
increasing the complexity of the model will also \emph{reduce} the first
term $\mathcal{D}(\bm{A}|\bm{k},\bm{e},\bm{b})$, since there are fewer networks
that are compatible with the bigger set of constraints, and hence Bob
will need a shorter second part of the message to convey the network
itself once the parameters are known. Compression (and hence inference),
therefore, is a balancing act between model complexity and quality of
fit, where an increase in the former is \emph{only} justified when it
results in \emph{an even larger} increase of the latter, such that the
total description length is minimized.
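The same balancing act appears in a setting much simpler than networks. As a hypothetical illustration of the two-part structure of Eq.~\ref{eq:dl_dcsbm}, consider transmitting a binary string of length $n$ containing $k$ ones: Bob first sends the ``model'' (the value of $k$, costing $\log_2(n+1)$ bits) and then the ``data'' (which of the $\binom{n}{k}$ compatible strings occurred). This beats the naive $n$-bit encoding only when the string is not maximally random:

```python
from math import comb, log2

def two_part_dl(n, k):
    """Two-part description length of a binary string of length n with
    k ones: a model part (transmitting k) plus a data part (identifying
    the string among all those compatible with k)."""
    model = log2(n + 1)         # M: which of the n + 1 possible values of k
    data = log2(comb(n, k))     # D: which of the C(n, k) compatible strings
    return model + data

n = 100
print(two_part_dl(n, 5))    # structured string: well below the naive n bits
print(two_part_dl(n, 50))   # typical random string: no compression possible
```

For a typical maximally random string ($k \approx n/2$) the two-part code is slightly \emph{longer} than $n$ bits, so the extra model structure is not warranted, exactly as with fitting communities to a maximally random network.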
The reason why the compression approach avoids overfitting the data is
due to a powerful fact from information theory, known as Shannon's
source coding theorem~\cite{shannon_mathematical_1948}, which states
that it is impossible to compress data sampled from a distribution
$P(x)$ using fewer bits per symbol than the entropy of the distribution,
$H=-\sum_xP(x)\log_2P(x)$ --- indeed, it is a remarkable fact from
Shannon's theory that a statement about a single sample (how many bits
we need to describe it) is intrinsically connected to the distribution
from which it came. Therefore, as the data become large, it also becomes
impossible to compress it more than using a code that is optimal
according to its true distribution. In our context, this means that it
is impossible, for example, to compress a maximally random network using
an SBM with more than one group.\footnote{More accurately, this becomes
impossible only when the network becomes asymptotically infinite; for
finite networks the probability of compression is only vanishingly
small.} Thus, when encountering a network like the one in
Fig.~\ref{fig:descriptive}, inferential methods will detect a
single community comprising all nodes in the network, since any further
division does not provide any increased compression, or equivalently, no
augmented explanatory power. From the inferential point of view, a
partition like Fig.~\ref{fig:descriptive}(b) \emph{overfits} the data,
since it incorporates irrelevant random features --- a.k.a. ``noise''
--- into its description.
\begin{figure}[h]
\begin{tabular}{cc}
(a) Observed network & (b) New sample \\
\includegraphics[width=.49\textwidth]{figs/inferential.pdf} &
\includegraphics[width=.49\textwidth]{figs/inferential_sampled.pdf}
\end{tabular}
\caption{Inferential community detection aims to find a partition of
the network according to a fit of a generative model that can explain
its structure. In (a) is shown a network sampled from a stochastic
block model (SBM) with 6 groups, and where the group assignments were
hidden from view. The node colors show the groups found via Bayesian
inference of the SBM. In
(b) is shown another network sampled from the same SBM, together
with the same partition found in (a), showing that it carries a
substantial explanatory power --- very differently from the example in
Fig.~\ref{fig:descriptive} (c).\label{fig:inferential}}
\end{figure}
In Fig.~\ref{fig:inferential}(a) is shown an example of the results
obtained with an inferential community detection algorithm, for a
network sampled from the SBM. As shown in Fig.~\ref{fig:inferential}(b),
the obtained partitions are still valid when carried over to an
independent sample of the model, because the algorithm is capable of
separating the general underlying pattern from the random
fluctuations. As a consequence of this separability, this kind of
algorithm does not find communities in maximally random networks, which
are composed only of ``noise.''
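This preference for the single-group description of a random network can be made concrete with a small two-part coding computation. The sketch below is a deliberately simplified microcanonical SBM description length (flat encodings for the partition and the group-pair edge counts; not the full formulation used in the literature): for a uniformly random graph the one-group description wins, while for a graph with genuinely planted structure the two-group description compresses better.

```python
import itertools, math, random

random.seed(42)

def log2_comb(n, k):
    # log2 of the binomial coefficient, via lgamma to avoid huge integers
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(2)

def sbm_dl_bits(n, edges, b, B):
    # simplified two-part description length (in bits) under a
    # microcanonical SBM: M = partition + group-pair edge counts,
    # D = which edges exist, given those counts
    sizes = [0] * B
    for r in b:
        sizes[r] += 1
    e = [[0] * B for _ in range(B)]
    for u, v in edges:
        r, s = min(b[u], b[v]), max(b[u], b[v])
        e[r][s] += 1
    M = n * math.log2(B)                 # one group label per node
    D = 0.0
    for r in range(B):
        for s in range(r, B):
            pairs = sizes[r] * sizes[s] if r != s else sizes[r] * (sizes[r] - 1) // 2
            M += math.log2(pairs + 1)    # transmit the edge count e_rs
            D += log2_comb(pairs, e[r][s])
    return D + M

n, E = 200, 500
all_pairs = list(itertools.combinations(range(n), 2))

# maximally random graph: a 2-group split costs more bits than it saves
edges_er = random.sample(all_pairs, E)
b_rand = [random.randrange(2) for _ in range(n)]
dl_er_1 = sbm_dl_bits(n, edges_er, [0] * n, 1)
dl_er_2 = sbm_dl_bits(n, edges_er, b_rand, 2)

# strongly assortative graph: the planted split pays for itself
within = [p for p in all_pairs if (p[0] < n // 2) == (p[1] < n // 2)]
between = [p for p in all_pairs if (p[0] < n // 2) != (p[1] < n // 2)]
edges_pl = random.sample(within, 480) + random.sample(between, 20)
b_true = [0 if i < n // 2 else 1 for i in range(n)]
dl_pl_1 = sbm_dl_bits(n, edges_pl, [0] * n, 1)
dl_pl_2 = sbm_dl_bits(n, edges_pl, b_true, 2)
```

For the random graph, the roughly $200$ bits needed to transmit the partition are never recovered by the data part; for the assortative graph, the split reduces the data part by far more than the model part grows.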
The concept of compression is more generally useful than just avoiding
overfitting within a class of models. In fact, the description length
gives us a model-agnostic objective criterion to compare different
hypotheses for the data generating process according to their
plausibility. Namely, since Shannon's theorem tells us that the best
compression can be achieved asymptotically only with the true model,
then if we are able to find a description length for a network using a
particular model, regardless of how it is parametrized, this also means
that we have automatically found an \emph{upper bound} on the optimal
compression achievable. By formulating different generative models and
computing their description length, we have not only an objective
criterion to compare them against each other, but we also have a way to
limit further what can be obtained with any other model. The result is
an overall scale on which different models can be compared, as we move
closer to the limit of what can be uncovered for a particular network at
hand.
As an example, in Fig.~\ref{fig:compressed} we show the description
length values with some models obtained for a protein-protein
interaction network for the organism \emph{Meleagris gallopavo} (wild
turkey)~\cite{zitnik_evolution_2019}. In particular, we can see that
with the DC-SBM/TC (a version of the model with the addition of triadic
closure edges~\cite{peixoto_disentangling_2022}) we can achieve a
description length that is far smaller than what would be possible with
networks sampled from either the Erd\H{o}s-Rényi, configuration, or
planted partition (a SBM with strictly assortative
communities~\cite{zhang_statistical_2020}) models, meaning that the
inferred model is much closer to the true process that actually
generated this network than the alternatives. Naturally, the actual
process that generated this network is different from the DC-SBM/TC, and
it likely involves, for example, mechanisms of node duplication which
are not incorporated into this rather simple
model~\cite{pastor-satorras_evolving_2003}. However, to the extent that
the true process leaves statistically significant traces in the network
structure,\footnote{Visually inspecting Fig.~\ref{fig:compressed}
reveals what seem to be local symmetries in the network structure,
presumably due to gene duplication. These patterns are not exploited by
the SBM description, and indeed point to a possible path for further
compression.} computing the description length according to it should
provide further compression when compared to the
alternatives.\footnote{In Sec.~\ref{sec:believe} we discuss further the
usefulness of models like the SBM despite the fact we know they are not
the true data generating process.} Therefore, we can try to extend or
reformulate our models to incorporate features that we hypothesize to be
more realistic, and then verify if this is in fact the case, knowing that
whenever we find a more compressive model, it is moving closer to the
true model --- or at least to what remains detectable from it for the
finite data.
\FloatBarrier
\begin{figure}[h]
\begin{tabular}{c}
\includegraphics[width=.8\textwidth]{figs/tcompressed-1.png}\\
\includegraphics[width=.8\textwidth]{figs/compression.pdf}
\end{tabular}
\caption{Compression points towards the true model. \textbf{Top:}
Protein-protein interaction network for the organism \emph{Meleagris
gallopavo}~\cite{zitnik_evolution_2019}. The node colors indicate the
best partition found with the
DC-SBM/TC~\cite{peixoto_disentangling_2022} (there are more groups than
colors, so some colors are repeated), and the edge colors indicate
whether they are attributed to triadic closure (red) or the DC-SBM
(black). \textbf{Bottom:} Description length values according to
different models. The unknown true model must yield a description
length value smaller than the DC-SBM/TC, and no other model should be able
to provide a superior compression that is statistically
significant. \label{fig:compressed}}
\end{figure}
The discussion above glosses over some important technical aspects. For
example, it is possible for two (or, in fact, many) models to have the
same or very similar description length values. In this case, Occam's
razor fails as a criterion to select between them, and we need to
consider them collectively as equally valid hypotheses. This means, for
example, that we would need to average over them when making specific
inferential statements~\cite{peixoto_revealing_2021} --- selecting
between them arbitrarily can be interpreted as a form of
overfitting. Furthermore, there is obviously no guarantee that the true
model can actually be found for any particular data. This is only
possible in the asymptotic limit of ``sufficient data,'' which will vary
depending on the actual model. Outside of this limit (which is the
typical case in empirical settings, in particular when dealing with
\emph{sparse} networks~\cite{yan_model_2014}), fundamental limits to
inference are unavoidable,\footnote{A very important result in the
context of community detection is the detectability limit of the SBM. As
discovered by Decelle et
al.~\cite{decelle_phase_2011,decelle_asymptotic_2011}, if a sufficiently
large network is sampled from a SBM with a sufficiently weak but
nontrivial structure below a specific threshold, it becomes strictly
impossible to uncover the true model from this sample.} which means in
practice that we will always have limited accuracy and some amount of
error in our conclusions. However, when employing compression, these
potential errors tend towards overly simple explanations, rather than
overly complex ones. Whenever perfect accuracy is not possible, it is
difficult to argue in favor of a bias in the opposite direction.
We emphasize that it is not possible to ``cheat'' when doing
compression. For any particular model, the description length will have
the same form
\begin{equation}
\Sigma(\bm{A},\bm\theta) = \mathcal{D}(\bm{A}|\bm\theta) + \mathcal{M}(\bm\theta),
\end{equation}
where $\bm\theta$ is some arbitrary set of parameters. If we constrain
the model such that it becomes possible to describe the data with a
number of bits $\mathcal{D}(\bm{A}|\bm\theta)$ that is very small, this can
only be achieved, in general, by increasing the number of parameters
$\bm\theta$, such that the number of bits $\mathcal{M}(\bm\theta)$
required to describe them will also increase. Therefore, there is no
generic way to achieve compression that bypasses actually formulating a
meaningful hypothesis that matches statistically significant patterns
seen in the data. One may wonder, therefore, if there is an automated
way of searching for hypotheses in a manner that guarantees optimal
compression. The most fundamental way to formulate this question is to
generalize the concept of minimum description length as follows: for any
binary string $\bm x$ (representing any measurable data), we define
$L(\bm x)$ as the length in bits of the shortest computer program that
yields $\bm x$ as an output. The quantity $L(\bm x)$ is known as
Kolmogorov complexity~\cite{cover_elements_1991,li_introduction_2008},
and if we were able to compute it for a binary string representing
an observed network, we would be able to determine the ``true model'' value
in Fig.~\ref{fig:compressed}, and hence know how far we are from the
optimum.\footnote{As mentioned before, this would not necessarily mean
that we would be able to find the actual true model in a practical
setting with perfect accuracy, since for a finite $\bm x$ there could be
many programs of the same minimal length (or close) that generate it.}
Unfortunately, an important result in information theory is that $L(\bm
x)$ is not computable~\cite{li_introduction_2008}. This means that it is
strictly impossible to write a computer program that computes $L(\bm x)$
for any string $\bm x$.\footnote{There are two famous ways to prove
this. One is by contradiction: if we assume that we have a program that
computes $L(\bm x)$, then we could use it as a subroutine to write another
program that outputs $\bm x$ with a length smaller than $L(\bm x)$. The
other involves undecidability: if we enumerate all possible computer
programs in order of increasing length and check if their outputs match
$\bm x$, we will eventually find programs that loop
indefinitely. Deciding whether a program finishes in finite time is
known as the ``halting problem,'' which has been proved to be impossible
to solve. In general, it cannot be determined if a program reaches an
infinite loop in a manner that avoids actually running the program and
waiting for it to finish. Therefore, this rather intuitive algorithm to
determine $L(\bm x)$ will not necessarily finish for any given string
$\bm x$. For more details see
e.g. Refs.~\cite{cover_elements_1991,li_introduction_2008}.} This does not
invalidate using the description length as a criterion to select among
alternative models, but it dashes any hope of fully automating the
discovery of optimal hypotheses.
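The impossibility of ``cheating'' can be seen already in the simplest possible setting. The sketch below is an illustration (not taken from the text): it fits a Bernoulli model to a binary string and charges for the fitted parameter. On structureless fair-coin data the model cost $\mathcal{M}$ cancels any gain in the data part $\mathcal{D}$, while genuinely biased data is compressed well below its raw length.

```python
import math, random

random.seed(0)

def two_part_dl(x):
    # Sigma(x, theta-hat) = D(x | theta-hat) + M(theta-hat):
    # code length of the bits under the fitted Bernoulli parameter,
    # plus the cost of transmitting the parameter itself
    # (the count k, one of n + 1 possibilities)
    n, k = len(x), sum(x)
    M = math.log2(n + 1)
    if k in (0, n):
        return M                 # D = 0: the string is fully determined
    p = k / n
    D = -(k * math.log2(p) + (n - k) * math.log2(1 - p))
    return D + M

n = 4000
fair = [random.random() < 0.5 for _ in range(n)]     # no structure
biased = [random.random() < 0.1 for _ in range(n)]   # compressible

dl_fair, dl_biased = two_part_dl(fair), two_part_dl(biased)
```

Here `dl_fair` stays essentially at $n$ bits (no compression is achieved), while `dl_biased` falls to roughly $nH(0.1) \approx 0.47\,n$ bits.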
\subsection{Role of inferential approaches in community detection}
Inferential approaches based on the SBM have a long history, having been
introduced for the study of social networks in the early
1980s~\cite{holland_stochastic_1983}. But despite this long history, and
having appeared repeatedly in the literature over the
years~\cite{snijders_estimation_1997,nowicki_estimation_2001,tallberg_bayesian_2004,hastings_community_2006,
rosvall_information-theoretic_2007, airoldi_mixed_2008,
clauset_hierarchical_2008,hofman_bayesian_2008, morup_learning_2009}
(also under different names in other contexts
e.g.~\cite{boguna_class_2003,bollobas_phase_2007}), they entered the
mainstream community detection literature rather late, arguably after
the influential paper by Karrer and Newman that introduced the
DC-SBM~\cite{karrer_stochastic_2011} in 2011, at a point where
descriptive approaches were already dominating. However, despite the
dominance of descriptive methods, the existence of inferential
\emph{criteria} was already long noticeable. In fact, in a well-known
attempt to systematically compare the quality of a variety of
descriptive community detection methods, the authors of
Ref.~\cite{lancichinetti_benchmark_2008} proposed the now so-called LFR
benchmark, offered as a more realistic alternative to the simpler
Newman-Girvan benchmark~\cite{girvan_community_2002} introduced
earlier. Both are in fact generative models, essentially particular
cases of the DC-SBM, containing a ``ground truth'' community label
assignment, against which the results of various algorithms are supposed
to be compared. Clearly, this is an inferential evaluation criterion,
although, historically, virtually all of the methods compared against
that benchmark are descriptive in
nature~\cite{lancichinetti_community_2009} (these studies were conducted
mostly before inferential approaches had gained more traction). The use
of such a criterion already betrays that the answer to the litmus test
considered previously would be ``yes,'' and therefore descriptive
approaches are fundamentally unsuitable for the task. In contrast,
methods based on statistical inference are not only more principled, but
in fact provably optimal in the inferential scenario: an estimation
based on the posterior distribution obtained from the true generative
model is called ``Bayes optimal,'' since there is no procedure that can,
on average, produce results with higher accuracy. The use of this
inferential formalism has led to the development of asymptotically
optimal algorithms and the identification of sharp transitions in the
detectability of planted community
structure~\cite{decelle_asymptotic_2011,decelle_inference_2011}.
The conflation one often finds between descriptive and inferential goals
in the literature of community detection likely stems from the fact that
while it is easy to define benchmarks in the inferential setting, it is
substantially more difficult to do so in a descriptive setting. Given
any descriptive method (modularity
maximization~\cite{newman_modularity_2006},
Infomap~\cite{rosvall_maps_2008}, Markov
stability~\cite{lambiotte_random_2014}, etc.) it is usually problematic
to determine for which network these methods are optimal (or even if one
exists), and what would be a canonical output that would be
unambiguously correct. In fact, the difficulty of establishing these
fundamental references already serves as evidence that the task itself is
ill-defined. On the other hand, taking an inferential route forces one
to \emph{start with the right answer}, via a well-specified generative
model that articulates what \emph{the communities actually mean} with
respect to the network structure. Based on this precise definition, one
then \emph{derives} the optimal detection method by employing Bayes'
rule.
It is also useful to observe that inferential analyses of aspects of the
network other than directly its structure might still be only
descriptive of the structure itself. A good example of this is the
modelling of dynamics that take place on a network, such as a random
walk. This is precisely the case of the Infomap
method~\cite{rosvall_maps_2008}, which models a simulated
random walk on a network in an inferential manner, using a division of
the network into groups to compress its trajectory. While this approach can be
considered inferential with respect to an artificial dynamics, it is
still only descriptive when it comes to the actual network structure
(and will suffer from the same problems, such as finding communities in
maximally random networks). Communities found in this way could be
useful for particular tasks, such as to identify groups of nodes that
would be similarly affected by a diffusion process. This could be used,
for example, to prevent or facilitate the diffusion by removing or
adding edges between the identified groups. In this setting, the answer
to the litmus test above would also be ``no,'' since what is important
is how the network ``is'' (i.e. how a random walk behaves on it), not
how it came to be, or if its features are there by chance alone. Once
more, the important issue to remember is that the groups identified in
this manner cannot be interpreted as having any explanatory power about
the network structure itself, and cannot be used reliably to extract
inferential conclusions about it. We are firmly in a descriptive, not
inferential setting with respect to the network structure.
Another important difference between inferential and descriptive
approaches is worth mentioning. Descriptive approaches are often tied to
very particular contexts, and cannot be directly compared to one
another. This has caused great consternation in the literature, since
there is a vast number of such methods, and little robust methodology on
how to compare them. Indeed, why should we expect the modules found
by optimizing task scheduling to be comparable to those that
optimize the description of a dynamics? In contrast, inferential
approaches all share the same underlying context: they attempt to
explain the network structure; they vary only in how this is done. They
are, therefore, amenable to principled \emph{model selection}
procedures~\cite{gelman_bayesian_2013,bishop_pattern_2011,mackay_information_2003},
designed to evaluate which is the most appropriate fit for any
particular network, even if the models used operate with very different
parametrizations, as we discussed already in
Sec.~\ref{sec:inference}. In this situation, the multiplicity of
different models available becomes a boon rather than a hindrance, since
they all contribute to a bigger toolbox we have at our disposal when
trying to understand empirical observations.
Finally, inferential approaches offer additional advantages that make
them more suitable as part of a scientific pipeline. In particular, they
can be naturally extended to accommodate measurement
uncertainties~\cite{newman_network_2018-1,
martin_structural_2016,peixoto_reconstructing_2018}
--- an unavoidable property of empirical data, which descriptive methods
almost universally fail to consider. This information can be used not
only to propagate the uncertainties to the community
assignments~\cite{peixoto_revealing_2021} but also to reconstruct the
missing or noisy measurements of the network
itself~\cite{clauset_hierarchical_2008, guimera_missing_2009}. Furthermore, inferential approaches can be
coupled with even more indirect observations such as time-series on the
nodes~\cite{hoffmann_community_2020}, instead of a direct measurement of
the edges of the network, such that the network itself is reconstructed,
not only the community structure~\cite{peixoto_network_2019}. All these
extensions are possible because inferential approaches give us more than
just a division of the network into groups; they give us a model
estimate of the network, containing insights about its formation
mechanism.
\subsection{Behind every description there is an implicit generative model}\label{sec:implicit}
Descriptive methods of community detection --- such as graph
partitioning for VLSI~\cite{kernighan_graph_1969} or
Infomap~\cite{rosvall_maps_2008} --- are not designed to produce
inferential statements about the network structure. They do not need to
explicitly articulate a generative model, and the quality of their
results should be judged solely against their manifestly noninferential
goals, e.g. whether a chip design can be efficiently manufactured in the
case of graph partitioning.
Nevertheless, descriptive methods are often employed with inferential
aims in practice. This happens, for example, when modularity
maximization is used to discover homophilic patterns in a social
network, or when Infomap is used to uncover latent communities generated
by the LFR benchmark. In these situations, it is useful to consider to
what extent we can expect these methods to reveal meaningful
inferential results, despite that not being their intended use.
From a purely mathematical perspective, there is actually no formal
distinction between descriptive and inferential methods, because every
descriptive method can be mapped to an inferential one, according to
some implicit model. Therefore, whenever we are attempting to interpret
the results of a descriptive community detection method in an
inferential way --- i.e. make a statement about how the network came to
be --- we cannot in fact avoid making \emph{implicit} assumptions about
the data generating process that lies behind it. (At first this
statement seems to undermine the distinction we have been making between
descriptive and inferential methods, but in fact this is not the case,
as we will see below.)
It is not difficult to demonstrate that it is possible to formulate any
conceivable community detection method as a particular inferential
method. Let us consider an arbitrary quality function
\begin{equation}
W(\bm{A}, \bm{b}) \in \mathbb{R}
\end{equation}
which is used to perform community detection via the optimization
\begin{equation}\label{eq:opt}
\bm{b}^* = \underset{\bm{b}}{\operatorname{argmax}}\; W(\bm{A}, \bm{b}).
\end{equation}
We can then interpret the quality function $W(\bm{A}, \bm{b})$ as the
``Hamiltonian'' of a posterior distribution
\begin{equation}
P(\bm{b}|\bm{A}) = \frac{\mathrm{e}^{\beta W(\bm{A},\bm{b})}}{Z(\bm{A})},
\end{equation}
with normalization $Z(\bm{A})=\sum_{\bm{b}}\mathrm{e}^{\beta W(\bm{A},\bm{b})}$. By
making $\beta\to\infty$ we recover the optimization of Eq.~\ref{eq:opt},
or we may simply try to find the most likely partition according to the
posterior, in which case $\beta>0$ remains an arbitrary
parameter. Therefore, employing Bayes' rule in the opposite direction,
we obtain the following effective generative model:
\begin{align}
P(\bm{A}|\bm{b})
&= \frac{P(\bm{b}|\bm{A})P(\bm{A})}{P(\bm{b})},\\
&= \frac{\mathrm{e}^{\beta W(\bm{A},\bm{b})}}{Z(\bm{A})}\frac{P(\bm{A})}{P(\bm{b})},
\end{align}
where $P(\bm{A}) = \sum_{\bm{b}}P(\bm{A}|\bm{b})P(\bm{b})$ is the marginal distribution
over networks, and $P(\bm{b})$ is the prior distribution for the
partition. Due to the normalization of $P(\bm{A}|\bm{b})$ we have the following
constraint that needs to be fulfilled:
\begin{equation}\label{eq:wconstraint}
\sum_{\bm{A}}\frac{\mathrm{e}^{\beta W(\bm{A},\bm{b})}}{Z(\bm{A})}P(\bm{A}) = P(\bm{b}).
\end{equation}
Therefore, not all choices of $P(\bm{A})$ and $P(\bm{b})$ are compatible with
the posterior distribution and the exact possibilities will depend on
the actual shape of $W(\bm{A},\bm{b})$. However, one choice that is always
possible is a maximum-entropy one,
\begin{equation}
P(\bm{A}) = \frac{Z(\bm{A})}{\Xi},\qquad P(\bm{b}) = \frac{\Omega(\bm{b})}{\Xi},
\end{equation}
with $\Omega(\bm{b})=\sum_{\bm{A}}\mathrm{e}^{\beta W(\bm{A},\bm{b})}$ and
$\Xi=\sum_{\bm{A},\bm{b}}\mathrm{e}^{\beta W(\bm{A},\bm{b})}$. Taking this choice leads to
the effective generative model
\begin{equation}
P(\bm{A}|\bm{b}) = \frac{\mathrm{e}^{\beta W(\bm{A},\bm{b})}}{\Omega(\bm{b})}.
\end{equation}
Therefore, inferentially interpreting a community detection algorithm
with a quality function $W(\bm{A},\bm{b})$ is equivalent to assuming the
generative model $P(\bm{A}|\bm{b})$ and prior $P(\bm{b})$ above. Furthermore, this
also means that any arbitrary community detection algorithm implies a
description length given (in nats) by\footnote{The description length of Eq.~\ref{eq:dl_W}
is only valid if there are no further parameters in the quality function
$W(\bm{A},\bm{b})$ other than $\bm{b}$ that are being optimized.}
\begin{equation}\label{eq:dl_W}
\Sigma(\bm{A},\bm{b}) = -\beta W(\bm{A},\bm{b}) + \ln\sum_{\bm{A}',\bm{b}'}\mathrm{e}^{\beta W(\bm{A}',\bm{b}')}.
\end{equation}
What the above shows is that there is no such thing as a ``model-free''
community detection method, since they are all equivalent to the
inference of \emph{some} generative model. The only difference from a
direct inferential method is that in the latter case the modelling assumptions
are made explicitly, inviting rather than preventing scrutiny. Most
often, the effective model and prior that are equivalent to an \emph{ad
hoc} community detection method will be difficult to interpret, justify,
or even compute (in general, Eq.~\ref{eq:dl_W} cannot be written in
closed form).
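For networks small enough to enumerate exhaustively, however, Eq.~\ref{eq:dl_W} \emph{can} be evaluated by brute force. The sketch below is purely illustrative: it takes $W$ to be the modularity function with $B=2$ group labels and an arbitrary $\beta=1$, and computes the effective posterior and the implied description length (in nats) for a 5-node graph.

```python
import itertools, math

n = 5
pairs = list(itertools.combinations(range(n), 2))   # the 10 possible edges

def W(edge_set, b):
    # quality function: Newman's modularity (an illustrative choice);
    # defined as 0 for the empty graph, where modularity is undefined
    E = len(edge_set)
    if E == 0:
        return 0.0
    k = [0] * n
    for u, v in edge_set:
        k[u] += 1
        k[v] += 1
    within = sum(1 for u, v in edge_set if b[u] == b[v])
    K = {}
    for i in range(n):
        K[b[i]] = K.get(b[i], 0) + k[i]
    return within / E - sum((Kr / (2 * E)) ** 2 for Kr in K.values())

beta = 1.0
labelings = list(itertools.product((0, 1), repeat=n))

# Xi = sum over all graphs A' and labelings b' of exp(beta * W(A', b'))
Xi = 0.0
for mask in range(2 ** len(pairs)):
    es = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
    for b in labelings:
        Xi += math.exp(beta * W(es, b))

# observed network: a triangle and an edge, joined by a bridge
A_obs = [(0, 1), (0, 2), (1, 2), (3, 4), (2, 3)]

# effective posterior P(b|A) = exp(beta * W(A, b)) / Z(A)
post = {b: math.exp(beta * W(A_obs, b)) for b in labelings}
Z = sum(post.values())
post = {b: w / Z for b, w in post.items()}

def dl_nats(b):
    # implied description length: Sigma(A, b) = -beta W(A, b) + ln Xi
    return -beta * W(A_obs, b) + math.log(Xi)

b_map = max(post, key=post.get)
```

The MAP labeling separates the triangle from the dangling edge and has a strictly smaller description length than the trivial all-in-one-group labeling; for realistically sized networks the double sum defining $\Xi$ is of course intractable, which is precisely the point made above.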
Furthermore, there is no guarantee that the obtained description length
of Eq.~\ref{eq:dl_W} will yield a competitive or even meaningful
compression. In particular, there is no guarantee that this effective
inference will not overfit the data. Although we mentioned in the
previous section that inference and compression are equivalent, the
compression achieved when considering a particular generative model is
constrained by the assumptions encoded in its likelihood and prior. If
these are poorly chosen, no actual compression might be achieved, for
example when comparing to the one obtained with a maximally random
model. This is precisely what happens with descriptive community
detection methods: they overfit because their implicit modelling
assumptions do not accommodate the possibility that a network may be
maximally random, or contain a balanced mixture of structure and
randomness.
Since we can always interpret any community detection method as
inferential, is it still meaningful to categorize some methods as
descriptive? Arguably yes, because directly inferential approaches make
their generative models and priors explicit, while for a descriptive
method we need to extract them via reverse engineering. Explicit modelling
allows us to make judicious choices about the model and prior that
reflect the kinds of structures we want to detect, relevant scales or
lack thereof, and many other aspects that improve their performance in
practice, and our understanding of the results. With implicit
assumptions we are ``flying blind,'' relying substantially on
serendipity and trial-and-error --- not always with great success.
It is not uncommon to find criticisms of inferential methods due to a
perceived implausibility of the generative models used --- such as the
conditional independence of the placement of the edges present in the
SBM~\cite{schaub_many_2017} --- although these assumptions are also
present, but only \emph{implicitly}, in other methods, like modularity
maximization (see Sec.~\ref{sec:equivalence}). We discuss this issue
further in Sec.~\ref{sec:believe}.
The above inferential interpretation is not specific to community
detection, but is in fact valid for any learning problem. The set of
explicit or implicit assumptions that must come with any learning
algorithm is called an ``inductive bias.'' An algorithm is expected to
function optimally only if its inductive bias agrees with the actual
instances of the problems encountered. It is important to emphasize that
no algorithm can be free of an inductive bias; we can only choose
\emph{which} intrinsic assumptions we make about how likely we are to
encounter a particular kind of data, not \emph{whether} we are making an
assumption. Therefore, it is particularly problematic when a method does
not articulate explicitly what these assumptions are, since even if they
are hidden from view, they exist nonetheless, and still need to be
scrutinized and justified. This means we should be particularly
skeptical of the impossible claim that a learning method is
``model-free,'' since this denomination is more likely to signal an
inability or unwillingness to expose the underlying modelling
assumptions, which could potentially be revealed as unappealing and
fragile when eventually forced to come under scrutiny.
\subsection{Caveats and challenges with inferential methods}
Inferential community detection is a challenging task, and is not
without its caveats. One aspect they share with descriptive approaches
is algorithmic complexity (see Sec.~\ref{sec:performance}), and the fact
that they in general try to solve NP-hard problems. This means that
there is no known algorithm that is guaranteed to produce exact results
in a reasonable amount of time, except for very small networks. That
does not mean that every instance of the problem is hard to answer; in
fact, it can be shown that in key cases robust answers can be
obtained~\cite{decelle_inference_2011}, but in general all existing
methods are approximate, with the usual trade-off between accuracy and
speed. The quest for general approaches that behave well while being
efficient is still ongoing and is unlikely to be exhausted soon.
Furthermore, employing statistical inference is not a ``silver bullet''
that automatically solves every problem. If our models are
``misspecified,'' i.e. represent very poorly the structure present in
the data, then our inferences using them will be very limited and
potentially misleading (see Sec.~\ref{sec:believe}) --- the most we can
expect from our methodology in this case is to obtain good diagnostics
of when this is happening~\cite{peixoto_revealing_2021}. There is also a
typical trade-off between realism and simplicity, such that models that
more closely match reality are more difficult to express in simple and
tractable terms. Usually, the more complex a model is, the more
difficult its inference becomes. The technical task of using algorithms
such as Markov chain Monte Carlo (MCMC) to produce reliable inferences
for a complex model is nontrivial and requires substantial expertise,
and is likely to remain a long-lived field of research.
In general it can be said that, although statistical inference does not
provide automatic answers, it gives us an invaluable platform where the
questions can be formulated more clearly, and allows us to navigate the
space of answers using more robust methods and theory.
\section{Modularity maximization considered harmful}\label{sec:modularity}
The most widespread method for community detection is modularity
maximization~\cite{newman_modularity_2006}, which also happens to be one
of the most problematic. This method is based on the modularity function,
\begin{equation}\label{eq:Q}
Q(\bm{A},\bm{b}) = \frac{1}{2E}\sum_{ij}\left(A_{ij} - \frac{k_ik_j}{2E}\right)\delta_{b_i,b_j},
\end{equation}
where $A_{ij}\in\{0,1\}$ is an entry of the adjacency matrix,
$k_i=\sum_jA_{ij}$ is the degree of node $i$, $b_i$ is the group
membership of node $i$, and $E$ is the total number of edges. The method
consists in finding the partition $\hat\bm{b}$ that maximizes $Q(\bm{A},\bm{b})$,
\begin{equation}\label{eq:qmax}
\hat\bm{b} = \underset{\bm{b}}{\operatorname{argmax}}\; Q(\bm{A},\bm{b}).
\end{equation}
The motivation behind the modularity function is that it compares the
existence of an edge $(i,j)$ to the probability of it existing according
to a null model, $P_{ij} = k_ik_j/2E$, namely that of the configuration
model~\cite{fosdick_configuring_2018} (or more precisely, the Chung-Lu
model~\cite{chung_connected_2002}). The intuition for this method is
that we should consider a partition of the network meaningful if the
occurrence of edges between nodes of the same group exceeds what we
would expect with a random null model without communities.
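As a direct translation of Eq.~\ref{eq:Q}, the following sketch (illustrative, pure Python) computes $Q$ for a small graph made of two triangles joined by a single edge: the natural split into the two triangles yields $Q=5/14$, while placing every node in one group yields exactly zero.

```python
# adjacency matrix of two triangles joined by the edge (2, 3)
n = 6
A = [[0] * n for _ in range(n)]
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u][v] = A[v][u] = 1

def Q(A, b):
    # literal translation of the modularity function:
    # Q = (1/2E) * sum_ij (A_ij - k_i k_j / 2E) * delta(b_i, b_j)
    n = len(A)
    k = [sum(row) for row in A]          # degrees
    two_E = sum(k)
    return sum(A[i][j] - k[i] * k[j] / two_E
               for i in range(n) for j in range(n)
               if b[i] == b[j]) / two_E

Q_split = Q(A, (0, 0, 0, 1, 1, 1))       # the two triangles: Q = 5/14
Q_trivial = Q(A, (0,) * 6)               # everything in one group: Q = 0
```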
Despite its widespread adoption, this approach suffers from a variety of
serious conceptual and practical flaws, which have been documented
extensively~\cite{guimera_modularity_2004,fortunato_resolution_2007,
good_performance_2010,fortunato_community_2010,fortunato_community_2016}. The
most problematic one is that it \emph{purports} to use an inferential
criterion --- a deviation from a null generative model --- but is in
fact merely descriptive. As has been recognized very early, this method
categorically fails in its own stated goal, since it always finds
high-scoring partitions in networks sampled from its own null
model~\cite{guimera_modularity_2004}. Indeed, the generative model we
used in Fig.~\ref{fig:descriptive}(a) is exactly the null model considered
in the modularity function, which when maximized yields the partition seen
in Fig.~\ref{fig:descriptive}(b). As we already discussed, this result
bears no relevance to the underlying generative process, and overfits
the data.
The reason for this failure is that the method does not take into
account the deviation from the null model in a statistically consistent
manner. The modularity function is just a re-scaled version of the
assortativity coefficient~\cite{newman_mixing_2003}, a correlation
measure of the community assignments seen at the endpoints of edges in
the network. We should expect such a correlation value to be close to
zero for a partition that is determined \emph{before} the edges of the
network are placed according to the null model, or equivalently, for a
partition chosen at random. However, it is quite a different matter to
find a partition that \emph{optimizes} the value of $Q(\bm{A},\bm{b})$, after
the network is observed. The deviation from a null model computed in
Eq.~\ref{eq:Q} completely ignores the optimization step of
Eq.~\ref{eq:qmax}, although it is a crucial part of the algorithm. As a
result, the method of modularity maximization tends to massively
overfit, and find spurious communities even in networks sampled from its
null model. If we search for patterns of correlations in a random
graph, most of the time we will find them. This is a pitfall known
as ``data dredging'' or ``$p$-hacking,'' where one searches exhaustively
for different patterns in the same data and reports only those that are
deemed significant, according to a criterion that does not take into
account the fact that we are doing this search in the first place.
\begin{figure}[b]
\resizebox{\textwidth}{!}{
\begin{tabular}{ccc}
& \smaller\hspace{2em} Random partition & \smaller\hspace{2em} Maximum modularity\\
\includegraphicslp{(a)}{.98}{width=.42\textwidth}{figs/Q-random.pdf}&
\includegraphicslp{(b)}{.98}{width=.29\textwidth,trim=0 -.65cm 0 0}{figs/random-adj.pdf}&
\includegraphicslp{(c)}{.98}{width=.29\textwidth,trim=0 -.65cm 0 0}{figs/random-adj-ordered.pdf}
\end{tabular}} \caption{Modularity maximization systematically
overfits, and finds spurious structures even in its own null model. In
this example we consider a random network model with $N=10^3$ nodes,
with every node having degree $5$. (a) Distribution of modularity
values for a partition into 15 groups chosen at random, and for the
optimized value of modularity, for $5000$ networks sampled from the
same model. (b) Adjacency matrix of a sample from the model, with the
nodes ordered according to a random partition. (c) Same as (b), but
with the nodes ordered according to the partition that maximizes
modularity.\label{fig:randomQ}}
\end{figure}
We demonstrate this problem in Fig.~\ref{fig:randomQ}, where we show the
distribution of modularity values obtained with a uniform configuration
model with $k_i=5$ for every node $i$, considering both a random
partition and the one that maximizes $Q(\bm{A},\bm{b})$. While for a random
partition we find what we would expect, i.e. a value of $Q(\bm{A},\bm{b})$
close to zero, for the optimized partition the value is substantially
larger. Inspecting the optimized partition in Fig.~\ref{fig:randomQ}(c),
we see that it indeed corresponds to 15 seemingly clear assortative
communities --- which by construction bear no relevance to how the
network was generated. They have been dredged out of randomness by the
optimization procedure.
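This experiment is simple to reproduce. The sketch below (an illustrative reconstruction using the \texttt{networkx} library, not the code behind Fig.~\ref{fig:randomQ}; the greedy agglomerative heuristic only lower-bounds the true maximum of $Q$) contrasts a partition fixed \emph{before} seeing the edges with one optimized on them:

```python
# Sketch of the experiment of Fig. (randomQ): modularity maximization finds
# high-scoring partitions in a maximally random graph. Illustrative only.
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.random_regular_graph(5, 1000, seed=42)  # null model: every node has degree 5

# Partition fixed *before* looking at the edges: Q should be close to zero.
nodes = list(G)
random.Random(0).shuffle(nodes)
q_random = modularity(G, [set(nodes[i::15]) for i in range(15)])

# Partition *optimized* on the observed edges: Q comes out substantially
# larger, despite the absence of any planted structure.
q_opt = modularity(G, greedy_modularity_communities(G))

print(f"random partition:    Q = {q_random:+.3f}")
print(f"optimized partition: Q = {q_opt:+.3f}")
```

The random partition yields $Q\approx 0$, while the optimized one yields a value well above zero, mirroring the two distributions in Fig.~\ref{fig:randomQ}(a).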
\begin{figure}[h!]
\begin{tabular}{cc}
Modularity maximization & SBM inference\\
\includegraphicsl{(a)}{width=.5\textwidth}{figs/modularity-resolution.pdf}&
\includegraphicsl{(b)}{width=.5\textwidth}{figs/pp-resolution.pdf}\\
\includegraphicsl{(c)}{width=.5\textwidth}{figs/modularity-resolution-mixed.pdf}&
\includegraphicsl{(d)}{width=.5\textwidth}{figs/pp-resolution-mixed.pdf}
\end{tabular} \caption{The resolution limit of modularity maximization
prevents small communities from being identified, even if there is
sufficient statistical evidence to support them. Panel (a) shows a
network with $B=30$ communities sampled from an assortative SBM
parametrization. The colors indicate the $18$ communities found with
modularity maximization, where several pairs of true communities are
merged together. Panel (b) shows the inference result of an
assortative SBM~\cite{zhang_statistical_2020}, recovering the true
communities with perfect accuracy. Panels (c) and (d) show the
results for a similar model where a larger community has been
introduced. In (c) we see the results of modularity maximization,
which not only merges the smaller communities together, but also
splits the larger community into several spurious ones --- thus both
underfitting and overfitting different parts of the network at the
same time. In (d) we see the result obtained by inferring the SBM,
which once again finds the correct answer.\label{fig:resolution}}
\end{figure}
Somewhat paradoxically, another problem with modularity maximization is
that in addition to systematically overfitting, it also systematically
\emph{underfits}. This occurs via the so-called \emph{resolution limit}:
in a connected network\footnote{Modularity maximization, like many
descriptive community detection methods, will always place connected
components in different communities. This is another clear distinction
from inferential approaches, since maximally random models --- without
latent community structure --- can generate disconnected networks if
they are sufficiently sparse. From an inferential point of view, it is
therefore incorrect to assume that every connected component must belong
to a different community.} the method cannot find more than $\sqrt{2E}$
communities~\cite{fortunato_resolution_2007}, even if they seem
intuitive or can be found by other methods. An example of this is shown
in Fig.~\ref{fig:resolution}, where for a network generated with the SBM
containing 30 communities, modularity maximization finds only 18, while
an inferential approach has no problems finding the true
structure. There are attempts to counteract the resolution limit by
introducing a ``resolution parameter'' to the modularity function, but
as we discuss in Sec.~\ref{sec:resolution} they are in general
ineffective.
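The resolution limit can also be verified directly on the classic construction of Ref.~\cite{fortunato_resolution_2007}: a ring of cliques. In the sketch below (illustrative parameters, using \texttt{networkx}), a ring of 30 five-cliques has $\sqrt{2E}=\sqrt{660}\approx 25.7<30$, and indeed merging adjacent cliques \emph{increases} modularity, so no maximization algorithm can resolve the individual cliques:

```python
# Direct check of the resolution limit on a ring of 30 five-cliques.
# Merging adjacent cliques increases modularity, hence the maximum-modularity
# partition cannot contain one group per clique.
import networkx as nx
from networkx.algorithms.community import modularity

n_cliques, m = 30, 5
G = nx.ring_of_cliques(n_cliques, m)   # clique r occupies nodes r*m, ..., r*m + m - 1

per_clique = [set(range(r * m, (r + 1) * m)) for r in range(n_cliques)]
paired = [per_clique[2 * r] | per_clique[2 * r + 1] for r in range(n_cliques // 2)]

q_single = modularity(G, per_clique)   # one group per clique
q_paired = modularity(G, paired)       # adjacent cliques merged in pairs
print(f"Q(cliques) = {q_single:.4f}, Q(merged pairs) = {q_paired:.4f}")
```

Here $Q$ for the merged pairs exceeds $Q$ for the partition into the obvious cliques, even though each clique is as clear-cut a community as one could hope for.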
These two problems --- overfitting and underfitting --- can occur in
tandem, such that portions of the network dominated by randomness are
spuriously revealed to contain communities, whereas other portions with
clear modular structure can have it obscured. The result is a very
unreliable method to capture the structure of heterogeneous networks. We
demonstrate this in Fig.~\ref{fig:resolution}(c) and~(d).
In addition to these major problems, modularity maximization also often
possesses a degenerate landscape of solutions, with very different
partitions having similar values of
$Q(\bm{A},\bm{b})$~\cite{good_performance_2010}. In these situations the
partition with maximum value of modularity can be a poor representative
of the entire set of high-scoring solutions and depend on idiosyncratic
details of the data rather than general patterns --- which can be
interpreted as a different kind of overfitting.\footnote{This kind of
degeneracy in the solution landscape can also occur in an inferential
setting~\cite{riolo_consistency_2020,peixoto_revealing_2021}. However, there it
can be interpreted as the existence of competing hypotheses for the same data,
whose relative plausibility can be quantitatively assessed via their
posterior probability. In case the multiplicity of alternative
hypotheses is too large, this would be indicative of poor fit, or a
misspecification of the model, i.e. a general inadequacy of the model
structure to capture the structure in the data for any possible choice
of parameters.}
The combined effects of underfitting and overfitting can make the
results obtained with the method unreliable and difficult to
interpret. As a demonstration of the systematic nature of the problem,
in Fig.~\ref{fig:Qrand}(a) we show the number of communities obtained
using modularity maximization for 263 empirical networks of various
sizes and belonging to different domains~\cite{zhang_preparation},
obtained from the Netzschleuder
catalogue~\cite{peixoto_netzschleuder_2020}. Since the networks
considered are all connected, the values are always below $\sqrt{2E}$,
due to the resolution limit; but otherwise they are well distributed
over the allowed range. However, in Fig.~\ref{fig:Qrand}(b) we show the
same analysis, but for a version of each network that is fully
randomized, while preserving the degree sequence. In this case, the
number of groups remains distributed in the same range (sometimes even
exceeding the resolution limit, because the randomized versions can end
up disconnected). As Fig.~\ref{fig:Qrand}(c) shows, the number of groups
found for the randomized networks is strongly correlated with the
original ones, despite the fact that the former have no latent community
structure. This is a strong indication of the substantial amount of
noise that is incorporated into the partitions found with the method.
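A minimal version of this randomization test can be sketched as follows, using the karate club network as a stand-in for the empirical corpus (which we do not reproduce here):

```python
# Sketch of the randomization test of Fig. (Qrand): maximize modularity on a
# network and on a degree-preserving randomization of it. Illustrative only.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()
n_orig = len(greedy_modularity_communities(G))

# Configuration-model randomization: same degrees, no latent structure.
deg = [d for _, d in G.degree()]
R = nx.Graph(nx.configuration_model(deg, seed=1))  # collapse parallel edges
R.remove_edges_from(nx.selfloop_edges(R))
n_rand = len(greedy_modularity_communities(R))

print(f"groups found: original = {n_orig}, randomized = {n_rand}")
```

The randomized version still yields several "communities" (and may even be disconnected, as noted above), despite containing no structure beyond its degree sequence.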
\begin{figure}
\begin{tabular}{ccc}
\includegraphicslp{(a)}{.99}{width=.33\textwidth}{figs/Q_dist_randFalse.pdf}&
\includegraphicslp{(b)}{.99}{width=.33\textwidth}{figs/Q_dist_randTrue.pdf}&
\includegraphicslp{(c)}{.99}{width=.33\textwidth}{figs/Q_dist_corr.pdf}\\
\smaller Original networks &
\smaller Randomized networks &
\end{tabular} \caption{Modularity maximization incorporates a
substantial amount of noise into its results. (a) Number of groups found using
modularity maximization for 263 empirical networks as a function of
the number of edges. The dashed line corresponds to the $\sqrt{2E}$
upper bound due to the resolution limit.
(b) The same as in (a) but with randomized versions of each
network. (c) Correspondence between the number of groups of the
original and randomized network. The dashed line shows the diagonal. \label{fig:Qrand}}
\end{figure}
The systematic overfitting of modularity maximization --- as well as
other descriptive methods such as Infomap --- has also been
demonstrated recently in Ref.~\cite{ghasemian_evaluating_2019}, from the
point of view of edge prediction, on a separate empirical dataset of 572
networks from various domains.
Although many of the problems with modularity maximization were long
known, for some time there were no principled solutions to them, but
this is no longer the case. In the table below we summarize some of the
main problems with modularity and how they are solved with inferential
approaches.
\begin{longtable}{p{.45\textwidth}@{\hskip 1em}p{.45\textwidth}}
\textbf{Problem} & \textbf{Principled solution via inference}\\ \hline Modularity maximization
overfits, and finds modules in maximally random
networks.~\cite{guimera_modularity_2004} & Bayesian inference of the
SBM is designed from the ground up to avoid this problem in a
principled way and systematically succeeds~\cite{peixoto_bayesian_2019}.\\[1em]
Modularity maximization has a resolution limit, and finds at most
$\sqrt{2E}$ groups in connected
networks~\cite{fortunato_resolution_2007}. & Inferential approaches
with hierarchical
priors~\cite{peixoto_hierarchical_2014,peixoto_nonparametric_2017} or
strictly assortative structures~\cite{zhang_statistical_2020} do not
have any appreciable resolution limit, and can find a maximum number
of groups that scales as $O(N/\log N)$. Importantly, this is achieved
without sacrificing the robustness against overfitting.\\[1em]
Modularity maximization has a characteristic scale, and tends to find
communities of similar size; in particular with the same sum of
degrees (see Sec.~\ref{sec:resolution}). & Hierarchical priors can be
specifically chosen to be \emph{a priori} agnostic about
characteristic sizes, densities of groups and degree
sequences~\cite{peixoto_nonparametric_2017}, such that these are not
imposed, but instead obtained from inference, in an unbiased way.\\[1em]
Modularity maximization can only find strictly assortative
communities. & Inferential approaches can be based on any generative
model. The general SBM will find any kind of mixing pattern in an
unbiased way, and has no problems identifying modular structure in
bipartite networks, core-periphery networks, and any mixture of these
or other patterns. There are also specialized versions for
bipartite~\cite{larremore_efficiently_2014},
core-periphery~\cite{zhang_identification_2015}, and assortative
patterns~\cite{zhang_statistical_2020}, if these are being searched
exclusively. \\[1em]
The solution landscape of modularity maximization is often degenerate,
with many different solutions with close to the same modularity
value~\cite{good_performance_2010}, and with no clear way of how to
select between them. & Inferential methods are characterized by a
posterior distribution of partitions. The consensus or dissensus
between the different solutions~\cite{peixoto_revealing_2021} can be
used to determine how many cohesive hypotheses can be extracted from
inference, and to what extent the model being used is a poor or a good
fit for the network.
\\\hline
\end{longtable}
Because of the above problems, the use of modularity maximization should
be discouraged, since it is demonstrably not fit for purpose as an
inferential method. As a consequence, the use of modularity maximization
in any recent network analysis that relies on inferential conclusions
can be arguably considered a ``red flag'' that strongly indicates
methodological inappropriateness. In the absence of secondary evidence
supporting the alleged community structures found, or extreme care to
counteract the several limitations of the method (see
Secs.~\ref{sec:consensus}, \ref{sec:significance}
and~\ref{sec:resolution} for how typical attempts usually fail), the
safest assumption is that the results obtained with that method tend to
contain a substantial amount of noise, rendering any inferential
conclusion derived from them highly suspicious.
As a final note, we focus on modularity here not only for its widespread
adoption but also because of its emblematic character. At a fundamental
level, all of its shortcomings are shared with any descriptive method in
the literature --- to varied but always non-negligible degrees.
\section{Myths, pitfalls, and half-truths}
In this section we focus on assumed or asserted statements about how to
circumvent pitfalls in community detection, which are in fact better
characterized as myths or half-truths, since they are either misleading,
or obstruct a more careful assessment of the true underlying nature of
the problem.
\subsection{``Modularity maximization and SBM inference are equivalent methods.''}\label{sec:equivalence}
As we have discussed in Sec.~\ref{sec:implicit}, it is possible to
interpret \emph{any} community detection algorithm as the inference of
\emph{some} generative model. Because of this, the mere fact that an
equivalence with an inferential approach exists cannot be used to
justify the inferential use of a descriptive method, or to use it as a
criterion to distinguish between approaches that are statistically
principled or not. To this aim, we need to ask instead whether the
modelling assumptions that are \emph{implicit} in the descriptive
approach can be meaningfully justified, and whether they can be used to
consistently infer structures from networks.
Some recent works have detailed some specific equivalences of modularity
maximization with statistical
inference~\cite{zhang_scalable_2014,newman_equivalence_2016}. As we will
discuss below, these equivalences are far more limited than commonly
interpreted. They serve mostly to understand in more detail the reasons
why modularity maximization fails as a reliable method, but do not
prevent it from failing --- they expose more clearly its sins, but offer
no redemption.
We start with a very interesting connection revealed by Zhang and
Moore~\cite{zhang_scalable_2014} between the effective posterior
distribution we obtain when using the modularity function as a
Hamiltonian,
\begin{equation}\label{eq:Qgibbs}
P(\bm{b}|\bm{A}) = \frac{\mathrm{e}^{\beta E Q(\bm{A},\bm{b})}}{Z(\bm{A})},
\end{equation}
and the posterior distribution of the strictly assortative DC-SBM, which
we refer here as the degree-corrected planted partition model (DC-PP),
\begin{equation}\label{eq:dcpp_post}
P(\bm{b}|\bm{A},\omega_{\text{in}},\omega_{\text{out}},\bm{\theta}) =
\frac{P(\bm{A}|\omega_{\text{in}},\omega_{\text{out}},\bm{\theta},\bm{b})P(\bm{b})}
{P(\bm{A}|\omega_{\text{in}},\omega_{\text{out}},\bm{\theta})},
\end{equation}
which has a likelihood given by
\begin{equation}\label{eq:dcpp}
P(\bm{A}|\omega_{\text{in}},\omega_{\text{out}},\bm{\theta},\bm{b})
= \prod_{i<j}\frac{\mathrm{e}^{-\omega_{b_i,b_j}\theta_i\theta_j}\left(\omega_{b_i,b_j}\theta_i\theta_j\right)^{A_{ij}}}{A_{ij}!},
\end{equation}
where
\begin{equation}
\omega_{rs} = \omega_{\text{in}}\delta_{rs} + \omega_{\text{out}}(1-\delta_{rs}).
\end{equation}
This model assumes that there are constant rates $\omega_{\text{in}}$
and $\omega_{\text{out}}$ controlling the number of edges that connect
to nodes of the same and different communities, respectively. In
addition, each node has its own propensity $\theta_i$, which determines
the relative probability it has of receiving an edge, such that nodes
inside the same community are allowed to have very different
degrees. This is a far more restrictive version of the full DC-SBM we
considered before, since it not only assumes assortativity as the only
mixing pattern, but also that all communities share the same rate
$\omega_{\text{in}}$, which imposes a rather unrealistic similarity
between the different groups.
Before continuing, it is important to emphasize that the posterior of
Eq.~\ref{eq:dcpp_post} corresponds to the situation where the number of
communities and all parameters of the model, except the partition
itself, are known \emph{a priori}. This does not correspond to any
typical empirical setting where community detection is employed, since
we do not often have such detailed information about the community
structure, and in fact no good reason to even use this particular
parametrization to begin with. The equivalences that we are about to
consider apply only in very idealized scenarios, and are not expected to
hold in practice.
Taking the logarithm of both sides of Eq.~\ref{eq:dcpp}, and ignoring
constant terms with respect to the model parameters we have
\begin{multline}\label{eq:pp_L}
\ln P(\bm{A}|\omega_{\text{in}},\omega_{\text{out}},\bm{\theta},\bm{b})
= \left(\ln \frac{\omega_{\text{in}}}{\omega_{\text{out}}}\right)
\left[\sum_{i<j}\left(A_{ij}-\frac{\omega_{\text{in}}-\omega_{\text{out}}}
{\ln (\omega_{\text{in}}/\omega_{\text{out}})}\theta_i\theta_j\right)\delta_{b_i,b_j}\right] +\\
\sum_{i<j}\left[A_{ij}\ln(\theta_i\theta_j\omega_{\text{out}})-\theta_i\theta_j\omega_{\text{out}}\right].
\end{multline}
Therefore, ignoring additive terms that do not depend on $\bm{b}$ (since
they become irrelevant after normalization in Eq.~\ref{eq:Qgibbs}) and
making the arbitrary choices (we will inspect these in detail soon),
\begin{equation}
\beta = \ln \frac{\omega_{\text{in}}}{\omega_{\text{out}}},\qquad \frac
{\ln (\omega_{\text{in}}/\omega_{\text{out}})}{\omega_{\text{in}}-\omega_{\text{out}}} = 2E, \qquad \theta_i=k_i,
\end{equation}
we obtain the equivalence,
\begin{equation}
\ln P(\bm{A}|\omega_{\text{in}},\omega_{\text{out}},\bm{\theta},\bm{b}) = \beta E Q(\bm{A},\bm{b}),
\end{equation}
allowing us to equate Eqs.~\ref{eq:Qgibbs} and~\ref{eq:dcpp_post} (there
is a methodological problem with the choice $\theta_i=k_i$, as we will
see later, but we will ignore this for the time being). Therefore, for
particular choices of the model parameters, one recovers modularity
optimization from the maximum likelihood estimation of the DC-PP model
with respect to $\bm{b}$. Indeed, this allows us to understand more clearly
what \emph{implicit} assumptions go behind using modularity for
inferential aims. For example, besides making very specific prior
assumptions about the model parameters $\omega_{\text{in}}$,
$\omega_{\text{out}}$ and $\bm{\theta}$, this posterior also assumes
that all partitions are equally likely \emph{a priori},
\begin{equation}
P(\bm{b}) \propto 1.
\end{equation}
We can in fact write this uniform prior more precisely as
\begin{equation}
P(\bm{b}) = \left[\sum_{B=1}^{N}\genfrac\{\}{0pt}{}{N}{B}B!\right]^{-1},
\end{equation}
where $\genfrac\{\}{0pt}{}{N}{B}B!$ is the number of labelled partitions
of a set of $N$ items into $B$ groups. This number reaches a maximum at $B\approx
.72\times N$, and decays fast from there, meaning that such a uniform
prior is in fact very concentrated on a very large number of groups ---
partially explaining the tendency of the modularity posterior to
overfit. Let us examine now the prior assumption
\begin{equation}\label{eq:PPQ}
\frac{\ln (\omega_{\text{in}}/\omega_{\text{out}})}{\omega_{\text{in}}-\omega_{\text{out}}} = 2E.
\end{equation}
For any value of $E$ the above condition admits many solutions. However,
not all of them are consistent with the \emph{expected} number of edges
in the network according to the DC-PP model. Assuming, for simplicity,
that all $B$ groups have the same size $N/B$, and that all nodes have
the same degree $2E/N$, then the expected number of edges according to
the assumed DC-PP model is given by
\begin{equation}
2\avg{E} = (2E)^2\left(\frac{\omega_{\text{in}}}{B} + \frac{\omega_{\text{out}}(B-1)}{B}\right).
\end{equation}
Equating the expected with the observed value, $\avg{E}=E$, leads to
\begin{equation}\label{eq:PPE}
\omega_{\text{in}} + \omega_{\text{out}}(B-1) = \frac{B}{2E}.
\end{equation}
Combining Eqs.~\ref{eq:PPQ} and~\ref{eq:PPE} gives us at most two
values of $\omega_{\text{in}}$ and $\omega_{\text{out}}$ that are
compatible with the expected density of the network and the modularity
interpretation of the likelihood, as seen in
Fig.~\ref{fig:Qconstraint}(a), and therefore only two possible values
for the expected modularity, computed as
\begin{equation}
\avg{Q} = \frac{1}{B}\left(2E\omega_{\text{in}} - 1\right).
\end{equation}
One possible solution is always
$\omega_{\text{in}}=\omega_{\text{out}}=1/2E$, which leads to
$\avg{Q}=0$. The other solution is only possible for $B>2$, and yields a
specific expected value of modularity which approaches $\avg{Q}\to 1$ as
$B$ increases (see Fig.~\ref{fig:Qconstraint}(b)). This yields an
implausibly narrow range for the consistency of modularity maximization
with the inference of the DC-PP model. The bias towards larger values of
$Q(\bm{A},\bm{b})$ as the number of groups increases is not an inherent
property of the DC-PP model, as it accommodates any expected value of
modularity by properly choosing its parameters. Instead, this is an
arbitrary implicit assumption baked in $Q(\bm{A},\bm{b})$, which further
explains why maximizing it will tend to find many groups even on random
networks.
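The two compatible solutions can also be found numerically. Writing $u=2E\omega$, Eqs.~\ref{eq:PPQ} and~\ref{eq:PPE} reduce to $\ln(u_{\text{in}}/u_{\text{out}})=u_{\text{in}}-u_{\text{out}}$ and $u_{\text{in}}+(B-1)u_{\text{out}}=B$, independently of $E$, with $\avg{Q}=(u_{\text{in}}-1)/B$. A bisection sketch:

```python
# Numerical solution of the consistency conditions Eqs. (PPQ) and (PPE), in
# the dimensionless variables u = 2E*omega. Besides the trivial root
# u_in = u_out = 1 (giving <Q> = 0), a second root exists only for B > 2.
from math import log

def expected_Q(B):
    """<Q> = (u_in - 1)/B at the nontrivial root, or None if only the
    trivial solution exists (the case B = 2)."""
    def g(u_out):
        u_in = B - (B - 1) * u_out          # enforce u_in + (B-1)u_out = B
        return log(u_in / u_out) - (u_in - u_out)
    lo, hi = 1e-300, 0.999                  # g > 0 near 0; g < 0 near 1 iff B > 2
    if g(hi) >= 0:
        return None
    for _ in range(300):                    # plain bisection
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    u_in = B - (B - 1) * (lo + hi) / 2
    return (u_in - 1) / B

for B in (2, 3, 10, 50):
    print(B, expected_Q(B))   # <Q> grows towards 1 as B increases
```

This reproduces the behavior of Fig.~\ref{fig:Qconstraint}(b): no positive-$\avg{Q}$ solution for $B=2$, and $\avg{Q}\to 1$ as $B$ grows.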
\begin{figure}
\begin{tabular}{cc}
\includegraphicsl{(a)}{width=.5\textwidth}{figs/pp_params.pdf}&
\includegraphicsl{(b)}{width=.5\textwidth}{figs/pp_params_Q.pdf}
\end{tabular} \caption{Using modularity maximization is equivalent to
performing a maximum likelihood estimate of the DC-PP model with very
specific parameter choices that depend on the number of edges $E$ in
the network and the number of communities $B$. In (a) we show the
valid choices of $\omega_{\text{in}}$ and $\omega_{\text{out}}$
obtained when the solid and dashed lines cross, corresponding
respectively to Eqs.~\ref{eq:PPQ} and~\ref{eq:PPE}, where we can see
that for $B=2$ no solution is possible where the expected modularity
is positive. In (b) we show the two possible values for the expected
modularity that are consistent with the implicit model assumptions, as
a function of the number of groups.\label{fig:Qconstraint}}
\end{figure}
In a later work~\cite{newman_equivalence_2016}, Newman relaxed the above
connection with modularity by using instead its generalized
version~\cite{reichardt_statistical_2006,arenas_analysis_2008},
\begin{equation}\label{eq:Qgamma}
Q(\bm{A},\bm{b},\gamma) = \frac{1}{2E}\sum_{ij}\left(A_{ij} - \gamma\frac{k_ik_j}{2E}\right)\delta_{b_i,b_j},
\end{equation}
where $\gamma$ is the so-called ``resolution'' parameter. With this
additional parameter, we have more freedom about the implicit
assumptions of the DC-PP model. Newman in fact showed that if we make
the choices,
\begin{equation}\label{eq:choices}
\beta = \ln \frac{\omega_{\text{in}}}{\omega_{\text{out}}},\qquad
\gamma=\frac{\omega_{\text{in}}-\omega_{\text{out}}}
{\ln (\omega_{\text{in}}/\omega_{\text{out}})}, \qquad \theta_i =\frac{k_i}{\sqrt{2E}},
\end{equation}
then we recover the Gibbs distribution with the generalized modularity
from the DC-PP likelihood of Eq.~\ref{eq:dcpp}. Due to the independent
parameter $\gamma$, now the assumed values of $\omega_{\text{in}}$ and
$\omega_{\text{out}}$ are no longer constrained by $E$ alone, and can
take any value. Therefore, if we knew the correct value of the model
parameters, we could use them to choose the appropriate value of
$\gamma$ and hence maximize $Q(\bm{A}, \bm{b}, \gamma)$, yielding the same
answer as maximizing $\ln
P(\bm{A}|\omega_{\text{in}},\omega_{\text{out}},\bm{\theta},\bm{b})$ with the
same parameters.
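This identity is easy to verify numerically: for any graph and any pair of partitions, the likelihood difference equals $\beta E$ times the generalized-modularity difference, once the parameters are tied together as in Eq.~\ref{eq:choices}. A sketch with an arbitrary toy graph and arbitrary rates:

```python
# Check of the identity behind Eq. (choices): with beta = ln(w_in/w_out),
# gamma = (w_in - w_out)/ln(w_in/w_out) and theta_i = k_i/sqrt(2E),
# log-likelihood differences equal beta*E times differences of Q(A,b,gamma).
# Toy graph and parameter values are arbitrary.
from math import log, sqrt, lgamma

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
N, E = 6, len(edges)
A = [[0] * N for _ in range(N)]
deg = [0] * N
for i, j in edges:
    A[i][j] = A[j][i] = 1
    deg[i] += 1
    deg[j] += 1

w_in, w_out = 1.7, 0.4
beta = log(w_in / w_out)
gamma = (w_in - w_out) / beta
theta = [k / sqrt(2 * E) for k in deg]

def loglik(b):   # Poisson DC-PP likelihood, Eq. (dcpp)
    ll = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            lam = (w_in if b[i] == b[j] else w_out) * theta[i] * theta[j]
            ll += A[i][j] * log(lam) - lam - lgamma(A[i][j] + 1)
    return ll

def Q(b):        # generalized modularity, Eq. (Qgamma)
    return sum(A[i][j] - gamma * deg[i] * deg[j] / (2 * E)
               for i in range(N) for j in range(N) if b[i] == b[j]) / (2 * E)

b1, b2 = [0, 0, 0, 1, 1, 1], [0, 1, 0, 1, 0, 1]
print(abs((loglik(b1) - loglik(b2)) - beta * E * (Q(b1) - Q(b2))))  # ~ 0
```

The agreement holds only for \emph{differences} between partitions; additive terms independent of $\bm{b}$ (including the diagonal contribution to $Q$) cancel out.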
There are, however, serious problems remaining that prevent this
equivalence from being true in general, or in fact even typically. For
the equivalence to hold, we need the number of groups $B$ and all
parameters to be known \emph{a priori} and to be equal to
Eq.~\ref{eq:choices}. However, the choice $\theta_i = k_i/\sqrt{2E}$
involves information about the observed network, namely the actual
degrees seen --- and therefore is not just a prior assumption, but one
made \emph{a posteriori}, and hence needs to be justified via a
consistent estimation that respects the likelihood principle. When we
perform a maximum likelihood estimate of the parameters
$\omega_{\text{in}}$, $\omega_{\text{out}}$, and $\bm\theta$, we obtain
the following system of nonlinear
equations~\cite{zhang_statistical_2020},
\begin{align}
\omega_{\text{in}}^* &= \frac{2e_{\text{in}}}{\sum_r\hat\theta_r^2}\label{eq:mle_lin}\\
\omega_{\text{out}}^* &= \frac{e_{\text{out}}}{\sum_{r<s}\hat\theta_r\hat\theta_s}\label{eq:mle_lout}\\
\theta_i^* &= k_i \left[\frac{2e_{\text{in}}\hat\theta_{b_i}}{\sum_r\hat\theta_r^2}+\frac{e_{\text{out}}\sum_{r\neq b_i}\hat\theta_r}{\sum_{r<s}\hat\theta_r\hat\theta_s}\right]^{-1},\label{eq:mle_theta}
\end{align}
where $e_{\text{in}}=\sum_{i<j}A_{ij}\delta_{b_i,b_j}$,
$e_{\text{out}}=\sum_{i<j}A_{ij}(1-\delta_{b_i,b_j})$, and
$\hat\theta_r=\sum_i\theta_i^*\delta_{b_i,r}$. The above system admits
$\theta^*_i = k_i/\sqrt{2E}$ as a solution only if the following
condition is met for every group $r$:
\begin{equation}\label{eq:symmetry}
\sum_ik_i\delta_{b_i,r} = \frac{2E}{B}.
\end{equation}
In other words, the sum of degrees inside each group must be the same
for every group. Note also that the expected degrees according to the
DC-PP model will be inconsistent with Eq.~\ref{eq:choices} if the above
condition is not met, i.e.
\begin{equation}
\avg{k_i} = \theta_i\left[\omega_{\text{in}}\sum_j\theta_j\delta_{b_i,b_j} + \omega_{\text{out}}\sum_j\theta_j(1-\delta_{b_i,b_j})\right].
\end{equation}
Substituting $\theta_i=k_i/\sqrt{2E}$ in the above equation will yield
in general $\avg{k_i}\ne k_i$, as long as Eq.~\ref{eq:symmetry} is not
fulfilled, \emph{regardless of how we choose $\omega_{\text{in}}$ and
$\omega_{\text{out}}$}.
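This inconsistency is simple to check numerically: with unequal group degree sums, no choice $\omega_{\text{in}}\neq\omega_{\text{out}}$ reproduces the observed degrees in expectation. The degrees, group assignments, and rates below are arbitrary illustrative values:

```python
# Direct check that theta_i = k_i/sqrt(2E) gives <k_i> != k_i whenever the
# per-group degree sums differ, regardless of w_in and w_out (illustrative
# degrees and rates only).
from math import sqrt, isclose

def expected_degrees(k, b, w_in, w_out):
    """<k_i> = theta_i [w_in * sum_j theta_j d(b_i,b_j)
                        + w_out * sum_j theta_j (1 - d(b_i,b_j))]."""
    two_E = sum(k)
    theta = [ki / sqrt(two_E) for ki in k]
    K = {}                                   # sum of theta within each group
    for ti, bi in zip(theta, b):
        K[bi] = K.get(bi, 0.0) + ti
    T = sum(theta)
    return [ti * (w_in * K[bi] + w_out * (T - K[bi])) for ti, bi in zip(theta, b)]

k = [3, 3, 3, 3, 1, 1, 1, 1]                 # unequal group degree sums: 12 vs 4
b = [0, 0, 0, 0, 1, 1, 1, 1]
for w_in, w_out in [(2.0, 0.5), (1.5, 0.8), (3.0, 0.1)]:
    avg_k = expected_degrees(k, b, w_in, w_out)
    print(all(isclose(a, ki) for a, ki in zip(avg_k, k)))   # False: inconsistent

# With equal group degree sums the consistency *can* be restored, e.g.:
avg_eq = expected_degrees([2] * 8, b, 1.5, 0.5)              # w_in + w_out = 2
print(all(isclose(a, 2.0) for a in avg_eq))                  # True
```

Indeed, solving the two consistency conditions for this example forces $\omega_{\text{in}}=\omega_{\text{out}}=1$, i.e. no community structure at all.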
Framing it differently, for any choice of $\omega_{\text{in}}$,
$\omega_{\text{out}}$ and $\bm\theta$ such that the sums
$\sum_i\theta_i\delta_{b_i,r}$ are not identical for every group $r$,
the DC-SBM likelihood $\ln
P(\bm{A}|\omega_{\text{in}},\omega_{\text{out}},\bm{\theta},\bm{b})$ is not
captured by $Q(\bm{A},\bm{b},\gamma)$ for any value of $\gamma$, and therefore
maximizing both functions will not yield the same results. That is, the
equivalence is only valid for special cases of the model \emph{and}
data. We show in Fig.~\ref{fig:equivalence} an example of an instance of
the DC-PP model where the generalized modularity yields results which
are inconsistent with using the likelihood of the DC-PP model directly.
\begin{figure}
\begin{tabular}{cc}
\includegraphicsl{(a)}{width=.5\textwidth}{figs/pp_overlap_r1.pdf}&
\includegraphicsl{(b)}{width=.5\textwidth}{figs/pp_overlap_r2.pdf}
\end{tabular}
\caption{Generalized modularity and the DC-PP model are only
equivalent if the symmetry of Eq.~\ref{eq:symmetry} is
preserved. Here we consider an instance of the DC-PP model with
$\omega_{\text{in}}=2Ec/N$, $\omega_{\text{out}}=2E(1-c)/\sum_{r\ne
s}\sqrt{n_rn_s}$, and $\theta_i=1/\sqrt{n_{b_i}}$, where $n_r$ is
the number of nodes in group $r$. The parameter $c\in[0,1]$ controls
the degree of assortativity. For non-uniform group sizes, the
symmetry of Eq.~\ref{eq:symmetry} is not preserved with this choice
of parameters. We use the parametrization $n_r =
N\alpha^{r-1}(1-\alpha)/(1-\alpha^B)$, where $\alpha > 0$ controls
the group size heterogeneity. When employing generalized modularity,
we choose the closest possible parameter choice with
$\omega_{\text{in}}=2Ec/({\sum_re_r^2/2E})$ and
$\omega_{\text{out}}=2E(1-c)/(2E - {\sum_re_r^2/2E})$, where
$e_r=\sum_ik_i\delta_{b_i,r}$. In (a) we show the inference results
for the uniform case with $\alpha\to 1$, where both approaches are
identical, performing equally well all the way down to the
detectability threshold~\cite{decelle_asymptotic_2011} (vertical
line). In (b) we show the result with $\alpha=2$, which leads to
unequal group sizes, causing the behavior between both approaches to
diverge. In all cases we consider averages over 5 networks with
$N=10^4$ nodes, average degree $2E/N=3$, and $B=10$ groups.
\label{fig:equivalence}}
\end{figure}
Because of the above caveats, we have to treat the claimed equivalence
with a grain of salt. In general there are only three scenarios we may
consider when analysing a network:
\begin{enumerate}
\item We know that the network has been sampled from the DC-PP model, as
well as
the correct number of groups $B$ and the values of the
parameters $\omega_{\text{in}}$,
$\omega_{\text{out}}$, and $\bm\theta$, and the following symmetry
exists:
\begin{equation}\label{eq:symmetry_case}
\sum_i\theta_i\delta_{b_i,r} = C,
\end{equation}
where $C$ is a constant.
\item Like the first case, but where the symmetry of
Eq.~\ref{eq:symmetry_case} does not exist.
\item Every other situation.
\end{enumerate}
Cases 1 and 2 are highly idealized and are not expected to be
encountered in practice, which almost always falls in case
3. Nevertheless, the equivalence between the DC-PP model and generalized
modularity is only valid in case 1. In case 2, as we already discussed,
the use of generalized modularity will be equivalent to \emph{some}
generative model --- as all methods are --- but which cannot be expressed
within the DC-PP parametrization.
Because of the above problems, the relevance of this partial equivalence
between these approaches in practical scenarios is arguably dubious. It
serves only to demonstrate how the implicit assumptions behind
modularity maximization are hard to justify.
We emphasize also the obvious fact that even if the equivalence with the
DC-PP model were to hold more broadly, this would not make the
pathological behavior of modularity described in
Sec.~\ref{sec:modularity} disappear. Instead, it would only show that
this particular inferential method would \emph{also} be pathological. In
fact, it is well understood that maximum likelihood is not in general an
appropriate inferential approach for models with an arbitrarily large
number of degrees of freedom, since it lacks the regularization
properties of Bayesian methods~\cite{peixoto_bayesian_2019}, such as the
one we described in Sec.~\ref{sec:infer}, where instead of considering
point estimates of the parameters, we integrate over all possibilities,
weighted according to their prior probability. In this way, it is
possible to \emph{infer} the number of communities, instead of assuming
it \emph{a priori}, together with all other model parameters. In fact,
when such a Bayesian approach is employed for the DC-PP model, one
obtains the following marginal likelihood~\cite{zhang_statistical_2020},
\begin{align}
P(\bm{A}|\bm{b}) &= \int P(\bm{A}|\omega_{\text{in}},\omega_{\text{out}},\bm\theta,\bm{b})P(\omega_{\text{in}})P(\omega_{\text{out}})P(\bm\theta|\bm{b})\,\mathrm{d}\omega_{\text{in}}\mathrm{d}\omega_{\text{out}}\mathrm{d}\bm\theta\\
&= \frac{e_{\text{in}}!e_{\text{out}}!}
{\left(\frac{B}{2}\right)^{e_{\text{in}}}{B\choose 2}^{e_{\text{out}}}(E+1)^{1-\delta_{B,1}}}
\prod_r\frac{(n_r-1)!}{(e_r+n_r-1)!}\times\frac{\prod_ik_i!}{\prod_{i<j}A_{ij}!\prod_i A_{ii}!!},\label{eq:pp_marginal}
\end{align}
where $e_{\text{in}}=\sum_{i<j}A_{ij}\delta_{b_i,b_j}$ and
$e_{\text{out}}=E-e_{\text{in}}$. As demonstrated in
Ref.~\cite{zhang_statistical_2020}, this approach allows us to detect
purely assortative community structures in a nonparametric way, in a
manner that prevents both overfitting and underfitting --- i.e. the
resolution limit \emph{vanishes} since we inherently consider every
possible value of the parameters $\omega_{\text{in}}$ and
$\omega_{\text{out}}$ --- thus lifting two major limitations of
modularity. Note also that Eq.~\ref{eq:pp_marginal} (or its logarithm)
does not bear any direct resemblance to the modularity function, and
therefore it does not seem possible to reproduce its behavior via a
simple modification of the latter.\footnote{There is also no need to
``fix'' modularity. We can simply use Eq.~\ref{eq:pp_marginal} in its
place for most algorithms, which incurs almost no additional
computational overhead.}
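For concreteness, Eq.~\ref{eq:pp_marginal} is straightforward to evaluate numerically. The following sketch (in Python, restricted to simple graphs so that the $A_{ij}!$ and $A_{ii}!!$ terms are all unity) computes its logarithm for a given partition; the function name and the edge-list representation are our own illustrative choices.

```python
from math import lgamma, log, comb

def log_marginal_pp(edges, b):
    """Log of Eq. (pp_marginal): marginal likelihood of the DC-PP model,
    for a simple undirected graph given as an edge list, with b mapping
    node -> group label.  Assumes no self-loops or multi-edges."""
    groups = sorted(set(b.values()))
    B, E = len(groups), len(edges)
    k = {i: 0 for i in b}          # degrees k_i
    e_r = {r: 0 for r in groups}   # sum of degrees in group r
    e_in = 0
    for (i, j) in edges:
        k[i] += 1; k[j] += 1
        e_r[b[i]] += 1; e_r[b[j]] += 1
        if b[i] == b[j]:
            e_in += 1
    e_out = E - e_in
    n_r = {r: 0 for r in groups}   # group sizes n_r
    for i in b:
        n_r[b[i]] += 1
    # ln e_in! + ln e_out! minus the denominator terms
    L = lgamma(e_in + 1) + lgamma(e_out + 1)
    L -= e_in * log(B / 2)
    if e_out:
        L -= e_out * log(comb(B, 2))
    if B > 1:
        L -= log(E + 1)            # (E+1)^{1 - delta_{B,1}}
    # prod_r (n_r - 1)! / (e_r + n_r - 1)!
    for r in groups:
        L += lgamma(n_r[r]) - lgamma(e_r[r] + n_r[r])
    # prod_i k_i!  (A_ij! and A_ii!! are 1 for a simple graph)
    for i in b:
        L += lgamma(k[i] + 1)
    return L
```

For a single edge, grouping both endpoints together gives $P(\bm{A}|\bm{b})=1/3$, while splitting them gives $1/2$, which can be checked against Eq.~\ref{eq:pp_marginal} by hand.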
We also mention briefly a result obtained by Bickel and
Chen~\cite{bickel_nonparametric_2009}, which states that modularity
maximization can consistently identify the community assignments of
networks generated by the SBM in the dense limit. This limit corresponds
to networks where the average number of neighbors is comparable to the
total number of nodes. In this situation, the community detection
problem becomes substantially easier, and many algorithms, including
e.g. unregularized spectral clustering, can do just as well as
modularity maximization. This result tells us more about how easy it is
to find communities in dense networks than about the quality of the
algorithms compared. The dense scenario does not represent well the
difficulty of finding communities in real networks, which are
overwhelmingly sparse, with an average degree much smaller than the
total number of nodes. In the sparse case, likelihood-based inferential
approaches are optimal and outperform
modularity~\cite{bickel_nonparametric_2009,
decelle_asymptotic_2011}. Similar equivalences have also been
found with spectral methods~\cite{newman_spectral_2013}, but they
also rely on particular realizations of the community detection problem,
and do not hold in general.
In short, if the objective is to infer the DC-PP model, there is no
reason to do it via the maximization of $Q(\bm{A},\bm{b},\gamma)$, nor is it in
general equivalent to any consistent inference approach such as maximum
likelihood or Bayesian posterior inference. Even in the unlikely case
where the true number of communities is known, the implicit assumptions
of modularity correspond to the DC-PP model not only with uniform
probabilities between communities but also uniform sums of degrees for
every community. If these properties are not present in the network, the
method offers no inherent diagnostic, and will find spurious structures
that tend to match it, regardless of their statistical
significance. Combined with the overall lack of regularization, these
features render the method substantially prone to distortion and
overfitting. Ultimately, the use of any form of modularity maximization
fails the litmus test we considered earlier, and should be considered a
purely descriptive community detection method. Whenever the objective is
to understand network structure, it needs to be replaced with a flexible
and robust inferential procedure.
\subsection{``Consensus clustering can eliminate overfitting.''}\label{sec:consensus}
As mentioned in Sec.~\ref{sec:modularity}, methods like modularity
maximization tend to have a degenerate solution landscape. One strategy
proposed to tackle this problem is to obtain a \emph{consensus
clustering}, i.e. leverage the entire landscape of solutions to produce
a single partition that points in a cohesive direction, representative
of the whole
ensemble~\cite{massen_thermodynamics_2006,lancichinetti_consensus_2012,riolo_consistency_2020}. If
no cohesive direction exists, one could then conclude that no actual
community structure exists, and therefore solve the overfitting problem
of finding communities in maximally random networks. In reality, however, a
descriptive community detection method can display a cohesive set of
solutions even on a maximally random network. We demonstrate this in
Fig.~\ref{fig:consensus}, which shows the consensus between $10^5$
different maximum modularity solutions for a small random network, using
the method of Ref.~\cite{peixoto_revealing_2021} to obtain the
consensus. Although we can notice a significant variability between the
different partitions, there is also substantial agreement. In
particular, there is no clear indication from the consensus that the
underlying network is maximally random. The reason for this is that the
randomness of the network is \emph{quenched}, and does indeed point to a
specific community structure with the highest modularity. The ideas of
solution heterogeneity and overfitting are, in general, orthogonal
concepts.
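The underlying overfitting is easy to reproduce. The short sketch below (Python, assuming the \texttt{networkx} library is available) runs a standard modularity-maximization heuristic on a maximally random Erd\H{o}s-Rényi graph, and reports several communities with a sizable modularity score, despite the complete absence of planted structure.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# A maximally random G(N, p) graph with N = 100 and average degree ~5.
G = nx.gnp_random_graph(100, 0.05, seed=42)

# A modularity-maximization heuristic nonetheless reports several
# communities with a substantial value of Q.
part = greedy_modularity_communities(G)
print(len(part), modularity(G, part))
```

Any other maximization heuristic (e.g. Louvain) displays the same behavior; the choice of greedy agglomeration here is only for reproducibility.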
\FloatBarrier
\begin{figure}
\begin{tabular}{c}
\includegraphics[width=.5\textwidth]{figs/random_consensus.pdf}
\end{tabular}
\caption{Consensus clustering of a maximally random network,
sampled from the Erd\H{os}-Rényi model, that combines $10^5$
solutions of the maximum modularity method. On each node there is a
pie chart describing the frequencies with which it was observed in a
given community, obtained using the approach described in
Ref.~\cite{peixoto_revealing_2021}. Despite the lack of latent
communities, there is a substantial agreement between the different
answers.
\label{fig:consensus}}
\end{figure}
With care, it is possible to probe the solution landscape in a manner
that reveals a signal of the randomness of the underlying network. For
this purpose, some authors have proposed that instead of finding the
maximum modularity partition, one instead samples them from the Gibbs
distribution~\cite{massen_thermodynamics_2006,reichardt_when_2006,hu_phase_2012,zhang_scalable_2014},
\begin{equation}
P(\bm{b}) = \frac{\mathrm{e}^{\beta Q(\bm{A},\bm{b})}}{Z(\bm{A})},
\end{equation}
with normalization $Z(\bm{A})=\sum_{\bm{b}}\mathrm{e}^{\beta Q(\bm{A},\bm{b})}$, effectively
considering $Q(\bm{A},\bm{b})$ as the Hamiltonian of a spin system with an
inverse temperature parameter $\beta$. For a sufficiently large random
network, there is a particular value $\beta=\beta^*$, below which
samples from the distribution become uncorrelated, signaling a lack of
consensus~\cite{zhang_scalable_2014}. There is a problem, however: there
is no guarantee that if a lack of consensus exists for $\beta<\beta^*$,
then the network must be random; only the reverse is true. In general,
while statements can be made about the behavior of the modularity
landscape for maximally random and sufficiently large networks, or even for
networks sampled from a SBM, very little can be said about its behavior
on real, finite networks. Since real networks are likely to contain a
heterogeneous mixture of randomness and structure (e.g. as illustrated in
Fig.~\ref{fig:resolution}(c)) this kind of approach becomes ultimately
unreliable. One fundamental problem here is that these approaches
attempt to reach an inferential conclusion (``is the network sampled
from a random model?'') without fully going through Bayes' formula of
Eq.~\ref{eq:bayes}, and reasoning about model assumptions, prior
information and compressibility. We currently lack a principled
methodology to reach such a conclusion while avoiding these crucial
steps.
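Sampling from this Gibbs distribution can be sketched with a simple single-node Metropolis scheme, as below (Python with \texttt{networkx}). This is a deliberately naive illustration: it recomputes $Q$ from scratch at every move, which is $O(E)$ per step, unlike the efficient incremental samplers of Ref.~\cite{zhang_scalable_2014}.

```python
import math, random
import networkx as nx
from networkx.algorithms.community import modularity

def sample_gibbs(G, B, beta, sweeps=20, seed=0):
    """Metropolis sampling of partitions from P(b) ~ exp(beta * Q(A, b)),
    using single-node label moves among B labels.  Minimal sketch only."""
    rng = random.Random(seed)
    nodes = list(G)
    b = {v: rng.randrange(B) for v in nodes}

    def Q(b):
        groups = {}
        for v, r in b.items():
            groups.setdefault(r, set()).add(v)
        return modularity(G, groups.values())

    q = Q(b)
    for _ in range(sweeps * len(nodes)):
        v = rng.choice(nodes)
        old, new = b[v], rng.randrange(B)
        if new == old:
            continue
        b[v] = new
        q_new = Q(b)
        # Accept with the Metropolis probability min(1, e^{beta * dQ}).
        if rng.random() < min(1.0, math.exp(beta * (q_new - q))):
            q = q_new
        else:
            b[v] = old
    return b
```

Large $\beta$ concentrates the samples near the modularity maximum, while $\beta<\beta^*$ produces uncorrelated partitions, as discussed above.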
Another aspect of the relationship between consensus clustering and
overfitting is worth mentioning. In an inferential setting, if we wish
to obtain an estimator for the true partition $\hat\bm{b}$, this will in
general depend on how we evaluate its accuracy. In other words, we must
define an error function $\epsilon(\bm{b}',\bm{b})$ such that
\begin{equation}
\bm{b} = \underset{\bm{b}'}{\operatorname{argmin}}\; \epsilon(\bm{b}',\bm{b}).
\end{equation}
Based on this, our best possible estimate is the one which minimizes the
average error over the entire posterior distribution,
\begin{equation}
\hat\bm{b} = \underset{\bm{b}'}{\operatorname{argmin}}\; \sum_{\bm{b}} \epsilon(\bm{b}',\bm{b}) P(\bm{b}|\bm{A}).
\end{equation}
Note that in general this estimator will be different from the most
likely partition, i.e.
\begin{equation}
\hat\bm{b} \neq \underset{\bm{b}}{\operatorname{argmax}}\; P(\bm{b}|\bm{A}).
\end{equation}
The optimal estimator $\hat\bm{b}$ will indeed correspond to a consensus
over all possible partitions, weighted according to their
plausibility. In situations where the posterior distribution is
concentrated on a single partition, both estimators will
coincide. Otherwise, the most likely partition might in fact be less
accurate and incorporate more noise than the consensus estimator, which
might be seen as a form of overfitting. This kind of overfitting is of a
different nature than the one we have considered so far, since it
amounts to a residual loss of accuracy, where an (often small) fraction
of the nodes end up incorrectly classified, instead of spurious groups
being identified. However, there are many caveats to this kind of
analysis. First, it will be sensitive to the error function chosen,
which needs to be carefully justified. Second, there might be no
cohesive consensus, in situations where the posterior distribution is
composed of several distinct ``modes,'' each corresponding to a
different hypothesis for the network. In such a situation the consensus
between them might be unrepresentative of the ensemble of
solutions. There are principled approaches to deal with this problem, as
described in
Refs.~\cite{peixoto_revealing_2021,kirkley_representative_2022}.
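As a minimal illustration of the consensus estimator: if the error function $\epsilon(\bm{b}',\bm{b})$ counts the number of misclassified nodes, the optimal $\hat{\bm{b}}$ reduces to a node-wise majority vote over posterior samples. The sketch below (Python) assumes the group labels of the different samples have already been aligned with each other, which is itself a nontrivial step~\cite{peixoto_revealing_2021}.

```python
from collections import Counter

def marginal_consensus(partitions):
    """Estimator minimizing the expected number of misclassified nodes
    over a list of label-aligned partition samples (dicts node -> group):
    each node takes its most frequent label across the samples."""
    nodes = partitions[0].keys()
    return {v: Counter(p[v] for p in partitions).most_common(1)[0][0]
            for v in nodes}
```

When the posterior is multimodal, this node-wise average can be unrepresentative of every mode, which is precisely the caveat discussed above.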
\subsection{``Overfitting can be tackled by doing a statistical significance test of the quality function.''}\label{sec:significance}
Sometimes practitioners are aware that non-inferential methods like
modularity maximization can find communities in random networks. In an
attempt to extract an inferential conclusion from their results, they
compare the value of the quality function with a randomized version of
the network --- and if a significant discrepancy is found, they conclude
that the community structure is statistically
meaningful~\cite{reichardt_when_2006}. Unfortunately, this approach is
as fundamentally flawed as it is straightforward to implement.
The reason why the test fails is because in reality it answers a
question that is different from the one intended. When we compare the
value of the quality function obtained from a network and its randomized
counterpart, we can use this information to answer \emph{only} the
following question: ``Can we reject the hypothesis that the observed
network was sampled from a random null model?'' No other information can
be obtained from this test, including whether the \emph{network
partition} we obtained is significant. All we can determine is if the
optimized value of the quality function is significant or not. The
distinction between the significance of the quality function value and
the network partition itself is subtle but crucial. \FloatBarrier
\begin{figure}
\begin{tabular}{ccc}
\includegraphicsl{(a)}{width=.33\textwidth}{figs/modularity-null-N100-ak3.pdf}&
\includegraphicslp{(b)}{.87}{width=.33\textwidth}{figs/modularity-null-N100-ak3-partition.pdf}&
\includegraphicslp{(c)}{.87}{width=.33\textwidth}{figs/modularity-null-N100-ak3-partition-sbm.pdf}
\end{tabular} \caption{The statistical significance of the maximum
modularity value is not informative of the significance of the
community structure. In (a) we show the distribution of optimized
values of modularity for networks sampled from the Erd\H{o}s-Rényi
(ER) model with the same number of nodes and edges as the network
shown in (b) and (c). The vertical line shows the value obtained for
the partition shown in (b), indicating that the network is very
unlikely to have been sampled from the ER model ($P=0.002$). However,
what sets this network apart from typical samples is the existence of
a small clique of six nodes that would not occur in the ER model. The
remaining communities found in (b) are entirely meaningless. In (c) we
show the result of inferring the SBM on this network, which perfectly
identifies the planted clique without overfitting the rest of the
network.
\label{fig:modularity_null}}
\end{figure}
We illustrate the above difference with an example in
Fig.~\ref{fig:modularity_null}(b). This network is created by starting
with a maximally random Erd\H{o}s-Rényi (ER) network, and adding to it a
few more edges so that it has an embedded clique of six nodes. The
occurrence of such a clique from an ER model is very unlikely, so if we
perform a statistical test on this network that is powerful enough, we
should be able to rule out that it came from the ER model with good
confidence. Indeed, if we use the value of maximum modularity for this
test, and compare with the values obtained for the ER model with the
same number of nodes and edges (see Fig.~\ref{fig:modularity_null}(a)),
we are able to reach the correct conclusion that the null model should
be rejected, since the optimized value of modularity is significantly
higher for the observed network. Should we conclude therefore that the
communities found in the network are significant? If we inspect
Fig.~\ref{fig:modularity_null}(b), we see that the maximum value of
modularity indeed corresponds to a more-or-less decent detection of the
planted clique. However, it also finds another seven completely spurious
communities in the random part of the network. What is happening is
clear --- the planted clique is enough to increase the value of $Q$ such
that it becomes a suitable test to reject the null model,\footnote{Note
that it is possible to construct alternative examples, where instead of
planting a clique, we instead introduce triangles, or other
features that are known to increase the value of modularity, but that do
not correspond to an actual community
structure~\cite{foster_clustering_2011}.} but the test is not able to
determine that the communities themselves are statistically
meaningful. In short, the statement ``the value of $Q$ is significant''
is not synonymous with ``the network partition is significant.''
Conflating the two will lead to the wrong conclusion about the
significance of the communities uncovered.
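The test just described can be sketched as follows (Python with \texttt{networkx}, using a greedy heuristic as a stand-in for exact maximization). The point of Fig.~\ref{fig:modularity_null} is that a small p-value from this procedure licenses only the rejection of the null model, not the significance of the partition.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def max_modularity(G):
    # Heuristic stand-in for the optimal Q; any maximizer would do here.
    return modularity(G, greedy_modularity_communities(G))

def er_null_pvalue(G, samples=200, seed=0):
    """Fraction of G(N, E) null samples whose optimized modularity reaches
    the observed one.  A small p-value only rejects the null model; it
    says nothing about whether the uncovered *partition* is significant."""
    N, E = G.number_of_nodes(), G.number_of_edges()
    q_obs = max_modularity(G)
    q_null = [max_modularity(nx.gnm_random_graph(N, E, seed=seed + s))
              for s in range(samples)]
    return sum(q >= q_obs for q in q_null) / samples
```

Running this on the network of Fig.~\ref{fig:modularity_null}(b) would yield a small p-value driven entirely by the planted clique, while the seven spurious communities contribute nothing interpretable.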
In Fig.~\ref{fig:modularity_null}(c) we show the result of a more
appropriate inferential approach, based on the SBM as described in
Sec.~\ref{sec:inference}, that attempts to answer a much more relevant
question: ``which partition of the network into groups is more likely?''
The result is able to cleanly separate the planted clique from the rest
of the network, which is grouped into a single community.
This example also shows how the task of rejecting a null model is very
oblique to Bayesian inference of generative models. The former attempts
to determine what the network \emph{is not}, while the latter what
\emph{it is}. The first task tends to be easy --- we usually do not
need very sophisticated approaches to determine that our data did not
come from a null model, especially if our data is complex. On the other
hand, the second task is far more revealing, constructive, and arguably
more useful in general.
\subsection{``Setting the resolution parameter of modularity maximization can remove the resolution limit.''}\label{sec:resolution}
The resolution limit of the generalized modularity of
Eq.~\ref{eq:Qgamma} is such that, in a connected network, no more than
$\sqrt{2\gamma E}$ communities can be found, with $\gamma$ being the
resolution
parameter~\cite{reichardt_statistical_2006,arenas_analysis_2008}. Therefore,
by changing the value of $\gamma$, we can induce the discovery of
modules of arbitrary size, at least in principle. However, there are
several underlying problems with tuning the value of $\gamma$ for the
purpose of counter-acting the resolution limit. The first is that it
requires a specific prior knowledge about what would be the relevant
scale for a particular network --- which is typically unavailable ---
turning an otherwise nonparametric approach into one which is
parametric.\footnote{We emphasize that the maximum likelihood approach
proposed in Ref.~\cite{newman_equivalence_2016} to determine $\gamma$,
even ignoring the caveats discussed in Sec.~\ref{sec:equivalence} that
render it invalid unless very specific conditions are met, is only
applicable for situations when the number of groups is known, directly
undermining its use to counteract the resolution limit.} The second
problem is even more serious: In many cases no single value of $\gamma$
is appropriate. This happens because, as we have seen in
Sec.~\ref{sec:equivalence}, generalized modularity comes with the
built-in assumption that the sum of degrees of every group should be the
same. The preservation of this homogeneity means that when the network
is composed of communities of different sizes, either the smaller ones
will be merged together or the bigger ones will be split into smaller
ones, regardless of the statistical
evidence~\cite{lancichinetti_limits_2011}. We show a simple example of
this in Fig.~\ref{fig:resolution_param}, where no value of $\gamma$ can
be used to recover the correct partition.
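The behavior shown in Fig.~\ref{fig:resolution_param} can be probed with a simple sweep of the resolution parameter (Python with \texttt{networkx}; the planted network below, with two groups of very different sizes, is only an illustrative stand-in for the example in the figure).

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Two planted groups of heterogeneous sizes (60 and 10 nodes), with dense
# intra-group and sparse inter-group connectivity.
G = nx.random_partition_graph([60, 10], 0.25, 0.01, seed=3)

# Sweeping gamma changes both the number and the sizes of the groups
# found; with heterogeneous group sizes, no single value need recover
# the planted partition.
for gamma in (0.5, 1.0, 2.0, 4.0):
    part = greedy_modularity_communities(G, resolution=gamma)
    print(gamma, sorted(len(c) for c in part))
```

Typically, by the time $\gamma$ is large enough to resolve the small group, the large one has already been split into spurious pieces, mirroring panels (b) to (e) of the figure.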
\begin{figure}
\begin{tabular}{ccc}
\multirow{4}{*}[10em]{\includegraphicslp{(a)}{.95}{width=.33\textwidth}{figs/resolution_param.pdf}}&
\includegraphicslp{(b)}{.87}{width=.33\textwidth}{figs/modularity-resolution-mixed-gamma0.41025641025641024.pdf}&
\includegraphicslp{(c)}{.87}{width=.33\textwidth}{figs/modularity-resolution-mixed-gamma1.0256410256410255.pdf}\\
&\smaller$\gamma=0.41$ & \smaller$\gamma=1.02$\\
&\includegraphicslp{(d)}{.87}{width=.33\textwidth}{figs/modularity-resolution-mixed-gamma2.051282051282051.pdf}&
\includegraphicslp{(e)}{.87}{width=.33\textwidth}{figs/modularity-resolution-mixed-gamma2.564102564102564.pdf}\\
&\smaller$\gamma=2.05$ & \smaller$\gamma=2.56$
\end{tabular}
\caption{Modularity maximization imposes characteristic community
sizes in a manner that hides heterogeneity. Panel (a) shows the
overlap between the true and obtained partition for the network
described in Fig.~\ref{fig:resolution}, as a function of the
resolution parameter $\gamma$. Panels (b) to (e) show the partitions
found for different values of $\gamma$, where we see that as smaller
groups are uncovered, bigger ones are spuriously split. The result is
that no value of $\gamma$ allows the true communities to be
uncovered.\label{fig:resolution_param}}
\end{figure}
However, the most important problem with the analysis of the resolution
limit in the context of modularity maximization is that it is often
discussed in a manner that is largely decoupled from the issue of
statistical significance. Since we can interpret a limit on the maximum
number of groups as a type of systematic underfitting, we can only
meaningfully discuss the removal of this limitation if we also do not
introduce a tendency to \emph{overfit}, i.e. find more groups than
justifiable by statistical evidence. This is precisely the problem with
``multiresolution'' approaches~\cite{granell_hierarchical_2012}, or
analyses of quality functions other than
modularity~\cite{kawamoto_estimating_2015}, that claim a reduced or a
lack of resolution limit, but without providing robustness against
overfitting. This one-sided evaluation is fundamentally incomplete, as
we may end up trading one serious limitation for another.
Methods based on the Bayesian inference of the SBM can tackle the issue
of over- and underfitting, as well as preferred sizes of communities at
the source. As was shown in Ref.~\cite{peixoto_parsimonious_2013}, an
uninformative assumption about the mixing patterns between groups leads
naturally to a resolution limit similar to the one existing for
modularity, where no more than $O(\sqrt{N})$ groups can be inferred for
sparse networks. However, since in an inferential context our
assumptions are made explicitly, we can analyse them more easily and
come up with more appropriate choices. In
Ref.~\cite{peixoto_hierarchical_2014} it was shown how replacing the
noninformative assumption by a Bayesian hierarchical model can
essentially remove the resolution limit, with a maximum number of groups
scaling as $O(N/\log N)$. That model is still unbiased with respect to
the expected mixing patterns, and incorporates only the assumption that
the patterns themselves are generated by another SBM, with its own
patterns generated by yet another SBM, and so on recursively. Another
model that has also been shown to be free of the resolution limit is the
assortative SBM of Ref.~\cite{zhang_statistical_2020}. Importantly, in
both these cases the removal of the resolution limit is achieved without
sacrificing the capacity of the method to avoid overfitting --- e.g. none of
these approaches will find spurious groups in random networks.
The issue with preferred group sizes can also be tackled in a principled
way in an inferential setting. As demonstrated in
Ref.~\cite{peixoto_nonparametric_2017}, we can also design Bayesian
prior hierarchies where the group size distribution is chosen in a
non-informative manner, before the partition itself is determined. This
results in an inference method that is by design agnostic with respect
to the distribution of group sizes, and will not prefer any of them in
particular. Such a method can then be safely employed on networks with
heterogeneous group sizes in an unbiased manner. In
Fig.~\ref{fig:resolution}(d) we show how such an approach can easily
infer groups of different sizes for the same example of
Fig.~\ref{fig:resolution_param}, in a completely nonparametric manner.
\subsection{``Modularity maximization can be fixed by replacing the null model.''}\label{sec:null}
Several variations of the method of modularity maximization have been
proposed, where instead of the configuration model, another null model
is used, in a manner that makes the method applicable in various
scenarios, e.g. with bipartite networks~\cite{barber_modularity_2007},
correlation matrices~\cite{macmahon_community_2015}, signed edge
weights~\cite{traag_community_2009}, networks embedded in euclidean
spaces~\cite{expert_uncovering_2011}, to name a few. While the choice of
null model has an important effect on what kind of structures are
uncovered, its choice does not address any of the statistical
shortcomings of modularity that we consider here. In general, just like
it happens for the configuration model, the approach will find spurious
communities in networks sampled from its null model, regardless of how
it is chosen. As discussed in Sec.~\ref{sec:modularity}, this happens
because the measured deviation does not account for the optimization
procedure employed. Any method based on optimizing the modularity score
will amount to a data dredging procedure, independently of the null
model chosen, and is thus unsuitable for inferential aims.
\subsection{``Descriptive approaches are good enough when the community structure is obvious.''}\label{sec:obvious}
A common argument goes that, sometimes, the community structure of a
network is so ``obvious'' that it will survive whatever abuse we direct
at it, and it will be uncovered by a majority of community detection
methods that we employ. Therefore, if we are confident that our network
contains a clear signal of its community structure, especially if several
algorithms substantially agree with each other, or they agree with
metadata, then it does not matter very much which algorithm we use.
There are several problems with this argument. First, if an ``obvious''
structure exists, it does not necessarily mean that it is really
meaningful, or statistically significant. If ten algorithms overfit, and
one does not, the majority vote is incorrect, and we should prefer the
minority opinion. This is precisely the case we considered in
Fig.~\ref{fig:descriptive}, where virtually any descriptive method would
uncover the same 13 communities --- thus overfitting the network ---
while an inferential approach would not. And if a method agrees with
metadata, while another finds further structure not in agreement, what
is to say that this structure is not really there? (Metadata are not
``ground truth,'' they are only more
data~\cite{hric_network_2016,newman_structure_2016,peel_ground_2017},
and hence can have their own complex, incomplete, noisy, or even
irrelevant relationship with the network.)
Secondly, and even more importantly, how do we even define what is an
``obvious'' community structure? In general, networks are not low
dimensional objects, and we lack methods to inspect their structure
directly, a fact which largely motivates community detection in the
first place. Positing that we can just immediately determine community
structure largely undermines this fact. Often, structure which is deemed
``obvious'' at first glance, ceases to be so upon closer inspection. For
example, one can find claims in the literature that different connected
components must ``obviously'' correspond to different
communities. However, maximally random graphs can end up disconnected if
they are sufficiently sparse, which means that from an inferential point
of view different components can belong to the same community.
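This is simple to verify: a maximally random Erd\H{o}s-Rényi graph at mean degree one is almost surely fragmented into many components (Python with \texttt{networkx}).

```python
import networkx as nx

# A maximally random G(N, p) graph with mean degree <k> = 1.  Sparse ER
# graphs below the percolation threshold at <k> = 1 are disconnected with
# high probability, so distinct components need not be distinct communities.
N = 1000
G = nx.gnp_random_graph(N, 1.0 / N, seed=0)
print(nx.number_connected_components(G))
```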
Another problem is that analyses of community detection results rely
frequently on visual inspections of graphical network layouts, where one
tries to evaluate if the community labels agree with the position of the
nodes. However, the positioning of the nodes and edges is not inherent
to the network itself, and needs to be obtained with some graph drawing
algorithm. A typical example are the so-called ``spring-block'' or
``force-directed'' layouts, where one considers attractive forces
between nodes connected by an edge (like a spring) and an overall
repulsive force between all nodes~\cite{hu_efficient_2005}. The final
layout is then obtained by minimizing the energy of the system,
resulting in edges that have similar length and as few crossings between
edges as possible (e.g. in Fig.~\ref{fig:infvsdesc} we used the
algorithm of Ref.~\cite{hu_efficient_2005}). This kind of drawing in
itself can be seen as a type of indirect descriptive community detection
method, since nodes belonging to the same assortative community will
tend to be placed close to each
other~\cite{noack_modularity_2009}. Based on this observation, when we
say that we ``see'' the communities in a drawing like in
Fig.~\ref{fig:infvsdesc}, we are in reality only seeing what the layout
algorithm is telling us. Therefore, we should always be careful when
comparing the results we get with a community detection algorithm to the
structure we see in these layouts, because there is no reason to assume
that the layout algorithm itself is doing a better job than the
clustering algorithm we are evaluating.\footnote{Indeed, if we inspect
Fig.~\ref{fig:consensus}, which shows the consensus clustering of a
maximally random network, we notice that nodes that are classified in the
same community end up close together in the drawing, i.e. the layout
algorithm also agrees with the modularity consensus. Therefore, it
should not be used as a ``confirmation'' of the structure any more than
the result of any other community detection algorithm, since it is also
overfitting from an inferential perspective.} In fact, this is often not
the case, since the actual community structures in many networks do not
necessarily have a sufficiently low-dimensional representation that is
required for this kind of visualization to be effective.
\subsection{``The no-free-lunch theorem means that every community detection method is equally good.''}
For a wide class of optimization and learning problems there exist
so-called ``no-free-lunch'' (NFL) theorems, which broadly state that
when averaged over all possible problem instances, all algorithms show
equivalent
performance~\cite{wolpert_no_1995,wolpert_lack_1996,wolpert_no_1997}. Peel
\emph{et al.}~\cite{peel_ground_2017} have proved that this is also valid
for the problem of community detection, meaning that no single method
can perform systematically better than any other, when averaged over all
community detection problems. This has been occasionally interpreted as
a reason to reject the claim that we should systematically prefer
certain classes of algorithms over others. This is, however, a
misinterpretation of the theorem, as we will now discuss.
The NFL theorem for community detection is easy to state. Let us
consider a generic deterministic community detection algorithm indexed
by $f$, defined by the function $\hat\bm{b}_f(\bm{A})$, which ascribes a single
partition to a network $\bm{A}$. Peel \emph{et al.}~\cite{peel_ground_2017}
consider an instance of the community detection problem to be an
arbitrary pair $(\bm{A},\bm{b})$ composed of a network $\bm{A}$ and the correct
partition $\bm{b}$ that one wants to find from $\bm{A}$. We can evaluate the
accuracy of the algorithm $f$ via an error (or ``loss'') function
\begin{equation}
\epsilon (\bm{b}, \hat\bm{b}_f(\bm{A})),
\end{equation}
which should take the smallest possible value if $\hat\bm{b}_f(\bm{A}) =
\bm{b}$. If the error function does not have an inherent preference for any
partition (it's ``homogeneous''), then the NFL theorem
states~\cite{wolpert_lack_1996,peel_ground_2017}
\begin{equation}\label{eq:nfl}
\sum_{(\bm{A}, \bm{b})}\epsilon (\bm{b}, \hat\bm{b}_f(\bm{A})) = \Lambda(\epsilon),
\end{equation}
where $\Lambda(\epsilon)$ is a value that depends only on the error
function chosen, but not on the community detection algorithm $f$. In
other words, when averaged over all problem instances, all algorithms
have the same accuracy. This implies, therefore, that in order for one
class of algorithms to perform systematically better than another, we
need to restrict the universe of problems to a particular subset. This
is a seemingly straightforward result, but which is unfortunately very
susceptible to misinterpretation and overstatement.
A common criticism of this kind of NFL theorem is that it is a poor
representation of the typical problems we may encounter in real domains
of application, which are unlikely to be uniformly distributed across
the entire problem space. Therefore, as soon as we constrain ourselves
to a subset of problems that are relevant to a particular domain, then
this will favor some algorithms over others --- but then no algorithm
will be superior for all domains. But since we are typically only
interested in some domains, the NFL theorem is then arguably
``theoretically sound, but practically
irrelevant''~\cite{schaffer_conservation_1994}. Although indeed correct,
in the case of community detection this logic is arguably an
understatement. This is because as soon as we restrict our domain to
community detection problems that reveal something \emph{informative}
about the network structure, then we are out of reach of the NFL
theorem, and some algorithms will do better than others, without evoking
any particular domain of application. We demonstrate this in the
following.
The framework of the NFL theorem of Ref.~\cite{peel_ground_2017}
operates on a liberal notion of what constitutes a community detection
problem and its solution: for an arbitrary pair $(\bm{A},\bm{b})$, solving
the problem means choosing the right $f$ such that $\hat\bm{b}_f(\bm{A})=\bm{b}$. Under this
framework, algorithms are just arbitrary mappings from network to
partition, and there is no necessity to articulate more specifically how
they relate to the structure of the network --- community detection just
becomes an arbitrary game of ``guess the hidden node labels.'' This
contrasts with how actual community detection algorithms are proposed,
which attempt to match the node partitions to patterns in the network,
e.g. assortativity, general connection preferences between groups,
etc. Although the large variety of algorithms proposed for this task
already reveal a lack of consensus on how to precisely define it, few
would consider it meaningful to leave the class of community detection
problems so wide open as to accept any matching between an arbitrary
network and an arbitrary partition as a valid instance.
Even though we can accommodate any (deterministic) algorithm deemed
valid according to any criterion under the NFL framework, most
algorithms in this broader class do something else altogether. In fact,
the overwhelming majority of them correspond to a maximally random
matching between network and partition, which amounts to little more
than just randomly guessing a partition for any given network, i.e. they
return widely different partitions for inputs that are very similar, and
overall point to no correlation between input and output.~\footnote{An
interesting exercise is to count how many such algorithms exist. A given
community detection algorithm $f$ needs to map each of all
$\Omega(N)=2^{N\choose 2}$ networks of $N$ nodes to one of
$\Xi(N)=\sum_{B=1}^{N}\genfrac\{\}{0pt}{}{N}{B}B!$ labeled partitions of
its nodes. Therefore, if we restrict ourselves to a single value of $N$,
the total number of input-output tables is $\Xi(N)^{\Omega(N)}$. If we
sample one such table uniformly at random, it will be asymptotically
impossible to compress it using fewer than $\Omega(N)\log_2\Xi(N)$ bits
--- a number that grows super-exponentially with $N$. As an
illustration, a random community detection algorithm that works only
with $N=100$ nodes would already need $10^{1479}$ terabytes of
storage. Therefore, simply considering algorithms that humans can write
and use (together with their expected inputs and outputs) already pulls
us very far away from the general scenario considered by the NFL
theorem. } It is not difficult to accept that these random algorithms
perform equally ``well'' for any particular problem, or even all
problems, but the NFL theorem says that they have equivalent performance
even to algorithms that we may deem more meaningful. How do we formally
distinguish algorithms that are just randomly guessing from those that
are doing something coherent, which depends on
discovering actual network patterns? As it turns out, there is an answer
to this question that does not depend on particular domains of
application: we require the solutions found to be \emph{structured} and
\emph{compressive of the network}.
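The storage estimate given in the footnote above can be verified numerically. The following sketch computes $\Xi(N)$ via the standard recurrence for ordered set partitions, $\Xi(n)=\sum_{k=1}^{n}\binom{n}{k}\Xi(n-k)$ with $\Xi(0)=1$, and then evaluates $\Omega(N)\log_2\Xi(N)$, converted to terabytes, for $N=100$:

```python
import math

def xi(n):
    """Number of labeled partitions, Xi(n) = sum_B S(n,B) B!, computed via
    the recurrence Xi(n) = sum_k C(n,k) Xi(n-k), with Xi(0) = 1."""
    a = [1]
    for m in range(1, n + 1):
        a.append(sum(math.comb(m, k) * a[m - k] for k in range(1, m + 1)))
    return a[n]

N = 100
bits_per_partition = math.log2(xi(N))   # log2 Xi(N): bits to name one output
# Storage for one full input-output table: Omega(N) * log2 Xi(N) bits, with
# Omega(N) = 2^binom(N,2) possible input networks; we report log10 of the
# terabyte count (1 TB = 8e12 bits) to avoid materializing the huge number.
log10_tb = (math.comb(N, 2) * math.log10(2)
            + math.log10(bits_per_partition) - math.log10(8e12))
print(f"storage ~ 10^{log10_tb:.0f} terabytes")
```

in agreement with the order of magnitude quoted in the footnote.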
In order to interpret the statement of the NFL theorem in this vein, it
is useful to rewrite Eq.~\ref{eq:nfl} using an equivalent probabilistic
language,
\begin{equation}\label{eq:nflp}
\sum_{\bm{A}, \bm{b}}P(\bm{A},\bm{b})\epsilon (\bm{b}, \hat{\bm{b}}_f(\bm{A})) = \Lambda'(\epsilon),
\end{equation}
where $\Lambda'(\epsilon)\propto \Lambda(\epsilon)$, and $P(\bm{A},\bm{b})
\propto 1$ is the uniform probability of encountering a problem
instance. When writing the theorem statement in this way, we notice
immediately that instead of being agnostic about problem instances, it
implies a \emph{very specific} network generative model, which assumes a
complete independence between network and partition. Namely, if we
restrict ourselves to networks of $N$ nodes, we have then:\footnote{We
could easily introduce arbitrary constraints such as total number of
edges or degree distribution, which would change the form of
Eqs.~\ref{eq:uniform} and~\ref{eq:uniform_a}, but none of the ensuing
analysis.}
\begin{align}
P(\bm{A},\bm{b})&=P(\bm{A})P(\bm{b}),\label{eq:uniform}\\
P(\bm{A}) &= 2^{-{N\choose 2}},\label{eq:uniform_a}\\
P(\bm{b}) &= \left[\sum_{B=1}^{N}\genfrac\{\}{0pt}{}{N}{B}B!\right]^{-1}.\label{eq:uniform_b}
\end{align}
Therefore, the NFL theorem states simply that if we sample networks and
partitions from a maximally random generative model, then all algorithms
will have the same average accuracy at inferring the partition from the
network. This is hardly a spectacular result --- indeed the
Bayes-optimal algorithm in this case, i.e. the one derived from the
posterior distribution of the true generative model and which guarantees
the best accuracy on average, consists of simply guessing partitions
uniformly at random, ignoring the network structure altogether.
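This can be illustrated with a toy simulation (a sketch; for simplicity, node labels are compared directly as vectors, sidestepping the relabeling symmetry of partitions): under the uniform model, an algorithm that ignores the network entirely and one that guesses labels at random achieve the same vanishing average accuracy.

```python
import random

random.seed(42)
N = 5                      # tiny networks, so exact matches are observable at all
LABELS = range(N)
TRIALS = 200_000

def alg_constant(A):
    # ignores the network entirely, always answers the same label vector
    return tuple(0 for _ in range(N))

def alg_random(A):
    # "guesses the hidden node labels" uniformly at random
    return tuple(random.choice(LABELS) for _ in range(N))

hits = {alg_constant: 0, alg_random: 0}
for _ in range(TRIALS):
    # uniform problem instance: network and labels drawn independently
    A = frozenset((i, j) for j in range(N) for i in range(j)
                  if random.random() < 0.5)
    b = tuple(random.choice(LABELS) for _ in range(N))
    for alg in hits:
        if alg(A) == b:
            hits[alg] += 1

for alg, h in hits.items():
    print(alg.__name__, h / TRIALS)   # both close to 1/5^5 = 0.00032
```

Both accuracies hover around $1/5^5$, regardless of whether the algorithm even looks at the network.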
The probabilistic interpretation reveals that the NFL theorem involves a
very specific assumption about what kind of community detection problem
we are expecting. It is important to remember that it is not possible to
make ``no assumption'' about a problem; we are always forced to make
\emph{some} assumption, which, even if implicit, is not exempt from
justification, and the uniform assumption of Eqs.~\ref{eq:uniform}
to~\ref{eq:uniform_b} is no exception. In Fig.~\ref{fig:nfl}(a) we show
a typical sample from this ensemble of community detection problems. In
a very concrete sense, we can state that such problem instances are
\emph{unstructured} and contain \emph{no learnable community structure},
or in fact no learnable network structure \emph{at all}. We say that a
community structure is (in principle) learnable if the knowledge of the
partition $\bm{b}$ can be used to compress the network $\bm{A}$, i.e. there
exists an encoding $\mathcal{H}$ (i.e. a generative model) such that
\begin{align}
\Sigma(\bm{A}|\bm{b},\mathcal{H}) &< -\log_2P(\bm{A}),\\
&< {N\choose 2},
\end{align}
where $\Sigma(\bm{A}|\bm{b},\mathcal{H}) = -\log_2P(\bm{A}|\bm{b},\mathcal{H})$ is the
description length of $\bm{A}$ according to model $\mathcal{H}$, conditioned
on the partition being known. However, it is a direct consequence of
Shannon's source coding
theorem~\cite{shannon_mathematical_1948,cover_elements_1991} that for
the vast majority of networks sampled from the model of
Eq.~\ref{eq:uniform} the inequality above cannot be fulfilled as
$N\to\infty$, i.e. the networks are incompressible.\footnote{For finite
networks a positive compression might be achievable with small
probability, but due to chance alone, and not in a manner that makes its
structure learnable.} This means that the true partition $\bm{b}$ carries
no information about the network structure, and vice versa, i.e. the
partition is not learnable from the network. In view of this, the common
interpretation of the NFL theorem as ``all algorithms perform equally
well'' is in fact quite misleading, and should be more accurately
phrased as ``all algorithms perform equally \emph{poorly},'' since no
inferential algorithm can uncover the true community structure in most
cases, at least no better than by chance alone. In other words, the
universe of community detection problems considered in the NFL theorem
is composed overwhelmingly of instances for which compression and
explanation are not possible.\footnote{One could argue that such a
uniform model is justified by the principle of maximum entropy, which
states that in the absence of prior knowledge about which problem
instances are more likely, we should assume they are all equally likely
\emph{a priori}. This argument fails precisely because we \emph{do} have
sufficient prior knowledge that empirical networks are not maximally
random --- especially those possessing community structure, according to
any meaningful definition of the term. Furthermore, it is easy to verify
for each particular problem instance that the uniform assumption does
not hold; either by compressing an observed network using any generative
model (which should be asymptotically impossible under the uniform
assumption~\cite{shannon_mathematical_1948}), or by performing a
statistical test designed to be able to reject the uniform null
model. It is exceedingly difficult to find an empirical network for
which the uniform model cannot be rejected with near-absolute
confidence.\label{foot:uniform}} This uniformity between instances also
reveals that there is no meaningful trade-off between algorithms for
most instances, since all algorithms will yield the same negligible
performance, with an accuracy tending asymptotically towards
zero as the number of nodes increases. In this setting, there is not
only no free lunch, but in fact there is no lunch at all (see
Fig.~\ref{fig:trade-off}).
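The contrast between the two panels of Fig.~\ref{fig:nfl} can be reproduced in miniature with a short computation. The sketch below uses a simplified two-part encoding, spending $\log_2(m_{rs}+1) + \log_2\binom{m_{rs}}{e_{rs}}$ bits per block pair, as a stand-in for $\Sigma_{\text{SBM}}$ (not the exact description length used in the figure), and compares a maximally random instance with a planted one:

```python
import math, random

random.seed(1)
N, B = 100, 2

def sigma_sbm(edges, b):
    """Conditional description length Sigma(A|b): for each block pair,
    encode the edge count, then which of the possible placements occur."""
    groups = sorted(set(b))
    n = {r: sum(1 for x in b if x == r) for r in groups}
    e = {}
    for (i, j) in edges:
        r, s = sorted((b[i], b[j]))
        e[(r, s)] = e.get((r, s), 0) + 1
    bits = 0.0
    for a, r in enumerate(groups):
        for s in groups[a:]:
            m = math.comb(n[r], 2) if r == s else n[r] * n[s]
            bits += math.log2(m + 1) + math.log2(math.comb(m, e.get((r, s), 0)))
    return bits

def sample(p_in, p_out):
    b = [i % B for i in range(N)]
    edges = [(i, j) for j in range(N) for i in range(j)
             if random.random() < (p_in if b[i] == b[j] else p_out)]
    return edges, b

full = math.comb(N, 2)        # = -log2 P(A) = 4950 bits, the verbatim cost
e1, b1 = sample(0.5, 0.5)     # maximally random: partition is uninformative
e2, b2 = sample(0.9, 0.1)     # strong planted community structure
# only the planted instance falls below the 4950-bit verbatim cost
print(sigma_sbm(e1, b1), sigma_sbm(e2, b2), full)
```

The planted instance is compressible, i.e. learnable in principle, whereas the partition of the maximally random instance cannot push the description below the $\binom{N}{2}$ bits needed to record the network verbatim.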
\begin{figure}
\begin{tabular}{cc}
\includegraphicsl{(a)}{height=.5\textwidth}{figs/random.pdf}&
\includegraphicsl{(b)}{height=.5\textwidth}{figs/sbm.pdf}\\
$\Sigma_{\text{min}}(\bm{A}|\bm{b}) = 4950$ bits & \\
$\Sigma_{\text{SBM}}(\bm{A}|\bm{b}) = 6612$ bits & $\Sigma_{\text{SBM}}(\bm{A}|\bm{b}) = 2280$ bits
\end{tabular}
\caption{The NFL theorem involves predominantly instances of the
community detection problem that are strictly incompressible,
i.e. the true partitions cannot be used to explain the network. In
(a) we show a typical sample of the uniform problem space given by
Eq.~\ref{eq:uniform}, for $N=100$ nodes, which yields a dense
maximally random network, randomly divided into $B=72$ groups. It is
asymptotically impossible to use this partition to compress this
network into fewer than $\Sigma_{\text{min}}(\bm{A}|\bm{b}) = {N \choose 2}
= 4950$ bits, and therefore the partition is not learnable from the
network alone with any inferential algorithm. We show also the
description length of the SBM conditioned on the true partition,
$\Sigma_{\text{SBM}}(\bm{A}|\bm{b})$, as a reference. In (b) we show an
example of a community detection problem that is solvable, at least
in principle, since $\Sigma_{\text{SBM}}(\bm{A}|\bm{b}) <
\Sigma_{\text{min}}(\bm{A}|\bm{b})$. In this case, the partition can be
used to inform the network structure, and potentially
vice-versa. This class of problem instance has a negligible
contribution to the sum in the NFL theorem in Eq.~\ref{eq:nfl},
since it occurs only with an extremely small probability when
sampled from the uniform model of Eq.~\ref{eq:uniform}. It is
therefore more reasonable to state that the network in example (b)
has an \emph{actual} community structure, while the one in
(a) does not.
\label{fig:nfl}}
\end{figure}
\begin{figure}
\begin{tabular}{cc}
(a) ``Trade-off'' picture & (b) Actual NFL setting\\
\begin{tikzpicture}
\begin{axis}[
axis lines=left,
ylabel near ticks,
xlabel near ticks,
xlabel={Problem space},
ylabel={Accuracy},
legend entries={Algorithm 1, Algorithm 2, Algorithm 3},
ymin=-0.03, ymax=.65,
ytick={0},
xmajorticks=false
]
\addplot [blue,domain=-3:7,samples=401]
{exp(-x^2 / 2) / sqrt(2*pi)};
\addplot [red,domain=-3:7,samples=401]
{exp(-(x-2)^2 / 2) / sqrt(2*pi)};
\addplot [green,domain=-3:7,samples=401]
{exp(-(x-4)^2 / 2) / sqrt(2*pi)};
\addplot [gray, dashed, domain=-3:7,samples=11]
{.2};
\addplot [black, no marks] coordinates{(6,.2)} node[above] {\smaller Average};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[
axis lines=left,
ylabel near ticks,
xlabel near ticks,
xlabel={Problem space},
ylabel={Accuracy},
legend entries={Algorithm 1, Algorithm 2, Algorithm 3},
ymin=-0.03, ymax=.65,
ytick={0},
xmajorticks=false
]
\addplot [blue,domain=-3:7,samples=10]
{0};
\addplot [red,domain=-3:7,samples=10]
{0};
\addplot [green,domain=-3:7,samples=10]
{0};
\addplot [black, no marks] coordinates{(6,.0)} node[above] {\smaller Average};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{A common interpretation of the NFL theorem for community
detection is that it reveals a necessary trade-off between
algorithms: since they all have the same average performance, if one
algorithm does better than another in one set of instances, it must
do worse on an equal number of different instances, as depicted in
panel (a). However, in the actual setting considered by the NFL
theorem there is no meaningful trade-off: asymptotically, all
algorithms perform maximally poorly for the vast majority of
instances, as depicted in panel (b), since in these cases the
network structure is uninformative of the partition. If we constrain
ourselves to informative problem instances (which compose only an
infinitesimal fraction of all instances), the NFL theorem is no
longer applicable.
\label{fig:trade-off}}
\end{figure}
If we were to restrict the space of possible community detection
algorithms to those that provide actual explanations, then by definition
this would imply a positive correlation between network and
partition,\footnote{Note that Eq.~\ref{eq:solvable} is a necessary but
not sufficient condition for the community detection problem to be
solvable. An example of this are networks generated by the SBM, which
are solvable only if the strength of the community structure exceeds a
detectability threshold~\cite{decelle_asymptotic_2011}, even if
Eq.~\ref{eq:solvable} is fulfilled.}
i.e.
\begin{align}
P(\bm{A},\bm{b}) &= P(\bm{A}|\bm{b})P(\bm{b})\\
&\neq P(\bm{A})P(\bm{b}).\label{eq:solvable}
\end{align}
Not only does this imply a specific generative model but, as a consequence,
also an \emph{optimal} community detection algorithm, which operates
based on the posterior distribution
\begin{equation}
P(\bm{b}|\bm{A}) = \frac{P(\bm{A}|\bm{b})P(\bm{b})}{P(\bm{A})}.
\end{equation}
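For very small networks this optimal algorithm can be carried out exactly by brute-force enumeration of the posterior. A minimal sketch (with arbitrary, hypothetical parameter values, and comparing raw label vectors for simplicity):

```python
import itertools, math, random

random.seed(7)
N = 6
planted = (0, 0, 0, 1, 1, 1)
p_in, p_out = 0.95, 0.05

# sample a network A from a simple planted-partition model
A = {(i, j): int(random.random() < (p_in if planted[i] == planted[j] else p_out))
     for j in range(N) for i in range(j)}

def log_likelihood(b):
    ll = 0.0
    for (i, j), a in A.items():
        p = p_in if b[i] == b[j] else p_out
        ll += math.log(p if a else 1 - p)
    return ll

# brute-force posterior over all 2^N label vectors (uniform prior over b,
# so the posterior is proportional to the likelihood)
posterior = {b: log_likelihood(b) for b in itertools.product((0, 1), repeat=N)}
best = max(posterior, key=posterior.get)
print(best)   # recovers the planted labels, up to a global label swap
```

Because $P(\bm{A},\bm{b})\neq P(\bm{A})P(\bm{b})$ here, the network is informative of the partition, and the posterior concentrates on the planted labels, unlike in the uniform setting considered by the NFL theorem.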
Therefore, \emph{learnable} community detection problems are invariably
tied to an \emph{optimal} class of algorithms, undermining to a
substantial degree the relevance of the NFL theorem in practice. In
other words, whenever there is an actual community structure in the
network being considered --- i.e. due to a systematic correlation
between $\bm{A}$ and $\bm{b}$, such that $P(\bm{A},\bm{b})\ne P(\bm{A})P(\bm{b})$ --- there
will be algorithms that can exploit this correlation better than others
(see Fig.~\ref{fig:nfl}(b) for an example of a learnable community
detection problem). Importantly, the set of learnable problems forms only
an infinitesimal fraction of all problem instances, with a measure that
tends to zero as the number of nodes increases, and hence remains firmly
outside the scope of the NFL theorem. This observation has been made before,
and is equally valid, in the wider context of NFL theorems beyond
community detection~\cite{streeter_two_2003,mcgregor_no_2006,everitt_universal_2013,lattimore_no_2013,schurz_humes_2019}.
Note that since there are many ways to choose a nonuniform model
according to Eq.~\ref{eq:solvable}, the optimal algorithms will still
depend on the particular assumptions made via the choice of $P(\bm{A},\bm{b})$
and how it relates to the true distribution. However, this does not
imply that all algorithms have equal performance on compressible problem
instances. If we sample a problem from the universe $\mathcal{H}_1$,
with $P(\bm{A},\bm{b}|\mathcal{H}_1)$, but use instead two algorithms optimal
in $\mathcal{H}_2$ and $\mathcal{H}_3$, respectively, their relative
performances will depend on how close each of these universes is to
$\mathcal{H}_1$, and hence will not be in general the same. In fact, if
our space of universes is finite, we can compose them into a single
unified universe~\cite{jaynes_probability_2003} according to
\begin{equation}
P(\bm{A},\bm{b}) = \sum_{i=1}^{M}P(\bm{A},\bm{b}|\mathcal{H}_i)P(\mathcal{H}_i),
\end{equation}
which will incur a compression penalty of at most $\log_2M$ bits added
to the description length of the optimal algorithm. This gives us a
path, based on hierarchical Bayesian models and minimum description
length, to achieve optimal or near-optimal performance on instances of
the community detection problem that are actually solvable, simply by
progressively expanding our set of hypotheses.
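The $\log_2M$ penalty follows directly from the fact that the mixture likelihood is at least $1/M$ times the largest individual likelihood. A quick numerical check (a sketch with arbitrary, made-up likelihood values):

```python
import math, random

random.seed(0)
M = 16
# hypothetical likelihoods P(A,b | H_i) for one fixed problem instance
likelihoods = [random.random() * 1e-30 for _ in range(M)]

description_lengths = [-math.log2(p) for p in likelihoods]  # per-universe DL
mixture_dl = -math.log2(sum(p / M for p in likelihoods))    # unified universe

# the mixture is never worse than the best single hypothesis plus log2(M) bits
assert mixture_dl <= min(description_lengths) + math.log2(M)
print(mixture_dl - min(description_lengths))  # penalty between 0 and log2(M)=4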
The idea that we can use compression as an inference criterion has been
formalized by Solomonoff's theory of inductive
inference~\cite{solomonoff_formal_1964}, which forms a rigorous
induction framework based on the principle of Occam's
razor. Importantly, the expected errors of predictions achieved under
this framework are provably upper-bounded by the Kolmogorov complexity
of the data generating process~\cite{hutter_universal_2007}, making the
induction framework consistent. As we mentioned already in
Sec.~\ref{sec:inference}, the Kolmogorov complexity is a generalization
of the description length we have been using, and it is defined by the
length of the shortest binary program that generates the data. The only
major limitation of Solomonoff's framework is its uncomputability,
i.e. the impossibility of determining the Kolmogorov complexity with any
algorithm~\cite{li_introduction_2008}. However, this impossibility does
not invalidate the framework; it only means that induction cannot be
fully automated: we have a consistent criterion to compare hypotheses,
but no deterministic mechanism to produce directly the best
hypothesis. There are open philosophical questions regarding the
universality of this inductive
framework~\cite{hutter_open_2009,montanez_why_2017}, but whatever
fundamental limitations it may have do not follow directly from NFL
theorems such as the one from Ref.~\cite{peel_ground_2017}. In fact, as
mentioned in footnote~\ref{foot:uniform}, it is a rather simple task to
use compression to reject the uniform hypothesis forming the basis of
the NFL theorem for almost any network data.
Since compressive community detection problems are out of the scope of
the NFL theorem, it is not meaningful to use it to justify avoiding
comparisons between algorithms, on the grounds that all choices must be
equally ``good'' in a fundamental sense. In fact, we do not need much
sophistication to reject this line of argument, since the NFL theorem
applies also when we are considering trivially inane algorithms,
e.g. one that always returns the same partition for every network. The
only domain where such an algorithm is as good as any other is when we
have no community \emph{structure} to begin with, which is precisely
what the NFL theorem relies on.
Nevertheless, there are some lessons we can draw from the NFL
theorem. It makes it clear that the performances of algorithms are tied
directly to the inductive bias adopted, which should always be made
explicit. The superficial interpretation of the NFL theorem as an
inherent equity between all algorithms stems from the assumption that
considering all problem instances uniformly is equivalent to being free
of an inductive bias, but that is not possible. The uniform assumption
is itself an inductive bias, and one that is hard to justify in
virtually any context, since it involves almost exclusively unsolvable
problems (from the point of view of compressibility). In contrast,
considering only \emph{compressible} problem instances is also an
inductive bias, but one that relies only on Occam's razor as a guiding
principle. The advantage of the latter is that it is independent of
domain of application, i.e. we are requiring only that an inferred
partition can help explain the network in some manner, without having
to specify exactly how \emph{a priori}.
In view of the above observations, it becomes easier to understand
results such as those of Ghasemian \emph{et
al.}~\cite{ghasemian_evaluating_2019}, who found that compressive
inferential community detection methods tend to systematically
outperform descriptive methods in empirical settings, when these are
employed for the task of edge prediction. Even though edge prediction
and community detection are not the same task, and using the former to
evaluate the latter can lead in some cases to
overfitting~\cite{valles-catala_consistencies_2018}, typically the most
compressive models will also lead to the best generalization. Therefore,
the superior performance of the inferential methods is understandable,
even though Ghasemian \emph{et al.} also found a minority of instances
where some descriptive methods can outperform inferential ones. To the
extent that these minority results cannot be attributed to overfitting,
or technical issues such as insufficient MCMC equilibration, it could
simply mean that the structure of these networks falls sufficiently
outside of what is assumed by the inferential methods, but without it
being a necessary trade-off that comes as a consequence of the NFL
theorem --- after all, under the uniform assumption, edge prediction is
also strictly impossible, just like community detection. In other
words, these results do not rule out the existence of an algorithm that
works better in all cases considered, at least if their number is not
too large.\footnote{It is important to distinguish the actual statement
of the NFL theorem
--- ``all algorithms perform equally well when averaged over all problem
instances'' --- from the alternative statement: ``No single algorithm
exhibits strictly better performance than all others over all
instances.'' Although the latter is a corollary of the former, it can
also be true when the former is false. In other words, a particular
algorithm can be better on average over relevant problem instances, but
still underperform for some of them. In fact, it would only be possible
for an algorithm to strictly dominate all others if it could always
achieve perfect accuracy for every instance. Otherwise, there will be at
least one algorithm (e.g. one that always returns the same partition)
that can achieve perfect accuracy for a single network where the optimal
algorithm does not (``even a broken clock is right twice a
day''). Therefore, sub-optimal algorithms can eventually outperform
optimal ones by chance when a sufficiently large number of instances is
encountered, even when the NFL theorem is not applicable (and therefore
this fact is not necessarily a direct consequence of it).}
In fact, this is precisely what is
achieved in Ref.~\cite{ghasemian_stacking_2020} via model stacking,
i.e. a combination of several predictors into a meta-predictor that
achieves systematically superior performance. This points indeed to the
possibility of using universal methods to discover the latent
\emph{compressive} modular structure of networks, without any tension
with the NFL theorem.
\subsection{``Statistical inference requires us to believe the generative model being used.''}\label{sec:believe}
One possible objection to the use of statistical inference is when the
generative models on which they are based are considered unrealistic for
a particular kind of network. Although this type of consideration is
ultimately important, it is not necessarily an obstacle. An inferential
approach can be used to target a particular kind of structure, and the
corresponding model is formulated with this in mind, but without the
need to describe other properties of the data. The SBM is a good example
of this, since it is often used with the objective of finding
communities, rather than any other kind of network structure. A model like the
SBM is a good way to separate the regularities that relate to the
community structure from the irregularities present in real networks,
without requiring us to believe that it in fact generated the network.
Furthermore, certain kinds of models are flexible enough so that they
can approximate other models. For example, a good analogy with fitting
the SBM to network data is to fit a histogram to numerical data, with
the node partitioning being analogous to the data binning. Although a
piecewise constant model is almost never the true underlying
distribution, it provides a reasonable approximation in a tractable,
nonparametric manner. Because of its capacity to approximate a wide
class of distributions, we certainly do not need to believe in a
histogram to extract meaningful inferences from it. In fact, the same
can be said of the SBM in its capacity to approximate a wide class of
network models~\cite{olhede_network_2014,young_universality_2018}.
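The analogy can be made concrete with a short numerical sketch (hypothetical data, pure Python): a piecewise-constant histogram fit to samples from a Gaussian is certainly misspecified, yet it approximates the underlying density well.

```python
import math, random

random.seed(3)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# fit a "wrong" piecewise-constant model: equal-width bins on [-4, 4]
lo, hi, nbins = -4.0, 4.0, 40
width = (hi - lo) / nbins
counts = [0] * nbins
for x in samples:
    k = math.floor((x - lo) / width)
    if 0 <= k < nbins:
        counts[k] += 1

def hist_density(x):
    k = math.floor((x - lo) / width)
    return counts[k] / (len(samples) * width) if 0 <= k < nbins else 0.0

def true_density(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# despite being misspecified, the histogram tracks the true density closely,
# up to bin-discretization and sampling error
for x in (-1.0, 0.0, 0.5, 2.0):
    print(x, round(hist_density(x), 3), round(true_density(x), 3))
```

Just as the binning discretizes the real line, the node partition discretizes the space of nodes, yielding a tractable approximation without being the true model.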
The above means that we can extract useful, statistically meaningful
information from data even if the models we use are misspecified. For
example, if a network is generated by a latent space
model~\cite{hoff_latent_2002}, and we fit an SBM to it, the communities
that are obtained in this manner are not quite meaningless: they will
correspond to discrete spatial regions. Hence, the inference would yield
a caricature of the underlying latent space, amounting to a
discretization of the true model --- indeed, much like a histogram. This
is very different from, say, finding communities in an Erd\H{o}s-Rényi
graph, which bear no relation to the true underlying model, and would be
just overfitting the data. In contrast, the SBM fit to a spatial network
would be approximately capturing the true model structure, in a manner
that could be used to compress the data and make predictions (although
not optimally).
Furthermore, the associated description length of a network model is a
good criterion to tell whether the patterns we have found are actually
simplifying our network description, without requiring the underlying
model to be perfect. This happens in the same way as using a software
like \texttt{gzip} makes our files smaller, without requiring us to
believe that they are in fact generated by the Markov chain
underlying the Lempel-Ziv algorithm~\cite{ziv_universal_1977}.
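This point is easy to demonstrate directly (a sketch using Python's \texttt{zlib}, whose DEFLATE scheme contains Lempel-Ziv as an ingredient): structured data compresses substantially even though it was certainly not generated by the model implicit in the compressor, while maximally random data does not compress at all.

```python
import os, zlib

structured = b"the quick brown fox jumps over the lazy dog. " * 200
random_bytes = os.urandom(len(structured))  # incompressible with high probability

for name, data in [("structured", structured), ("random", random_bytes)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    # the structured input shrinks to a few percent; the random one does not
    print(f"{name}: compressed to {ratio:.0%} of original size")
```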
Of course, realism becomes important as soon as we demand more from the
point of view of interpretation and prediction. Are the observed
community structures due to homophily or triadic
closure~\cite{peixoto_disentangling_2022}? Or are they due to spatial
embedding~\cite{hoff_latent_2002}? What models are capable of
reproducing other network descriptors, together with the community
structure? Which models can better reconstruct incomplete
networks~\cite{guimera_missing_2009,peixoto_reconstructing_2018}? When
answering these questions, we are forced to consider more detailed
generative processes, and compare them. However, we are never required
to \emph{believe} them --- models are always tentative, and should
always be replaced by superior alternatives when these are
found. Indeed, criteria such as MDL serve precisely to implement such a
comparison between models, following the principle of Occam's
razor. Therefore, the lack of realism of any particular model cannot be
used to dismiss statistical inference as an underlying methodology. On
the contrary, the Bayesian workflow~\cite{gelman_bayesian_2020} enables
a continuous improvement of our modelling apparatus, via iterative model
building, model checking, and validation, all within a principled and
consistent framework.
It should be emphasized that, fundamentally, there is no
alternative. Rejecting an inferential approach based on the SBM on the
grounds that it is an unrealistic model (e.g. because of the conditional
independence of the edges being placed, or some other unpalatable
assumption), but instead preferring some other non-inferential community
detection method is incoherent: As we discussed in
Sec.~\ref{sec:implicit}, every descriptive method can be mapped to an
inferential analogue, with implicit assumptions that are hidden from
view. Unless one can establish that the implicit assumptions are in fact
more realistic, the comparison cannot be justified. Unrealistic
assumptions should be replaced by more realistic ones, not by burying
one's head in the sand.
\subsection{``Inferential approaches are prohibitively expensive.''}\label{sec:performance}
One of the reasons why descriptive methods such as modularity
maximization are widely used is because of very efficient heuristics
that enable their application to very large networks, the most famous
of which is the Louvain algorithm~\cite{blondel_fast_2008}, touted for
its speed and good ability to find high-scoring partitions. A more
recent variation of this method is the Leiden
algorithm~\cite{traag_louvain_2019}, which is a refinement of the
Louvain approach, designed to find even higher-scoring partitions,
without sacrificing speed. None of these methods were developed with the
purpose of assessing the statistical evidence of the partitions found,
and since they are most often employed as modularity maximization
techniques, they suffer from all the shortcomings that come with it.
It is often perceived that principled inferential approaches based on
the SBM, designed to overcome all of the shortcomings of descriptive
methods including modularity maximization, are comparatively much
slower, often prohibitively so. However, we show here that this
perception is quite inaccurate, since modern inferential approaches can
be quite competitive. From the point of view of algorithmic complexity,
agglomerative~\cite{peixoto_efficient_2014} or merge-split
MCMC~\cite{peixoto_merge-split_2020} have at most a log-linear
complexity $O(E\log^2 N)$, where $N$ and $E$ are the number of nodes and
edges, respectively, when employed to find the most likely
partition. This means they belong to the same complexity class as the
Louvain and Leiden algorithms, despite the fact that the SBM-based
algorithms are more general, and do not attempt to find strictly
assortative structures --- and hence cannot make any optimizations that
are only applicable in this case, as done by Louvain and Leiden. In
practice, all these algorithms return results in comparable times.
\FloatBarrier
\begin{figure}
\begin{tabular}{cc}
\includegraphicsl{(a)}{width=.5\textwidth}{figs/inference_performance.pdf}&
\includegraphicsl{(b)}{width=.5\textwidth}{figs/inference_performance_relative.pdf}
\end{tabular}
\caption{Inferential algorithms show competitive performance with
descriptive ones. In panel (a) is shown the run-time of the Leiden
algorithm~\cite{traag_louvain_2019} and the agglomerative
MCMC~\cite{peixoto_efficient_2014} for modularity, and three SBM
parametrizations: planted partition (PP), degree-corrected SBM, and
nested degree-corrected SBM, for 38 empirical
networks~\cite{peixoto_netzschleuder_2020}. All experiments were done
on a laptop with an i9-9980HK Intel CPU, and averaged over at least 10
realizations. The dashed line shows an $O(E\log^2 E)$ scaling. In (b)
are shown the same run times, but relative to the Leiden
algorithm. The horizontal dashed lines show the median
values. \label{fig:performance}}
\end{figure}
In Fig.~\ref{fig:performance} we show a performance comparison between
various algorithms on 38 empirical networks of various domains and
number of edges spanning six orders of magnitude, obtained from the
Netzschleuder repository~\cite{peixoto_netzschleuder_2020}. We used the
Leiden implementation provided by its authors,\footnote{Retrieved from
\url{https://github.com/vtraag/leidenalg}.} and compared with various
SBM parametrizations implemented in the \texttt{graph-tool}
library~\cite{peixoto_graph-tool_2014}. In particular we consider the
agglomerative MCMC of Ref.~\cite{peixoto_efficient_2014} employed for
modularity maximization, the Bayesian planted partition (PP)
model~\cite{zhang_statistical_2020}, the degree-corrected SBM with
uniform priors~\cite{peixoto_nonparametric_2017} and the nested
SBM~\cite{peixoto_hierarchical_2014,peixoto_nonparametric_2017}. As seen
in Fig.~\ref{fig:performance}(a), all algorithms display the same
scaling with the number of edges, and differ only by an approximately
constant factor. This difference in speed is due to the more complex
likelihoods used by the SBM and the additional data structures
needed for their computation. When the agglomerative
MCMC~\cite{peixoto_efficient_2014} is used with the simpler modularity
function, it comes very close to the Leiden algorithm, despite not
taking advantage of any custom optimization for that particular quality
function. When used with the strictly assortative PP model, the
algorithm slows down by a larger factor when compared to Leiden --- most
of which can be attributed to the increased complexity of the quality
function. For the general SBM and nested SBM the algorithm slows down
further, since now it is searching for arbitrary mixing patterns (not
only assortative ones) and entire modular hierarchies. Indeed the
performance difference between the most complex SBM and Leiden can be
substantial, but at this point it also becomes an apples-and-oranges
comparison, since the inferential method not only is not restricted to
assortative communities, but it also uncovers an entire hierarchy of
partitions in a nonparametric manner, while being unhindered by the
resolution limit and with protection against overfitting. Overall, if a
practitioner is considering modularity maximization, they should prefer
instead at least the Bayesian PP model, which solves the same kind of
problem but is not marred by all the shortcomings of modularity,
including the resolution limit and systematic overfitting, while still
being comparatively fast. The more advanced SBM formulations allow the
researcher to probe a wider array of mixing patterns, without sacrificing
statistical robustness, at the expense of increased computation
time. As this analysis shows, all algorithms are accessible for fairly
large networks of up to $10^7$ edges on a laptop, but in fact can scale
to $10^9$ or more on HPC systems.
Based on the above, it becomes difficult to justify the use of modularity
maximization based solely on performance concerns, even on very large
networks, since there are superior inferential approaches available with
comparable speed, and which achieve more meaningful results in
general.\footnote{In this comparison we consider only the task of
finding point estimates, i.e. best scoring partitions. This is done to
maintain an apples-to-apples comparison, since this is all that can be
obtained with the Leiden and other modularity maximization
algorithms. To take full advantage of the Bayesian framework we would
need to characterize the full posterior distribution instead, and sample
partitions from it, instead of maximizing it, which incurs a larger
computational cost and requires a more detailed
analysis~\cite{peixoto_revealing_2021}. We emphasize, however, that the
point estimates obtained with the SBM posterior already contain a
substantial amount of regularization, and will not overfit the number of
communities, for example.}
\subsection{``Belief propagation outperforms MCMC.''}
The method of belief propagation (BP)~\cite{decelle_asymptotic_2011} is
an alternative algorithm to MCMC for inferring the partitions from the
posterior distribution of the SBM in the semi-parametric case where the
model parameters controlling the probability of connections between
groups and the expected sizes of the groups are known \emph{a
priori}. It relies on the assumption that the network analyzed was truly
sampled from the SBM, that the number of groups is much smaller than the
number of nodes, $B\ll N$, and the network is sufficiently large, $N\gg
1$. Even though none of these assumptions are likely to hold in
practice, BP is an extremely useful and powerful algorithm since it
returns an estimate of the marginal posterior probability that is not
stochastic, unlike MCMC. Furthermore, it is amenable to analytical
investigations, which were used to uncover the detectability threshold of
the SBM~\cite{decelle_inference_2011,decelle_asymptotic_2011}, and
important connections with spectral
clustering~\cite{krzakala_spectral_2013}. It is often claimed, however,
that it is also faster than MCMC when employed for the same task. This
is not quite true in general, as we now discuss. The
complexity of BP is $O(\tau NB^2)$, where $\tau$ is the convergence
time, which is typically small compared to the other quantities [for the
DC-SBM the complexity becomes $O(\tau \ell NB^2)$, where $\ell$ is the
number of distinct degrees in the network~\cite{yan_model_2014}]. A MCMC
sweep of the SBM, i.e. the number of operations required to give each
node a chance to be moved once from its current membership,
can be implemented in time $O(N)$, independent of the number of groups
$B$~\cite{peixoto_efficient_2014,peixoto_merge-split_2020}, when using
the parametrization of
Refs.~\cite{peixoto_parsimonious_2013,peixoto_nonparametric_2017}. This
means that the performance difference between both approaches can be
substantial when the number of groups is large. In fact, if
$B=O(\sqrt{N})$, which is a reasonable reference for empirical networks, BP
becomes $O(N^2)$ while MCMC remains $O(N)$. Agglomerative MCMC
initialization schemes, which can significantly improve the mixing time,
have themselves a complexity $O(N\log^2
N)$~\cite{peixoto_efficient_2014}, still significantly faster than BP
for large $B$.
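As a schematic illustration of the scaling argument above, the following sketch contrasts the two cost models; the constants (including the convergence time $\tau=10$) are arbitrary placeholders of ours, not measured run times.

```python
# Schematic cost models for the scaling argument above; the constants
# (including tau = 10) are arbitrary and purely illustrative.
def bp_cost(N, B, tau=10):
    # belief propagation: O(tau * N * B^2)
    return tau * N * B * B

def mcmc_sweep_cost(N):
    # one MCMC sweep: O(N), independent of the number of groups B
    return N

# With B = O(sqrt(N)), BP scales like O(N^2) while MCMC stays O(N):
N = 10_000
B = int(N ** 0.5)                            # B = 100
ratio = bp_cost(N, B) / mcmc_sweep_cost(N)   # grows like tau * B^2
```

The widening ratio makes explicit why the gap between the two approaches grows with the number of groups.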
\begin{figure}
\includegraphics[width=.6\textwidth]{figs/BP_performance_B_openflights.pdf} \caption{Comparison
of run times between MCMC and BP on a laptop with an i9-9980HK Intel
CPU, for a network of flights between airports, with $N=3188$ nodes
and $E=18833$ edges. We used the agglomerative algorithm of
Ref.~\cite{peixoto_efficient_2014}, and initialized BP with the model
parameters found with MCMC. The dashed line shows a $B^2$
slope.\label{fig:BP}}
\end{figure}
In Fig.~\ref{fig:BP} we show a run-time comparison between BP and MCMC
for an empirical network of flights between airports.\footnote{Obtained
from \url{https://openflights.org/data.html}.} As the number of groups
increases, the run-time of BP grows quadratically, as expected, while
for MCMC it remains constant. There are several caveats in this
comparison, which is somewhat apples-to-oranges: BP outputs a full
marginal distribution for every node, including even very low
probabilities, while with MCMC we can obtain anything from a point
estimate to full marginal or joint probabilities, at the expense of
longer running times; this cost is not revealed by the comparison in
Fig.~\ref{fig:BP}, which corresponds only to a point estimate. On the other hand, BP
requires a value of the model parameters besides the partition itself,
which can in principle be obtained together with the marginals via
expectation-maximization (EM)~\cite{decelle_asymptotic_2011}, although a
meaningful convergence for complex problems cannot be guaranteed with
this algorithm~\cite{kawamoto_algorithmic_2018}. Overall, we can state
that some answers can be achieved in log-linear time with MCMC
independently of the number of groups (and requiring no particular
assumptions on the data), while with BP we can never escape the
quadratic dependence on $B$.
We emphasize that BP is only applicable in the semiparametric case,
where the number of groups and model parameters are known. The
nonparametric case considered in Sec.~\ref{sec:inference}, which is
arguably more relevant in practice, cannot be tackled using BP, leaving
MCMC as the only game in town, at least with the current
state-of-the-art.
\subsection{``Spectral clustering outperforms likelihood-based methods.''}
Spectral clustering methods divide a network into groups based on the
leading eigenvectors of a linear operator associated with the network
structure~\cite{spielman_spectral_2007,von_luxburg_tutorial_2007}. There
are important connections between spectral methods and statistical
inference, in particular there are certain linear operators that can be
shown to provide a consistent estimation of the
SBM~\cite{rohe_spectral_2011,krzakala_spectral_2013}. However, when
compared to likelihood-based methods, spectral methods are only
approximations, as they amount to a simplification of the
problem. Nevertheless, one of the touted advantages of this class of
methods is that they tend to be significantly faster than likelihood
based methods using MCMC. But like in the case of BP considered in the
previous section, the run-time of spectral methods is intimately related
to the number of groups one wishes to infer, unlike MCMC. Independently
of the operator being used, the clustering into $B$ groups requires the
computation of the first $B$ leading eigenvectors. The most efficient
algorithms for this purpose are based on the implicitly restarted
Arnoldi method~\cite{lehoucq_deflation_1996}, which has a worst-case
time complexity $O(NB^2)$ for sparse matrices. Therefore, for a
sufficiently large number of groups they can cease to be faster than
MCMC, which has a run-time complexity independent of the number of
groups~\cite{peixoto_efficient_2014,peixoto_merge-split_2020}.
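As a minimal illustration of the spectral idea, the pure-Python sketch below performs two-way spectral bisection by power iteration on a shifted Laplacian, standing in for the general pipeline of computing $B$ leading eigenvectors and clustering them; a real implementation would use an Arnoldi/Lanczos solver as discussed above, and the tiny path graph is our own toy example.

```python
def fiedler_split(adj, iters=200):
    """Two-way spectral bisection: power iteration on M = cI - L to find
    the Fiedler vector of the graph Laplacian L, then split by sign."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    c = 2 * max(deg)   # shift so small Laplacian eigenvalues dominate M
    v = [(-1) ** i for i in range(n)]   # arbitrary starting vector
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]       # deflate the constant eigenvector
        # multiply by M = cI - D + A
        w = [c * v[i] - deg[i] * v[i]
             + sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    return [0 if x >= 0 else 1 for x in v]

# Path graph 0-1-2-3: the sign pattern of the Fiedler vector splits it
# into the two halves {0,1} and {2,3}.
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
labels = fiedler_split(adj)
```

Each power-iteration step costs $O(E)$, and extending this to $B$ eigenvectors requires deflating against all previously found ones, which is the origin of the $O(NB^2)$ scaling mentioned in the text.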
\FloatBarrier
\begin{figure}
\includegraphics[width=.6\textwidth]{figs/spectral_performance_B_anybeat.pdf}
\caption{Comparison
of run times between MCMC and spectral clustering using the Laplacian
matrix, on a laptop with an i9-9980HK Intel CPU, for the Anybeat
social network~\cite{fire_link_2013}, with $N=12645$ vertices and
$E=49132$ edges. We used the agglomerative algorithm of
Ref.~\cite{peixoto_efficient_2014} and the ARPACK eigenvector
solver~\cite{lehoucq_ARPACK_1998}.\label{fig:spectral}}
\end{figure}
In Fig.~\ref{fig:spectral} we show a comparison of spectral clustering
and MCMC inference for the Anybeat social
network~\cite{fire_link_2013}. Indeed, for a small number of groups
spectral clustering can be significantly faster, but it eventually becomes
slower as the number of groups increases. The complexity of the spectral
algorithm does not scale exactly like the worst case $O(NB^2)$ in
practice, and the actual times will depend on the details of the
particular operator. The MCMC algorithm becomes slightly faster, on the
other hand, since the agglomerative initialization heuristic used
terminates sooner when more groups are
imposed~\cite{peixoto_efficient_2014}. As usual, there are caveats with
this comparison. First, the eigenvectors by themselves do not provide a
clustering of the network. Usually, these are given as input to a
general-purpose clustering algorithm, typically $k$-means, which itself
also has a complexity $O(NB^2)$, not included in the comparison of
Fig.~\ref{fig:spectral}. Furthermore, spectral clustering usually
requires the number of groups itself to be known in advance ---
heuristics to estimate it exist for spectral algorithms, but they
usually require a significant part of the entire spectrum to be
determined~\cite{krzakala_spectral_2013}. Likelihood-based methods, if
implemented as a nonparametric Bayesian posterior as done in
Sec.~\ref{sec:inference}, do not require this prior information. On the
other hand, spectral methods can be parallelized rather easily, unlike
MCMC, and hence can take advantage of multicore processors.
\subsection{``Bayesian posterior, MDL, BIC and AIC are different but equally valid model selection criteria.''}
One outstanding problem with using inferential community detection is
that the likelihood of a model like the SBM does not, by itself, offer a
principled way to determine the appropriate number of groups. This is
because if we maximize the likelihood directly, it will favor a number
of groups that is equal to the number of nodes, i.e. an extreme
overfitting. This is similar to what happens when we fit a polynomial to
a set of one-dimensional data points by varying its degree: for a degree
equal to the number of points we can fit any set of points perfectly,
but we are guaranteed to be overfitting the data. In other words, if we
do not account for model complexity explicitly, we cannot separate
randomness from structure.
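The polynomial analogy can be made concrete with a short sketch: interpolating $n$ points with a degree-$(n-1)$ polynomial fits even pure noise exactly, which is precisely the overfitting described above (the data values here are our own random placeholders).

```python
import random

def lagrange_eval(xs, ys, x):
    """Evaluate the unique degree-(n-1) polynomial through n points."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

random.seed(0)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [random.gauss(0, 1) for _ in xs]   # pure noise: no structure at all

# With degree = number of points - 1, the "fit" is exact at every data
# point, no matter how random the data: we have modeled the noise itself.
exact = all(abs(lagrange_eval(xs, ys, x) - y) < 1e-9
            for x, y in zip(xs, ys))
```

A perfect fit to structureless data carries no predictive value, which is why model complexity must be accounted for explicitly.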
In the literature we often see mentions of Bayesian posterior inference,
minimum description length
(MDL)~\cite{rissanen_modeling_1978,grunwald_minimum_2007}, as well as
likelihood penalty schemes such as Bayesian Information Criterion
(BIC)~\cite{schwarz_estimating_1978} and Akaike's Information Criterion
(AIC)~\cite{akaike_new_1974}, as being equally valid alternatives that
can be used to solve this problem. It is sometimes said that the choice
between them is philosophical and often simply reflects the culture that
a researcher stems from. As we show here, this is demonstrably
incorrect: Bayes and MDL are in fact identical criteria, and BIC is
simply an (arguably crude) approximation of both. AIC is indeed a
different criterion, but,
like BIC, it involves approximations that are known to be invalid for
community detection.
The exact equivalence between MDL and Bayesian inference is easy to
demonstrate~\cite{peixoto_hierarchical_2014,peixoto_nonparametric_2017},
as we have already done in Sec.~\ref{sec:inference}. Namely, the
posterior distribution of the community detection problem is given by
\begin{align}
P(\bm{b}|\bm{A}) &= \frac{P(\bm{A}|\bm{b})P(\bm{b})}{P(\bm{A})},\label{eq:bayes_dl}\\
&= \frac{2^{-\Sigma(\bm{A},\bm{b})}}{P(\bm{A})},
\end{align}
where the numerator of Eq.~\ref{eq:bayes_dl} is related to the description
length $\Sigma(\bm{A},\bm{b})$ via
\begin{equation}\label{eq:dl}
\Sigma(\bm{A},\bm{b}) = -\log_2P(\bm{A}|\bm{b})-\log_2P(\bm{b}).
\end{equation}
Therefore, maximizing Eq.~\ref{eq:bayes_dl} is identical to minimizing
Eq.~\ref{eq:dl}. Although this is already sufficient to demonstrate
their equivalence, we can go in even more detail and show that the
marginal integrated likelihood,
\begin{equation}\label{eq:canonical_marginal}
P(\bm{A}|\bm{b}) = \int P(\bm{A}|\bm\omega,\bm\kappa,\bm{b})P(\bm\omega,\bm\kappa|\bm{b})\,\mathrm{d}\bm\omega\,\mathrm{d}\bm\kappa,
\end{equation}
where $\bm\omega$ and $\bm\kappa$ are the parameters of the canonical
DC-SBM~\cite{karrer_stochastic_2011}, is identical to the marginal
likelihood of the microcanonical SBM we have used in
Eq.~\ref{eq:dcsbm-marginal}. This is proved in
Ref.~\cite{peixoto_nonparametric_2017}. Therefore, the MDL criterion is
simply an information-theoretical interpretation of the Bayesian
approach, and the two methods coincide in their
implementation.\footnote{In general, it is possible to construct
particular MDL formulations of ``universal codes'' that do not have a
clear Bayesian interpretation~\cite{grunwald_minimum_2007}. However,
these formulations are typically intractable and seldom find an
application. All MDL uses encountered in practice for the community
detection problem are equivalent to Bayesian methods.}
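The equivalence can be illustrated with a trivial sketch: ranking candidate partitions by the posterior numerator $P(\bm{A}|\bm{b})P(\bm{b}) = 2^{-\Sigma(\bm{A},\bm{b})}$ always selects the same partition as minimizing the description length. The scores below are purely illustrative numbers of ours, not derived from any real network.

```python
def description_length(log2_likelihood, log2_prior):
    # Sigma(A, b) = -log2 P(A|b) - log2 P(b)
    return -log2_likelihood - log2_prior

# Hypothetical scores (log2 P(A|b), log2 P(b)) for three candidate
# partitions of the same network; the numbers are purely illustrative.
candidates = {
    "b1": (-1200.0, -35.0),
    "b2": (-1150.0, -60.0),
    "b3": (-1180.0, -40.0),
}

# Maximizing the posterior numerator P(A|b) P(b) = 2^{-Sigma(A,b)} ...
best_bayes = max(candidates, key=lambda b: sum(candidates[b]))
# ... picks exactly the partition that minimizes the description length.
best_mdl = min(candidates, key=lambda b: description_length(*candidates[b]))
```

Since $x \mapsto 2^{-x}$ is monotonically decreasing, the two rankings can never disagree, regardless of the scores.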
The BIC criterion is based on the exact same framework, but it amounts
to an approximation of the integrated marginal likelihood of a generic
model $\mathcal M$, $ P(\bm D|\bm\theta,\mathcal M)$, where $\bm D$ is a
data vector of size $n$ and $\bm\theta$ is a parameter vector of size
$k$, given by
\begin{align}
P(\bm D|\mathcal M) &= \int P(\bm D|\bm\theta,\mathcal M)P(\bm\theta)\,\mathrm{d}\bm\theta,\\\label{eq:taylor}
&\approx \left(\frac{2\pi}{n}\right)^{k/2}\left|I(\hat{\bm\theta})\right|^{-1/2} \hat L\times P(\hat{\bm\theta}),\\
&\approx \exp(-\text{BIC}/2),
\end{align}
where $[I(\bm\theta)]_{ij}=\int(\partial\ln
P(\bm D|\bm\theta)/\partial\theta_i)(\partial\ln
P(\bm D|\bm\theta)/\partial\theta_j) P(\bm D|\bm\theta)\,\mathrm{d}\bm D$ is the
Fisher information matrix, and the values of the likelihood and parameters
are obtained at the maximum,
\begin{equation}
\hat L = \underset{\theta}{\max}\;P(\bm D|\bm\theta,\mathcal M),\qquad \hat\theta =
\underset{\theta}{\operatorname{argmax}}\;P(\bm D|\bm\theta,\mathcal M),
\end{equation}
and finally the BIC score is obtained from Eq.~\ref{eq:taylor} by assuming
$n\gg k$,
\begin{equation}
\text{BIC} = k \ln n - 2 \ln \hat L.
\end{equation}
The BIC method consists of employing the equation above as a criterion to
decide which model to select, applicable even when the candidate models
have different numbers of parameters $k$, with the first term functioning
as a penalty for larger models. Eq.~\ref{eq:taylor} corresponds to an approximation of the
likelihood obtained via Laplace's method, which involves a second-order
Taylor expansion of the log-likelihood. Therefore, it requires the
likelihood function to be well approximated by a multivariate Gaussian
distribution with respect to the parameters at the vicinity of its
maximum. However, as demonstrated by Yan \emph{et
al}.~\cite{yan_model_2014}, this assumption is invalid for SBMs, however
large the networks are, as long as they are \emph{sparse},
i.e. with an average degree much smaller than the number of nodes. This
is because for sparse SBMs we have both the number of parameters
$k=O(N)$ [or even larger, since for $B$ groups we have a matrix $\bm\omega$
of size $O(B^2)$, and in principle we could have $B=O(N)$] and
effective data size $n=O(N)$, where $N$ is the number of nodes; therefore
the ``sufficient data'' limit required for the approximation to hold is
never realized for any $N$. Furthermore, the BIC penalty completely
neglects the contribution of the prior $P(\bm\theta)$ in the
regularization, which cannot be ignored outside of this limit. Since the
vast majority of empirical networks of interest are sparse, this renders
the method unreliable, and in fact it will tend to overfit in most
cases when employed with the SBM. We emphasize that the approximation of
Eq.~\ref{eq:taylor} is unnecessary, since we can compute the marginal
likelihood of Eq.~\ref{eq:canonical_marginal} exactly for most versions
of the
SBM~\cite{guimera_missing_2009,peixoto_hierarchical_2014,come_model_2015,newman_estimating_2016,peixoto_nonparametric_2017}.
When we compare the BIC penalty with the exact values of the integrated
likelihoods we see that they in general produce significantly different
regularizations, even asymptotically, even if we add \emph{ad
hoc} parameters,
e.g. $\lambda k\ln n - 2\ln\hat L$. This is because simply counting the
number of parameters is too crude an estimation of the model complexity,
since it is composed of different classes of parameters occupying
different volumes which need (and can) be more carefully
computed. Therefore, the use of BIC for model selection in community
detection should in general be avoided.
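The BIC score itself is a one-line formula; the sketch below also makes the sparse-regime objection concrete. The schematic choice $k = n = O(N)$ is ours, standing in for the parameter and effective-data counts discussed above.

```python
import math

def bic(k, n, log_L_hat):
    # BIC = k ln n - 2 ln L_hat; lower values are preferred
    return k * math.log(n) - 2.0 * log_L_hat

# Schematic sparse-SBM regime: both the parameter count and the effective
# data size grow like O(N), so the "sufficient data" limit n >> k behind
# the Laplace approximation is never reached, for any network size N.
for N in (10**2, 10**4, 10**6):
    k, n = N, N
    assert n / k == 1.0   # n >> k never holds, however large N is
```

In the regime where BIC is justified, $n/k \to \infty$ as the data grows; here the ratio is pinned at a constant for every $N$.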
Akaike's Information Criterion (AIC)~\cite{akaike_new_1974}, on the
other hand, actually starts out from a different framework. The idea is
to assume that the data is sampled from a true generative model $P(\bm
D|\mathcal M_{\text{true}})$, and a candidate model $\mathcal M$ with
its parameter estimates $\hat{\bm\theta}(\bm D)$ is evaluated according
to its Kullback-Leibler (KL) divergence with respect to the true model,
\begin{equation}
\int P(\bm D'|\mathcal M_{\text{true}}) \ln \frac{P(\bm D'|\hat{\bm\theta}(\bm D), \mathcal M)}{P(\bm D'|\mathcal M_{\text{true}})}\;\mathrm{d}\bm D'.
\end{equation}
Of course, whenever it is relevant to employ model selection criteria we
do not have access to the true model, which means we cannot compute the
above quantity. We can, however, estimate the following upper bound,
corresponding to the average over all data $\bm D$,
\begin{equation}\label{eq:kld}
\int P(\bm D|\mathcal M_{\text{true}})P(\bm D'|\mathcal M_{\text{true}}) \ln \frac{P(\bm D'|\hat{\bm\theta}(\bm D), \mathcal M)}{P(\bm D'|\mathcal M_{\text{true}})}\;\mathrm{d}\bm D'\,\mathrm{d}\bm D.
\end{equation}
In this case, for sufficiently large data $\bm D$, the above quantity
can be estimated making use of a series of Laplace
approximations~\cite{burnham_model_2002}, resulting in
\begin{equation}
\ln P(\bm D|\hat{\bm\theta}(\bm D)) - \operatorname{tr}\left[J(\bm\theta_0)I(\bm\theta_0)^{-1}\right],
\end{equation}
where $\bm\theta_0$ is the point around which we compute the quadratic
approximation in Laplace's method, and $J_{ij}(\bm\theta_0) = \int P(\bm D|\mathcal
M_{\text{true}})\mathcal{I}_{ij}(\bm D,\bm\theta_0)\,\mathrm{d}\bm D$, $I_{ij}(\bm\theta_0) = \int P(\bm D|\bm\theta_0,\mathcal M)\mathcal{I}_{ij}(\bm D,\bm\theta_0)\,\mathrm{d}\bm D$, with
\begin{equation}
\mathcal{I}_{ij}(\bm D,\hat{\bm\theta}) = \left.\frac{\partial}{\partial\theta_i}\ln P(\bm D|\bm\theta,\mathcal M)\right|_{\theta_i = \hat\theta_i}\times\left.\frac{\partial}{\partial\theta_j}\ln P(\bm D|\bm\theta,\mathcal M)\right|_{\theta_j = \hat\theta_j}.
\end{equation}
The AIC criterion is finally obtained
by heuristically assuming
$\operatorname{tr}\left[J(\bm\theta_0)I(\bm\theta_0)^{-1}\right] \approx k$,
yielding
\begin{equation}
\text{AIC} = 2k - 2\ln P(\bm D|\hat{\bm\theta}(\bm D)),
\end{equation}
where the overall sign and multiplicative factor are a matter of
convention. It is also possible to recover AIC from BIC by making a
choice of prior $P(\mathcal M)\propto\exp(k\ln n/2 -
k)$~\cite{burnham_model_2002}, which makes it clear that it favors more
complex models over BIC. Independently of how one judges the suitability
of the fundamental criterion of Eq.~\ref{eq:kld}, just like BIC, AIC
involves several approximations that are known to be invalid for sparse
networks. Together with its heuristic nature and crude counting of
parameters, it is safe to conclude that the use of AIC is ill-advised
for community detection, especially considering the more principled and
exact alternatives of Bayes/MDL.
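Comparing only the penalty terms of the two criteria (both share the $-2\ln\hat L$ goodness-of-fit term) makes the relation to the prior $P(\mathcal M)\propto\exp(k\ln n/2 - k)$ concrete; this sketch is purely illustrative.

```python
import math

# Penalty terms only; both criteria share the -2 ln L_hat fit term.
def aic_penalty(k):
    return 2 * k              # AIC = 2k - 2 ln L_hat

def bic_penalty(k, n):
    return k * math.log(n)    # BIC = k ln n - 2 ln L_hat

# For n > e^2 ~ 7.4 the BIC penalty per parameter exceeds the AIC one,
# which is why AIC tends to favor more complex models than BIC.
```

The per-parameter gap $\ln n - 2$ grows without bound in $n$, matching the prior-based interpretation given above.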
\section{Conclusion}
We have framed the problem of community detection under two different
paradigms, namely that of ``inference'' and ``description.'' We argued
that statistical inference is unavoidable when the objective is to draw
inferential interpretations from the communities found, and we provided
a simple ``litmus test'' to help decide when this is indeed the
case. Under this framing, we showed that descriptive methods always come
with hidden inferential assumptions, and reviewed the dangers of
employing descriptive methods with inferential aims, focusing on
modularity maximization as a representative (and hence not unique) case.
We covered a series of pitfalls encountered in community detection,
as well as myths and half-truths commonly believed, and attempted to
clarify them under the same lenses, focusing on simple examples and
conceptual arguments.
Although it is true that community detection in general involves diverse
aims, and hence it is difficult to argue for a one-size-fits-all
approach, here we have taken a more opinionated stance, since it is also
not true that all approaches are used in a manner consistent with their
intended aims. We have clearly favored inferential methods, since they
are more theoretically grounded, are better aligned with well-defined
scientific questions (whenever those involve inferential queries), are
more widely applicable, and can be used to develop more robust
algorithms.
Inferential methodology for community detection has reached a level of
maturity, both in our understanding of it and in the efficiency of
available implementations, that should make it the preferred choice when
analysing network data, whenever the ultimate goal has an inferential
nature.
\section{Introduction}
\label{section:Introduction}
Massive machine-type communication (mMTC) is a rapidly growing class of wireless communications which aims to connect tens of billions of unattended devices to wireless networks.
One significant application of mMTC is that of distributed sensing, which consists of a large number of wireless sensors that gather data over time and transmit their data to a central server, which then interprets the received data to produce useful information and/or make executive decisions.
When combined with recent advances in machine learning (ML), such networks are expected to open a vast realm of economic and academic opportunities.
However, the large population of unattended devices within these networks threatens to overwhelm existing wireless communication infrastructures by dramatically increasing the number of network connections; it is expected that the number of machines connected to wireless networks will exceed the population of the planet by an entire order of magnitude.
Additionally, the traffic and demand profiles characteristic of individual sensors and actuators are highly inefficient under existing human-centric communication protocols; specifically, the sporadic and bursty nature of sensor transmissions are very costly under estimation/enrollment/scheduling procedures typical of cellular networks.
The combination of these challenges necessitates the design of novel physical and medium access control (MAC) layer protocols to efficiently handle the demands of these wireless devices.
One recently proposed paradigm for efficiently handling the demands of unattended devices is that of unsourced random access (URA), first proposed by Polyanskiy in 2017 \cite{polyanskiy2017perspective}.
URA captures many of the nuances of IoT devices by considering a network with an exceedingly large number of uncoordinated devices, of which, only a small percentage is active at any given point in time.
When a device/user is active, it encodes its short message using a common codebook and then transmits its codeword over a regularly scheduled time slot, as facilitated by a beacon.
Furthermore, the power available to each user is strictly limited and assumed to be uniform across devices.
The use of a common codebook is characteristic of URA and has two important implications: first, the network does not need to maintain a dictionary of active devices and their unique codebook information; second, the receiver does not know which node transmitted a given message unless the message itself contains a unique identifier.
The receiver is then tasked with recovering an unordered list of transmitted messages sent during each time slot by the collection of active devices.
The performance of URA schemes is evaluated with respect to the per-user probability of error (PUPE), which is the probability that a user's message is not present in the receiver's final list of decoded messages (this measure is defined in \eqref{eq:pupe}).
In \cite{polyanskiy2017perspective}, Polyanskiy provides finite block length achievability bounds for the short block lengths typical of URA applications using random Gaussian coding and maximum likelihood (ML) decoding.
However, these bounds were produced in the absence of complexity constraints and thus are impractical for deployment in real-world networks.
Over the past few years, several URA schemes have been proposed as means to obtain near-optimal performance with tractable complexity \cite{ordentlich2017isit, vem2017user, pradhan2020sparse, marshakov2019polar, pradhan2019polar, pradhan2019joint, pradhan2021ldpc, amalladinne2019coded, fengler2019sparcs, amalladinne2020unsourced, calderbank2018chirrup, ebert2020hybrid, ebert2021stochastic, decurninge2020tensorbased, shyianov2020massive, han2021sparsekron,li2020sparcldpc, xie2020correlatedmimo, liang2021iterativemimo, fengler2019massive, fengler2019mimo, truhachev2021lowcomplexura}.
All of the aforementioned URA schemes employ concatenated channel codes to recover the messages sent by the collection of active users at the receiver.
We note that the term \emph{channel code} is used broadly such that it includes certain signal dictionaries such as those commonly used for compressed sensing (CS).
Though it is conceptually simpler to decode the inner and outer codes independently, it is a well-known fact within coding theory that dynamically sharing information between the inner and outer decoders will often improve the performance of the decoder.
In this paper, we present a novel algorithm for sharing information between a wide class of inner codes and a tree-based outer code that significantly improves the PUPE performance and reduces the computational complexity of the scheme.
Specifically, our main contributions are as follows.
\begin{enumerate}
\item A general system model consisting of a wide class of inner codes and an outer tree code is developed.
An enhanced decoding algorithm is presented whereby the outer tree code may guide the convergence of the inner code by restricting the search space of the inner decoder to parity consistent paths.
\item The coded compressed sensing (CCS) scheme of Amalladinne et al. in \cite{amalladinne2019coded} is considered under this model.
The enhanced decoding algorithm is applied to CCS and the performance benefits are quantified.
\item The CCS for massive MIMO scheme of Fengler et al. in \cite{fengler2019mimo} is considered under this model.
The enhanced decoding algorithm is applied to CCS for massive MIMO and the performance benefits are quantified.
\end{enumerate}
\section{System Model}
\label{section:SystemModel}
Consider a URA system consisting of $K$ active devices which are referred to by a fixed but arbitrary label $j \in [K]$.
Each of these users wishes to simultaneously transmit their $B$ bit message $\ensuremath{\mathbf{w}}_j$ to a central base station over a Gaussian multiple access channel (GMAC) using a concatenated code consisting of an inner code $\mathcal{C}$ and an outer tree code $\mathcal{T}$.
This inner code $\mathcal{C}$ has the crucial property that, given a linear combination of $K \leq \delta$ codewords, the constituent information messages may be individually recovered with high probability.
Furthermore, we assume that the probability that any two active users' messages are identical is low, i.e. $\mathrm{Pr}(\ensuremath{\mathbf{w}}_i = \ensuremath{\mathbf{w}}_j) < \epsilon$ for $i \neq j$.
We consider a scenario where it is either computationally intractable to inner encode/decode the entire message simultaneously or it is otherwise impractical to transmit the entire inner codeword at once; thus, each user must divide its information message into fragments and inner encode/decode each fragment individually.
To ensure that the message can be reconstructed from its fragments at the receiver, the information fragments are first connected together using an outer tree-based code $\mathcal{T}$, and then inner-encoded using code $\mathcal{C}$.
The resulting signal is transmitted over the channel.
We elaborate on this process below.
Each message $\ensuremath{\mathbf{w}}_j$ is broken into $L$ fragments where fragment $\ell$ has length $m_{\ell}$ and $\sum_{\ell \in [L]} m_{\ell} = B$.
Notationally, $\ensuremath{\mathbf{w}}_j$ is represented as the concatenation of fragments by $\ensuremath{\mathbf{w}}_j = \ensuremath{\mathbf{w}}_j(1)\ensuremath{\mathbf{w}}_j(2)\hdots\ensuremath{\mathbf{w}}_j(L)$.
The fragments are outer-encoded together by adding parity bits to the end of each fragment, with the exception of the first fragment.
This is accomplished by taking random linear combinations of the information bits contained in previous sections.
The parity bits appended to the end of section $\ell$ are denoted by $\ensuremath{\mathbf{p}}_j(\ell)$, and have length $l_{\ell}$.
This outer-encoded vector is denoted by $\ensuremath{\mathbf{v}}_j$, where $\ensuremath{\mathbf{v}}_j(\ell) = \ensuremath{\mathbf{w}}_j(\ell)\ensuremath{\mathbf{p}}_j(\ell)$.
The vector $\ensuremath{\mathbf{v}}_j$ now assumes the form shown in Fig.~\ref{fig:info_parity_subblocks}.
\begin{figure}[htb]
\centering
\input{Figures/info_parity_subblocks}
\caption{This figure illustrates the structure of a user's outer encoded message, denoted by $\ensuremath{\mathbf{v}}$. Fragment $\ell$ consists of the concatenation of information bits, denoted by $\ensuremath{\mathbf{w}}(\ell)$, and parity bits, denoted by $\ensuremath{\mathbf{p}}(\ell)$.}
\label{fig:info_parity_subblocks}
\end{figure}
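The outer encoding step can be sketched as follows; the fragment sizes, generator matrices, and message bits are toy values of ours, standing in for the scheme's parameters.

```python
import random

random.seed(1)

L_FRAGS = 4                  # number of fragments per message
m = [4, 3, 3, 2]             # information bits per fragment (toy sizes)
l = [0, 2, 3, 4]             # parity bits per fragment; the first has none

# One random GF(2) generator row per parity bit of fragment s, acting on
# all information bits of fragments 0..s-1.
G = [[[random.randint(0, 1) for _ in range(sum(m[:s]))]
      for _ in range(l[s])] for s in range(L_FRAGS)]

def outer_encode(w_frags):
    """Append to each fragment parity bits formed as random GF(2) linear
    combinations of the information bits of all previous fragments."""
    v = []
    for s, frag in enumerate(w_frags):
        prev_bits = [b for f in w_frags[:s] for b in f]
        parity = [sum(g * b for g, b in zip(row, prev_bits)) % 2
                  for row in G[s]]
        v.append(list(frag) + parity)
    return v

w = [[1, 0, 1, 1], [0, 1, 0], [1, 1, 0], [0, 1]]
v = outer_encode(w)
```

Each encoded fragment $\ensuremath{\mathbf{v}}(\ell)$ thus has length $m_\ell + l_\ell$, and the parity sections deterministically chain the fragments together, which is what the tree decoder later exploits.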
After the outer-encoding process is complete, user~$j$ inner-encodes each fragment $\ensuremath{\mathbf{v}}_j(\ell)$ individually using $\mathcal{C}$ and concatenates the encoded fragments to form signal $\ensuremath{\mathbf{x}}_{j}$.
Each user then simultaneously transmits its signal to the base station over a GMAC.
The received signal at the base station assumes the form
\begin{equation}
\ensuremath{\mathbf{y}} = \sum_{j \in [K]} d \ensuremath{\mathbf{x}}_j + \ensuremath{\mathbf{z}}
\end{equation}
where $\ensuremath{\mathbf{z}}$ is a vector of Gaussian noise with independent standard normal components and $d$ accounts for the transmit power.
Recall that the receiver is tasked with producing an unordered list of all the transmitted messages.
A naive way to do this is to have the inner and outer decoders operate independently of each other.
That is, the inner decoder is run on each of the $L$ fragments in $\ensuremath{\mathbf{y}}$ to produce $L$ estimates of the outer-encoded codewords.
Since $\mathcal{C}$ has the property that, given a linear combination of its codewords, the constituent input signals may be recovered with high probability, the aggregate signal in every slot can be expanded into a list of $K$ encoded fragments $\{\hat{\ensuremath{\mathbf{v}}}_j(\ell) : j \in [K]\}$.
It is pertinent to remind the reader that $\hat{\ensuremath{\mathbf{v}}}_j(\ell)$ does not necessarily correspond to the message sent by user~$j$ as the receiver has no way of connecting a received message to an active user within URA.
At this point, the receiver has $L$ lists $\mathcal{L}_1, \mathcal{L}_2, \hdots, \mathcal{L}_{L}$, each with $K$ outer-encoded fragments.
From these lists, the receiver must estimate the $K$ messages sent by the active devices during the frame.
This is done by running the tree decoder on the $L$ lists to find parity-consistent paths across lists.
Specifically, the tree decoder first selects a root fragment from list $\mathcal{L}_1$ and computes the corresponding parity section $\ensuremath{\mathbf{p}}(2)$.
The tree decoder then branches out to all fragments in list $\mathcal{L}_2$ whose parity sections match $\ensuremath{\mathbf{p}}(2)$; each match creates a parity consistent partial path.
This process repeats until the last list $\mathcal{L}_{L}$ is processed.
At this point, if there is a single parity-consistent path from $\mathcal{L}_1$ to $\mathcal{L}_{L}$, the message created by that path is deemed valid and stored for further processing; if a given root fragment yields multiple parity-consistent paths or none at all, a decoding failure is declared.
Fig.~\ref{fig:slot_decoding} illustrates this process.
\begin{figure}[htb]
\centering
\input{Figures/slot_decoding}
\caption{This figure illustrates the operation of the tree decoder. The inner decoder $\mathcal{C}^{-1}$ produces $L$ lists of $K$ messages each. The outer tree decoder then finds parity consistent paths across lists to extract valid messages. }
\label{fig:slot_decoding}
\end{figure}
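The slot-wise pipeline can be sketched end-to-end as follows. The parity generator, fragment sizes, and user count are toy stand-ins chosen for brevity; actual tree codes use random binary generator matrices over much larger blocks. Note that the decoder below mirrors the rule stated above: only root fragments with a unique surviving path yield valid messages.

```python
import random

random.seed(1)

L, K = 4, 3             # stages and active users (toy sizes)
M_BITS, P_BITS = 4, 6   # info/parity bits per fragment (toy sizes)

# Hypothetical linear parity generator: the parity of stage ell is a fixed
# random binary function of all info bits from earlier stages.
G = [[[random.randint(0, 1) for _ in range(l * M_BITS)]
      for _ in range(P_BITS)] for l in range(L)]

def parity(prev_info, ell):
    flat = [b for frag in prev_info for b in frag]
    return tuple(sum(g * b for g, b in zip(row, flat)) % 2 for row in G[ell])

# Encode K random messages into L fragments of (info bits, parity bits).
msgs = [[tuple(random.randint(0, 1) for _ in range(M_BITS)) for _ in range(L)]
        for _ in range(K)]
lists = [[(m[l], parity(m[:l], l)) for m in msgs] for l in range(L)]
for lst in lists:
    random.shuffle(lst)   # receiver sees an unordered list per slot

# Tree decoding: grow parity-consistent paths from each root fragment.
decoded = []
for root in lists[0]:
    paths = [[root]]
    for l in range(1, L):
        paths = [p + [frag] for p in paths for frag in lists[l]
                 if frag[1] == parity([q[0] for q in p], l)]
    if len(paths) == 1:   # unique survivor: deem the message valid
        decoded.append(tuple(f[0] for f in paths[0]))
```

Because the true continuation of every root always survives, any uniquely surviving path necessarily reconstructs a transmitted message.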
While intuitive, this strategy is sub-optimal because information is not being shared by the inner and outer decoders.
If the inner and outer decoders were to operate concurrently, the output of the outer decoder could be used to reduce the search space of the inner decoder, thus guiding the convergence of the inner decoder to a parity-consistent solution and providing an avenue for reducing decoding complexity \cite{amalladinne2020enhanced, amalladinne2021mimo}.
Explicitly, assume that immediately after the inner decoder produces list $\mathcal{L}_\ell$, the outer decoder finds all parity-consistent partial paths from the root node to stage~$\ell$.
Each of these $R$ parity consistent partial paths has an associated parity section $\ensuremath{\mathbf{p}}_r(\ell+1)$.
Furthermore, it is known that only those fragments in $\mathcal{L}_{\ell+1}$ that contain one of the $\{\ensuremath{\mathbf{p}}_r(\ell+1) : r \in [R]\}$ admissible parity sections may be part of the $K$ transmitted messages.
Thus, when producing $\mathcal{L}_{\ell+1}$, the search space of the inner decoder may be reduced drastically to just the subset for which fragments contain an admissible parity section $\ensuremath{\mathbf{p}}_r(\ell+1)$.
This algorithmic enhancement has the potential to simultaneously reduce decoding complexity and improve PUPE performance.
Still, a precise characterization of the benefits of this enhanced algorithm depends on the inner code chosen.
We now consider two situations in which this algorithm may be applied: Coded Compressed Sensing (CCS) \cite{amalladinne2019coded} and CCS for massive MIMO \cite{fengler2019mimo}.
For each of the considered schemes, the complexity reduction and performance improvements are quantified.
We emphasize that this algorithmic enhancement is applicable to other scenarios beyond those considered in this paper; one such example is the CHIRRUP scheme presented by Calderbank and Thompson in \cite{calderbank2018chirrup}.
\section{Case Study 1: Coded Compressed Sensing}
\label{section:CCS}
In recent years, CCS has emerged as a practical scheme for URA that offers good performance with low complexity \cite{amalladinne2019coded, amalladinne2020unsourced, ebert2020hybrid, ebert2021stochastic}.
Though many variants of CCS have emerged, we will focus on the original version published by Amalladinne et al.\ in \cite{amalladinne2019coded}.
At its core, CCS seeks to exploit a connection between URA and compressed sensing (CS).
This connection may be understood by transforming a $B$-bit message $\ensuremath{\mathbf{w}}$ into a length-$2^B$ index vector $\ensuremath{\mathbf{m}}$; the single non-zero entry therein is a one at location $[\ensuremath{\mathbf{w}}]_2$, i.e., at the integer obtained by interpreting the binary string $\ensuremath{\mathbf{w}}$ as an unsigned number.
This bijection is denoted $f(x)$.
The vector $\ensuremath{\mathbf{m}}$ may then be compressed into signal $\ensuremath{\mathbf{x}}$ using sensing matrix $\ensuremath{\mathbf{A}}$ and transmitted over a noisy channel.
The multiple access channel naturally adds the sent signals from the active devices.
At the receiver, the original signals may be recovered from $\ensuremath{\mathbf{y}}$ using standard CS recovery techniques such as non-negative least-squares (NNLS) or least absolute shrinkage and selection operator (LASSO).
However, for messages of even modest lengths, the size of $\ensuremath{\mathbf{x}}$ is too large for standard CS solvers to handle.
To circumvent this challenge, a divide and conquer approach can be employed.
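The index mapping and compression steps can be sketched as below; the 4-bit message and the small Gaussian sensing matrix are toy placeholders, since a realistic $2^B$ would be astronomically large.

```python
import numpy as np

def message_to_index_vector(bits):
    """Bijection f: map a B-bit message to the 1-sparse length-2^B vector
    whose single one sits at the integer value of the bit string."""
    idx = int("".join(str(b) for b in bits), 2)
    m = np.zeros(2 ** len(bits))
    m[idx] = 1.0
    return m

w = [1, 0, 1, 1]                      # toy 4-bit message, value 11
m = message_to_index_vector(w)
A = np.random.default_rng(0).standard_normal((6, 16))  # toy sensing matrix
x = A @ m                             # equals column 11 of A
```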
In CCS, the inner code $\mathcal{C}$ consists of the CS encoder and the outer tree code $\mathcal{T}$ is identical to that presented in Section~\ref{section:SystemModel}.
Note that there is an additional step between $\mathcal{T}$ and $\mathcal{C}$: the outer-encoded message $\ensuremath{\mathbf{v}}$ is transformed into the inner code input $\ensuremath{\mathbf{m}}$ via the bijection described above.
Furthermore, $\mathcal{C}$ has the property that, given a linear combination of its codewords, the corresponding set of $K$ one-sparse constituent inputs may be recovered with high probability.
This, combined with the assumption that $\mathrm{Pr}(\ensuremath{\mathbf{w}}_i = \ensuremath{\mathbf{w}}_j) < \epsilon$ for $i \neq j$, makes CCS an eligible candidate for the enhanced decoding algorithm described previously.
We review below the CCS encoding and decoding operations.
\subsection{CCS Encoding}
When user~$j$ wishes to transmit a message to the central base station, it encodes its message in the following manner.
First, it breaks its $B$-bit message into $L$ fragments and outer-encodes the $L$ fragments using the tree code described in Section~\ref{section:SystemModel}; this yields outer codeword $\ensuremath{\mathbf{v}}_j$.
Recall that fragment $\ell$ has $m_\ell$ information bits and $l_\ell$ parity bits.
We emphasize that $m_\ell + l_\ell = v_{\ell}$ is constant for all sections in CCS, but the ratio of $m_\ell$ to $l_\ell$ is subject to change.
Fragment $\ensuremath{\mathbf{v}}_j(\ell)$ is then converted into a length $2^{m_\ell + l_\ell}$ index vector, denoted by $\ensuremath{\mathbf{m}}_j(\ell)$, and compressed using sensing matrix $\ensuremath{\mathbf{A}}$ into vector $\ensuremath{\mathbf{x}}_j(\ell)$.
Within the next transmission frame, user~$j$ transmits its encoded fragments across the GMAC with all other active users.
At the base station, the received vector associated with slot~$\ell$ assumes the form
\begin{equation}
\ensuremath{\mathbf{y}}(\ell) = \left( \sum_{j \in [K]} d \ensuremath{\mathbf{A}} \ensuremath{\mathbf{m}}_j(\ell) \right) + \ensuremath{\mathbf{z}}(\ell)
= d \ensuremath{\mathbf{A}} \left(\sum_{j \in [K]} \ensuremath{\mathbf{m}}_j(\ell) \right) + \ensuremath{\mathbf{z}}(\ell)
\end{equation}
where $\ensuremath{\mathbf{z}}(\ell)$ is a vector of Gaussian noise with standard normal components and $d$ reflects the transmit power.
This is a canonical form of a $K$-sparse compressed vector embedded in Gaussian noise.
\subsection{CCS Decoding}
CCS decoding begins by running a standard CS solver such as NNLS or LASSO on each section to produce $L$ $K$-sparse vectors.
The $K$ indices in each of these $L$ slots are converted back to binary representations using $f^{-1}(x)$, and the tree decoder is run on the resultant $L$ lists to produce estimates of the transmitted messages.
This process may be improved by applying the proposed enhanced decoding algorithm,
which proceeds as follows for CCS.
The inner CS solver first recovers section~$1$, and then computes the set of possible parity patterns for section~$2$, denoted by $\mathcal{P}_{2}$.
The columns of $\ensuremath{\mathbf{A}}$ are then pruned dynamically to remove all columns associated with inadmissible parity patterns in section~$2$.
This reduces the number of columns of $\ensuremath{\mathbf{A}}$ from $2^{m_2+l_2}$ to $2^{m_2}|\mathcal{P}_2|$ \cite{amalladinne2020enhanced}.
Section~$2$ is then recovered, and the process repeats itself until section $L$ has been decoded; at this point, valid paths through the $L$ lists are identified and the list of estimated transmitted messages is finalized.
Fig.~\ref{fig:enhanced_ccs_diagram} illustrates this process.
\begin{figure}[htb]
\centering
\input{Figures/enhanced_ccs_diagram}
\caption{This figure illustrates the enhanced decoding algorithm applied to CCS. After recovering $\mathcal{L}_{\ell}$, the sensing matrix $\ensuremath{\mathbf{A}}$ is pruned so that list $\mathcal{L}_{\ell+1}$ only contains parity-consistent fragments. }
\label{fig:enhanced_ccs_diagram}
\end{figure}
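The pruning step admits a compact sketch. The column-index layout below — info bits packed in the high-order positions, parity bits in the low-order positions — is an illustrative assumption rather than the convention of \cite{amalladinne2020enhanced}:

```python
import numpy as np

def prune_sensing_matrix(A, p_bits, admissible):
    """Keep only columns of A whose index, read as [info | parity] bits,
    ends in a parity pattern from the admissible set (assumed layout)."""
    keep = [i for i in range(A.shape[1])
            if (i & ((1 << p_bits) - 1)) in admissible]
    return A[:, keep], keep

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2 ** (3 + 2)))   # toy: m_ell = 3, l_ell = 2
P_next = {0b01, 0b10}                        # admissible parity patterns
A_pruned, kept = prune_sensing_matrix(A, 2, P_next)
# width shrinks from 2^(m+l) = 32 to 2^m * |P| = 16 columns
```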
\subsection{Results}
As previously mentioned, the algorithmic enhancement presented in this article has the potential to improve both the performance and the computational complexity of concatenated coding schemes.
Being a URA scheme, CCS is evaluated with respect to the per-user probability of error (PUPE), which is defined as
\begin{equation}
\label{eq:pupe}
P_e = \frac{1}{K} \sum_{j \in [K]} \mathrm{Pr} \left( \ensuremath{\mathbf{w}}_j \notin \hat{W}(\ensuremath{\mathbf{y}}) \right)
\end{equation}
where $\hat{W}(\ensuremath{\mathbf{y}})$ is the estimated list of transmitted messages, with at most $K$ items.
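In simulation, this quantity is typically estimated per frame by counting the fraction of transmitted messages missing from the recovered list; a minimal sketch (with placeholder message labels) follows.

```python
def empirical_pupe(sent, recovered):
    """Fraction of transmitted messages absent from the (unordered)
    recovered list -- a per-frame estimate of the PUPE."""
    recovered = set(recovered)
    return sum(w not in recovered for w in sent) / len(sent)

# Two of four messages recovered: empirical PUPE of 0.5.
err = empirical_pupe(["a", "b", "c", "d"], ["b", "d", "x"])
```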
Since many different CS solvers with varying computational complexities may be employed within the CCS framework, the complexity reduction offered by the enhanced decoding algorithm will be quantified by counting the number of columns removed from the matrix $\ensuremath{\mathbf{A}}$.
As discussed in \cite{amalladinne2020enhanced}, the column pruning operation has at least four major implications on the performance of CCS.
These implications are summarized below.
\begin{enumerate}
\item Many CS solvers rely on iterative methods or convex optimization solvers to recover $\ensuremath{\mathbf{x}}$ from $\ensuremath{\mathbf{y}} = \ensuremath{\mathbf{A}}\ensuremath{\mathbf{x}}$.
Decreasing the width of $\ensuremath{\mathbf{A}}$ will result in a reduction in computational complexity, the exact size of which will depend on the CS solver employed.
\item When all message fragments have been correctly recovered for stages $1, 2, \hdots, \ell$, the matrix $\ensuremath{\mathbf{A}}$ is pruned in a way that is perfectly consistent with the true signal.
In this scenario, the search space for the CS solver is significantly reduced and the performance will improve.
\item When an erroneous message fragment has been incorrectly identified as a true message fragment by stage $\ell$, the column pruning operation will guide the CS solver to a list of fragments that is more likely to contain additional erroneous fragments.
This further propagates the error and helps erroneous paths stay alive longer.
\item When a true fragment is removed from a CS list, its associated parity pattern may be discarded and disappear entirely.
This results in the loss of a correct message and additional structured noise which may decrease the PUPE performance of other valid messages.
\end{enumerate}
Despite these competing positive and negative effects, the net effect of the enhanced decoding algorithm on the system's PUPE performance is positive, as illustrated in Fig.~\ref{fig:enhanced_ccs_performance}.
This figure was generated by simulating a CCS scenario with $K \in [10, 175]$ users, each of which wishes to transmit a $B = 75$ bit message divided into $L = 11$ stages over $22{,}517$ channel uses.
NNLS was used as the CS solver.
\begin{figure}[htb]
\centering
\input{Figures/enhanced_ccs_performance}
\caption{This figure shows the required $E_b/N_0$ to obtain a PUPE of $5\%$ vs the number of active users. }
\label{fig:enhanced_ccs_performance}
\end{figure}
From Fig.~\ref{fig:enhanced_ccs_performance}, we gather that the enhanced decoding algorithm reduces the required $E_b/N_0$ by nearly $1$~dB for a low number of users.
Furthermore, for the entire range of number of users considered, the enhanced algorithm is at least as good as the original algorithm and often much better.
By tracking the expected number of parity-consistent partial paths, it may be possible to compute the expected column reduction ratio at every stage.
However, this is a daunting task, as explained in \cite{amalladinne2019coded}.
Instead, we estimate the expected column reduction ratio by applying the analysis from \cite{amalladinne2019coded} with the following simplifying assumptions:
\begin{itemize}
\item No two users have the exact same message fragments at any stage: $\ensuremath{\mathbf{w}}_i(\ell) \neq \ensuremath{\mathbf{w}}_j(\ell)$ whenever $i \neq j$ and for all $\ell \in [L]$.
\item The inner CS decoder makes no errors in producing lists $\mathcal{L}_1, \hdots, \mathcal{L}_{L}$.
\end{itemize}
Under these assumptions and starting from a designated root node, the number of erroneous paths that survive stage~$\ell$, denoted $L_\ell$, is subject to the following recursion,
\begin{equation} \label{exprec1}
\begin{split}
\mathbb{E} \big[ L_\ell \big]
&= \mathbb{E} [ \mathbb{E} [ L_\ell \mid L_{\ell-1} ] ] \\
&= \mathbb{E} \left[ ( ( L_{\ell-1}+1 ) K-1 ) 2^{-l_{\ell}} \right] \\
&= 2^{-l_{\ell}} K \mathbb{E} [ L_{\ell-1} ] + 2^{-l_{\ell}} (K-1) .
\end{split}
\end{equation}
Using the initial condition $\mathbb{E} [L_1] = 0$, we obtain the expected value
\begin{equation}
\mathbb{E}[L_\ell] = \sum_{q=2}^{\ell} \left(K^{\ell-q}(K-1)\prod_{k=q}^{\ell}2^{-l_{k}} \right) .
\end{equation}
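The recursion and the resulting closed form can be cross-checked numerically; the user count and parity profile below are arbitrary illustrative values.

```python
def expected_erroneous_paths(K, l_bits):
    """Iterate E[L_ell] = 2^(-l_ell) * (K * E[L_{ell-1}] + (K - 1))
    from E[L_1] = 0 for the parity profile l_bits = (l_1, ..., l_L)."""
    E = [0.0]
    for l in l_bits[1:]:
        E.append(2.0 ** (-l) * (K * E[-1] + (K - 1)))
    return E

def closed_form(K, l_bits, ell):
    """Evaluate the summation formula for E[L_ell] directly."""
    total = 0.0
    for q in range(2, ell + 1):
        prod = 1.0
        for k in range(q, ell + 1):
            prod *= 2.0 ** (-l_bits[k - 1])   # l_bits is 0-indexed
        total += K ** (ell - q) * (K - 1) * prod
    return total

E = expected_erroneous_paths(100, (0, 9, 9, 9))
```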
When the matrix $\ensuremath{\mathbf{A}}$ is pruned dynamically, $K$ copies of the tree decoder run in parallel and, as such, the expected number of parity-consistent partial paths at stage~$\ell$ can be expressed as
\begin{equation*}
P_\ell = K(1 + \mathbb{E}[L_\ell]) .
\end{equation*}
Under the further simplifying assumptions that all parity patterns are independent and $P_\ell$ concentrates around its mean, we can approximate the number of admissible parity patterns.
The probability that a particular path maps to a specific parity pattern is $2^{-l_\ell}$ and, hence, the probability that this pattern is not selected by any path becomes $(1 - 2^{-l_\ell})^{P_\ell}$.
Taking the complement of this event and multiplying by the number of parity patterns, we get an approximate expression for the mean number of admissible patterns,
\begin{equation}
|\mathcal{P}_\ell| \approx 2^{l_\ell} \left( 1 - \left( 1 - 2^{-l_\ell} \right)^{P_\ell} \right) .
\end{equation}
Thus, the expected column reduction ratio at slot~$\ell$, denoted $\mathbb{E}[R_\ell]$, is given by \cite{amalladinne2020enhanced}
\begin{equation}
\mathbb{E}[R_\ell] = 1 - \left( 1 - 2^{-l_\ell} \right)^{P_\ell}.
\end{equation}
Fig.~\ref{fig:column_reduction_ratio} shows the estimated versus simulated column reduction ratio across stages.
Overall, the number of columns in $\ensuremath{\mathbf{A}}$ can be reduced drastically for some stages, thus significantly lowering the complexity of the decoding algorithm.
\begin{figure}[htb]
\centering
\input{Figures/column_reduction_ratio}
\caption{This figure illustrates the column reduction ratio provided by the enhanced decoding algorithm for each stage of the outer code and a varying number of users. Lines represent numerical results and markers represent simulated results. Clearly, the size of the sensing matrix may be drastically reduced.}
\label{fig:column_reduction_ratio}
\end{figure}
\section{Case Study 2: Coded Compressed Sensing for Massive MIMO}
\label{section:CCS_MIMO}
A natural extension of the single-input single-output (SISO) version of CCS proposed in \cite{amalladinne2019coded} is a version of CCS where the base station utilizes $M \gg 1$ receive antennas.
In this scenario, we assume that the receive antennas are sufficiently separated to ensure negligible spatial correlation across channels.
Furthermore, we adopt a block fading model where the channel remains fixed for a coherence period of $n$ channel uses and all coherence blocks are assumed to be completely independent, as in \cite{fengler2019massive}.
Each active user transmits its message over $L$ coherence blocks, with one coherence block corresponding to each of the $L$ sections described above; thus the total number of channel uses is $N = nL$.
As in SISO CCS, the receiver is tasked with producing an estimated list of the messages transmitted by the collection of active users during a given time instant.
In addition to observing the received signal, the base station has knowledge of the total number of active users, the codes used for encoding messages, and the second-order statistics of MIMO channels.
Channel state information (CSI), however, is not fully known at the receiver; thus, the decoding algorithm can be characterized as non-coherent \cite{amalladinne2021mimo}.
The scheme we consider in this work was first presented by Fengler et al.\ in \cite{fengler2019mimo}.
\subsection{MIMO Encoding}
The encoding process for CCS with massive MIMO is analogous to the encoding process for CCS; for a thorough description of this process, please see Section~\ref{section:CCS}.
However, the signal received by the base station will have a different structure as the base station employs $M$ receive antennas.
Let $\ensuremath{\mathbf{x}}(t, \ell)$ denote the $t$th symbol in block $\ell$ of vector $\ensuremath{\mathbf{x}}$.
Then, the signal observed by the base station is of the form
\begin{equation}
\ensuremath{\mathbf{y}}(t, \ell) = \sum_{j \in [K]} \ensuremath{\mathbf{x}}_j(t, \ell)\ensuremath{\mathbf{h}}_j(\ell) + \ensuremath{\mathbf{z}}(t, \ell) \hspace{5mm} t \in [n], \; \ell \in [L]
\end{equation}
where $\ensuremath{\mathbf{z}}(t, \ell)$ is circularly-symmetric complex white Gaussian noise with zero mean and variance $N_0/2$ per dimension and $\ensuremath{\mathbf{h}}_j(\ell) \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_M)$ is a vector of small-scale fading coefficients representing the channel between user~$j$ and the base station's $M$ antennas.
\subsection{MIMO Decoding}
Recall that an URA receiver is tasked with producing an unordered list of the messages transmitted by the collection of active devices.
To do this, the receiver must first identify the list of fragments transmitted during each of the $L$ coherence blocks and then extract the transmitted messages by finding parity consistent paths across lists.
The receiver architecture presented in \cite{fengler2019mimo} features a concatenated code, where the inner code $\mathcal{C}$ is decoded using a covariance-based activity detection algorithm and the outer tree code $\mathcal{T}$ is decoded in a manner identical to that presented in Section~\ref{section:SystemModel}.
Recall that each active user transforms its outer-encoded message $\ensuremath{\mathbf{v}}$ into a $1$-sparse index vector $\ensuremath{\mathbf{m}}$.
Let $\{i_j(\ell) : j \in [K]\}$ denote the set of indices chosen by the active users during block $\ell$.
Then, the signal observed at the base station is of the form
\begin{equation}
\label{eq:rx_signal}
\begin{split}
\ensuremath{\mathbf{Y}}(\ell) &= \sum_{j \in [K]} \ensuremath{\mathbf{a}}_{i_j(\ell)}(\ell)\ensuremath{\mathbf{h}}_j(\ell)^\intercal + \ensuremath{\mathbf{Z}}(\ell) \\
&= \ensuremath{\mathbf{A}}(\ell)\ensuremath{\mathbf{\Gamma}}(\ell)\ensuremath{\mathbf{H}}(\ell)+\ensuremath{\mathbf{Z}}(\ell)
\end{split}
\end{equation}
where $\ensuremath{\mathbf{H}}(\ell)$ has independent $\mathcal{CN}(0, 1)$ entries, $\ensuremath{\mathbf{Z}}(\ell)$ is independent complex Gaussian noise, and $\ensuremath{\mathbf{\Gamma}}(\ell)$ is a diagonal matrix that indicates which indices have been selected during block $\ell$; that is, $\ensuremath{\mathbf{\Gamma}}(\ell) = \mathrm{diag}(\gamma_1(\ell), \hdots, \gamma_{2^{v_\ell}}(\ell))$ where
\begin{equation}
\gamma_i(\ell) =
\begin{cases}
1 & i \in \{i_j(\ell) : j \in [K]\} \\
0 & \mathrm{otherwise} .
\end{cases}
\end{equation}
Finally, $\ensuremath{\mathbf{Y}}(\ell)$ is an $n \times M$ matrix whose rows correspond to various time instants and whose columns correspond to the different antennas present at the base station.
Fig.~\ref{fig:mimo_diagram} illustrates this configuration.
\begin{figure}
\centering
\input{Figures/mimo_diagram}
\caption{This figure illustrates the structure of $\ensuremath{\mathbf{Y}}(\ell)$, where the rows correspond to time instants and the columns correspond to receive antennas. }
\label{fig:mimo_diagram}
\end{figure}
Determining which fragments were sent during coherence block~$\ell$ is equivalent to estimating $\ensuremath{\mathbf{\Gamma}}(\ell)$.
This process is referred to as activity detection and may be accomplished through covariance matching when the number of receive antennas is large, as described in \cite{fengler2019mimo}.
An iterative algorithm for estimating $\ensuremath{\mathbf{\Gamma}}(\ell)$ was first proposed by Fengler in \cite{fengler2019mimo} and is summarized in Algorithm~\ref{alg:activity}.
After the collection of fragments transmitted in each of the $L$ sub-blocks has been recovered by Algorithm~\ref{alg:activity}, tree decoding is employed to disambiguate the collection of transmitted messages.
\begin{algorithm}[htb]
\caption{Activity Detection via Coordinate Descent}\label{alg:activity}
\begin{algorithmic}[1]
\State \textbf{Inputs}: Sample covariance $\hat{\mathbf{\Sigma}}_{\mathbf{Y}(\ell)} = \frac{1}{M}\mathbf{Y}(\ell)\mathbf{Y}(\ell)^H$
\State \textbf{Initialize}: $\mathbf{\Sigma}_{\ell} = N_0 \mathbf{I}_n$, $\boldsymbol{\gamma}(\ell) = \mathbf{0}$
\For {$i=1,2,\ldots$}
\For {$k \in \mathcal{S}_\ell$}
\State Set $d^* = \frac{\ensuremath{\mathbf{a}}_k(\ell)^H \mathbf{\Sigma}_\ell^{-1} (\hat{\mathbf{\Sigma}}_{\mathbf{Y}(\ell)}\mathbf{\Sigma}_\ell^{-1} - \mathbf{I}_n)\ensuremath{\mathbf{a}}_k(\ell)} {(\ensuremath{\mathbf{a}}_k(\ell)^H \mathbf{\Sigma}_\ell^{-1} \ensuremath{\mathbf{a}}_k(\ell))^2}$
\State Update $\gamma_k(\ell) \gets \max \{ \gamma_k(\ell) + d^*, 0 \}$
\State Update $\mathbf{\Sigma}_\ell^{-1} \gets \mathbf{\Sigma}_\ell^{-1} - \frac{d^*\mathbf{\Sigma}_\ell^{-1}\ensuremath{\mathbf{a}}_k(\ell)\ensuremath{\mathbf{a}}_k(\ell)^H\mathbf{\Sigma}_\ell^{-1}}{1 + d^*\ensuremath{\mathbf{a}}_k(\ell)^H\mathbf{\Sigma}_\ell^{-1}\ensuremath{\mathbf{a}}_k(\ell)}$
\EndFor
\EndFor
\State \textbf{Output}: Estimate $\boldsymbol{\gamma}(\ell)$
\end{algorithmic}
\end{algorithm}
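A minimal NumPy sketch of Algorithm~\ref{alg:activity} follows. It applies the clipped step $d$ in both the $\gamma_k$ update and the rank-one Sherman-Morrison update, so that $\mathbf{\Sigma}_\ell^{-1}$ stays consistent with the non-negativity constraint on $\boldsymbol{\gamma}(\ell)$; all problem sizes are toy values, and real-valued matrices stand in for the complex channel model.

```python
import numpy as np

def activity_detection(Y, A, N0, n_iters=10):
    """Coordinate-descent covariance matching (sketch of Algorithm 1).
    Y: n x M received block; A: n x 2^v sensing matrix; returns gamma."""
    n, M = Y.shape
    Sigma_hat = (Y @ Y.conj().T) / M          # sample covariance
    Sigma_inv = np.eye(n) / N0                # Sigma = N0 * I initially
    gamma = np.zeros(A.shape[1])
    for _ in range(n_iters):
        for k in range(A.shape[1]):           # enhanced variant: k in S_ell
            a = A[:, k:k + 1]
            Sa = Sigma_inv @ a
            aSa = (a.conj().T @ Sa).real.item()
            # numerator a^H S^-1 (Sigma_hat S^-1 - I) a, rewritten via Sa
            num = (Sa.conj().T @ (Sigma_hat @ Sa - a)).real.item()
            d = max(num / aSa ** 2, -gamma[k])  # keep gamma_k >= 0
            gamma[k] += d
            Sigma_inv -= (d * (Sa @ Sa.conj().T)) / (1.0 + d * aSa)
    return gamma

rng = np.random.default_rng(0)
n, M, V = 8, 200, 16
A = rng.standard_normal((n, V)) / np.sqrt(n)
H = rng.standard_normal((2, M))               # two active indices: 3 and 11
Y = A[:, [3, 11]] @ H + np.sqrt(0.1) * rng.standard_normal((n, M))
gamma = activity_detection(Y, A, N0=0.1)
```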
As before, it is possible to leverage the enhanced version of the tree decoding process, with its dynamic pruning, to improve performance and lower complexity.
The application of the proposed algorithmic enhancement to the activity detection algorithm may be visualized in the following way.
Let $\mathcal{S}_\ell$ denote the set of indices over which coordinate descent is performed during coherence block $\ell$; in its original formulation, $\mathcal{S}_\ell = [2^{v_\ell}]$.
After list $\mathcal{L}_1$ has been produced by the activity detection algorithm, the tree decoder can compute the set of all admissible parity patterns $\mathcal{P}_2$ for list $\mathcal{L}_2$; then, $\ensuremath{\mathbf{A}}(2)$ may be pruned to only contain those columns corresponding to messages with parity patterns in $\mathcal{P}_2$.
A similar strategy can be applied moving forward, yielding a reduced admissible set $\mathcal{P}_{\ell}$ for parity patterns at stage~$\ell$.
In turn, this reduces the index set $\mathcal{S}_{\ell}$ to
\begin{equation}
\mathcal{S}_{\ell} = \{[\ensuremath{\mathbf{w}}(\ell)\ensuremath{\mathbf{p}}(\ell)]_2 : \ensuremath{\mathbf{w}}(\ell) \in \{0, 1\}^{{m_\ell}}, \ensuremath{\mathbf{p}}(\ell) \in \mathcal{P}_{\ell} \}
\end{equation}
which may be significantly smaller than $[2^{v_\ell}]$.
This algorithmic refinement guides the activity detection algorithm to a parity consistent solution and reduces the search space of the inner decoder, thus improving performance significantly \cite{amalladinne2021mimo}.
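Constructing the reduced index set is straightforward; as in the CCS sketch earlier, the packing of $\ensuremath{\mathbf{w}}(\ell)$ above $\ensuremath{\mathbf{p}}(\ell)$ within a single integer index is an assumed layout.

```python
def reduced_index_set(m_bits, p_bits, admissible):
    """S_ell: indices whose bit pattern [w | p] ends in an admissible
    parity pattern (info bits packed above parity bits by assumption)."""
    return sorted((w << p_bits) | p
                  for w in range(2 ** m_bits)
                  for p in admissible)

S = reduced_index_set(3, 2, {0b01, 0b10})
# |S| = 2^3 * 2 = 16 instead of the full 2^5 = 32 indices
```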
\subsection{Results}
The simulation results presented in this section correspond to a scenario with $K \in [25, 150]$ active users and $M \in [25, 125]$ antennas at the base station.
Each user encodes its $96$-bit message into $L = 32$ blocks with $100$ complex channel uses per block.
The length of the outer-encoded block is $v_\ell = 12$ for all $\ell \in [L]$, and a parity profile of $(l_1, l_2, \hdots, l_{L}) = (0, 9, 9, \hdots, 9, 12, 12, 12)$ is employed.
The energy per bit $E_b/N_0$ is fixed at $0$~dB and the columns of $\ensuremath{\mathbf{A}}(\ell)$ are chosen randomly from a sphere of radius $\sqrt{nP}$.
These parameters are chosen to match \cite{fengler2019mimo}.
Fig.~\ref{fig:ccs_mimo_performance} shows the PUPE of this scheme for a range of active users and several different values of $M$.
In this figure, the dashed lines represent the performance of the original algorithm and the solid lines represent the performance of the enhanced version with dynamic pruning.
\begin{figure}[htb]
\centering
\input{Figures/ccs_massive_mimo_performance}
\caption{This figure illustrates the performance advantage of applying the enhanced decoding algorithm presented in this paper to CCS for massive MIMO. The dashed line represents the original performance from \cite{fengler2019mimo} and the solid line represents the performance of the enhanced algorithm. }
\label{fig:ccs_mimo_performance}
\end{figure}
From Fig.~\ref{fig:ccs_mimo_performance}, we gather that the proposed algorithm reduces the PUPE for a fixed number of active users and a fixed number of antennas at the base station.
Additionally, this algorithm may be used as a means to reduce the number of antennas required to achieve a target PUPE.
For instance, when $K = 100$, the enhanced algorithm allows for a $23\%$ reduction in the number of antennas at the base station with no degradation in error performance.
Fig.~\ref{fig:ccs_mimo_runtimes} provides the ratio of average runtimes of the enhanced decoding algorithm versus the original decoding algorithm.
The enhanced decoding algorithm also offers a significant reduction in computational complexity, especially for a low number of active users.
\begin{figure}[htb]
\centering
\input{Figures/ccs_massive_mimo_runtimes}
\caption{This figure plots the ratio of average runtimes between the enhanced decoding algorithm and the original algorithm. As seen above, dynamic pruning offers a significant reduction in computational complexity compared to standard tree decoding. }
\label{fig:ccs_mimo_runtimes}
\end{figure}
\section{Conclusion}
In this article, a framework for a concatenated code architecture consisting of a structured inner code and an outer tree code was presented.
This framework was specifically designed for URA applications, but may find applications in other fields as well.
An enhanced decoding algorithm was proposed for this framework that promises to improve performance and decrease computational complexity.
This enhanced decoding algorithm was applied to two URA schemes: coded compressed sensing (CCS) and CCS for massive MIMO.
In both cases, PUPE performance gains were observed and the decoding complexity was significantly reduced.
The proposed algorithm is a natural extension of the existing literature.
From coding theory, we know that there are at least three ways for inner and outer codes to interact.
Namely, the two codes may operate completely independently of one another in a Forney-style concatenated fashion; this is the style of the original CCS decoder presented in \cite{amalladinne2019coded}.
Secondly, information messages may be passed between inner and outer decoders as both decoders converge to the correct codeword; this is the style of CCS-AMP, which was proposed by Amalladinne et al.\ in \cite{amalladinne2020unsourced}.
Finally, a successive cancellation decoder may be employed in the spirit of coded decision feedback; this is the style highlighted in this article and considered in \cite{amalladinne2020enhanced, amalladinne2021mimo}.
Thus, the dynamic pruning introduced in this paper can be framed as an application of coding theoretic ideas to a concatenated coding structure that is common within URA.
Though the examples presented in this article pertained to CCS, we emphasize that dynamic pruning may be applicable to many algorithms beyond CCS.
For instance, this approach may be relevant to support recovery in exceedingly large dimensions, where a divide and conquer approach is needed.
As long as the inner and outer codes subscribe to the structure described in Section~\ref{section:SystemModel}, this algorithmic enhancement can be leveraged to obtain performance and/or complexity improvements.
\bibliographystyle{IEEEbib}
\section{INTRODUCTION}
One of the most fundamental problems in cosmology is to understand
the formation and evolution of the large-scale structure in the universe
such as galaxies, groups and clusters of galaxies, etc.
In order to understand the large-scale structure, however,
it is highly desirable to have an analytical framework
within which theoretical predictions for structure formation can be made.
The cosmological mass function, $n(M)$
[$n(M)dM$: defined as the comoving number density of gravitationally
bound structures -- dark halos with mass $M$]
provides this analytical tool
since different candidate models for structure formation
predict different number densities of dark halos.
Press \& Schechter (1974, hereafter PS) developed for the first time
an analytic formalism to evaluate the mass function.
Finding a mass function requires both dynamics and statistics.
Dynamically PS adopted the top-hat spherical model, according to which
the collapse condition for forming dark halos is determined purely
by its local average density.
Statistically PS assumed that the initial density field is Gaussian,
and selected dark halos from the peaks of the linear Gaussian density
field.
The practical success of the PS mass function
(e.g., \cite{efs-etal88}; \cite{la-co94}) and the absence of
viable alternatives led many authors to use it
routinely in the last decade (e.g., \cite{col-kai88};
\cite{whi-fre91}; \cite{kau-etal93}; \cite{fan-etal97}).
But, recently high resolution N-body simulations have shown
the limitation of the PS theory:
First, it has been clearly shown by several N-body simulations that
the true gravitational collapse process must be nonspherical
(e.g., \cite{sh-etal95}; \cite{kuh-etal96}),
indicating the weakness of the PS dynamical background.
Second, N-body simulations also showed that
peaks of the linear density field are ``poorly'' correlated with
the final spatial locations of dark halos (e.g., \cite{kat-etal93}),
although the PS formalism assumes that dark halos form in the
peaks of the density field.
Third, recent high-resolution numerical tests have detected that
experimental results are flatter than the standard PS mass
function in shape
(e.g., Governato et al. 1998; Tormen 1998, and references therein; \cite{she-tor99}).
Currently various attempts have been made to find a better-fitting
mass function. One approach to better mass functions
has been focused on finding phenomenological fitting parameters,
keeping the original PS formula unchanged;
for example, regarding the density threshold as a function of
redshift, etc.
(see Governato et al. 1998; \cite{she-tor99}).
Another approach has been on improving the dynamics of the PS formalism
by implementing anisotropic collapse conditions
(e.g., \cite{mo95}; Audit, Teyssier, \& Alimi 1997).
A fully analytical alternative to the PS mass function following this approach was derived by \cite{lee-sha98} in the framework of the Zel'dovich approximation, under the assumption that dark halos correspond to collapse along the third principal axis.
Indeed the mass
function we found in the previous paper is both a dynamically and
statistically improved version of the PS one. First, it is based on
a more realistic {\it nonspherical} dynamical model. Second, the
underlying statistical assumption that dark halos form in the
local maxima of the smallest eigenvalue of the deformation
tensor, $\lambda_{3}$ (see $\S 2$) is in general
agreement with the N-body results performed by \cite{sha-kly84}.
Third, and most importantly, our mass function was shown to have desirable
properties such as a lower peak and more high-mass halos. That is,
our mass function is flatter than the PS one.
In this Letter we present numerical testing results of our mass
function for the case of two fiducial models: the scale-free
power-law spectra with spectral indices $n=-1, 0$, and the standard
cold dark matter (SCDM) model with $\Omega =1$ and $h=0.5$.
In $\S 2$ we briefly summarize the analytic mass function theories
for the reader's convenience.
In $\S 3$ we explain the N-body simulations used to produce
the numerical mass functions, and compare the analytical mass functions
with the numerical results.
In $\S 4$ we draw a final conclusion.
\section{SUMMARY OF MASS FUNCTION THEORIES}
The PS theory assumes that dark halos of mass $M$ form
hierarchically in the regions where the linear Gaussian density
field $\delta \equiv (\rho - \bar{\rho})/\bar{\rho}$
($\bar{\rho}$: mean density) filtered on mass scale $M$ reaches
its threshold value $\delta_{c}$ for collapse:
\begin{equation}
n_{PS}(M) = \sqrt{\frac{2}{\pi}}\frac{\bar{\rho}}{M^2}
\Bigg{|}\frac{d\ln\sigma}{d\ln M}\Bigg{|}
\frac{\delta_{c}}{\sigma}\exp\bigg{[}-\frac{\delta_{c}^2}
{2\sigma^2}\bigg{]}.
\end{equation}
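As a quick consistency check (our own sketch, not from the paper), the PS mass fraction per logarithmic mass interval, $dF/d\ln M = (M^{2}/\bar{\rho})\,n_{PS}(M)$, can be written in terms of $\nu = \delta_{c}/\sigma$ and must integrate to unity over all mass scales, i.e., the PS formula locks all mass into halos:

```python
import math

def ps_mass_fraction(nu):
    # dF/dln(nu): PS mass fraction per unit ln(nu), with nu = delta_c / sigma(M)
    return math.sqrt(2.0 / math.pi) * nu * math.exp(-nu * nu / 2.0)

# trapezoid rule in ln(nu) over a wide range of mass scales
lnnu = [-10.0 + 20.0 * i / 20000 for i in range(20001)]
f = [ps_mass_fraction(math.exp(x)) for x in lnnu]
h = lnnu[1] - lnnu[0]
total = h * (sum(f) - 0.5 * (f[0] + f[-1]))
print(total)  # close to 1: all mass is assigned to halos
```

The normalization to unity relies on the well-known factor of 2 that is already built into the $\sqrt{2/\pi}$ prefactor of the PS formula.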
The density threshold $\delta_{c}$ for a flat universe
is originally given by the spherical top-hat model:
$\delta_{c} \approx 1.69$ (e.g., \cite{pee93}).
However, many numerical tests have found that a lowered $\delta_{c}$
(roughly $1.5$)
gives a better fit in the high-mass section
(e.g., \cite{efs-ree88}; \cite{car-cou89}; \cite{kly-etal95};
\cite{bo-my96}). It is also worth mentioning that
any PS-like formalism is least reliable in the low-mass section
(\cite{mo95}).
This numerical finding can be understood through the following
dynamical argument: although the top-hat spherical model
predicts that gravitational collapse to ``infinite'' density
occurs when the density reaches $\delta_{c}\approx 1.69$, halos
in realistic cases can form earlier through a rapid virialization
process driven by the growth of small-scale inhomogeneities (\cite{sha-etal99}).
On the other hand, according to our approach (Lee \& Shandarin 1998),
dark halos of mass $M$ form from the Lagrangian regions
where the lowest eigenvalue $\lambda_{3}$
($\lambda_3<\lambda_2<\lambda_1, ~~~
\delta =\lambda_1+\lambda_2+\lambda_3$)
of the deformation tensor $d_{ij}$ (defined as the second derivative
of the perturbation potential $\Psi$ such that
$d_{ij}=\partial^2 \Psi /\partial q_i\partial q_j$, $q_i$ is the Lagrangian
coordinate) reaches its threshold $\lambda_{3c}$ for collapse
on the scale $M$:
\begin{eqnarray}
n_{LS}(M) &=& \frac{25\sqrt{10}}{2\sqrt{\pi}}
\frac{\bar{\rho}}{M^2}\Bigg{|}
\frac{d\ln\sigma}{d\ln M}\Bigg{|}\frac{\lambda_{3c}}{\sigma}
\Bigg{\{}
\Big{(}\frac{5\lambda_{3c}^2}{3\sigma^2}-\frac{1}{12}\Big{)}
\exp\Big{(}-\frac{5\lambda_{3c}^2}{2\sigma^2}\Big{)}
{\rm erfc}\Big{(}\sqrt{2}\frac{\lambda_{3c}}{\sigma}\Big{)}
\nonumber \\
&& +\frac{\sqrt{6}}{8}
\exp\Big{(}-\frac{15\lambda_{3c}^2}{4\sigma^2}\Big{)}
{\rm erfc}\Big{(}\frac{\sqrt{3}\lambda_{3c}}{2\sigma}\Big{)}
-\frac{5\sqrt{2\pi}\lambda_{3c}}{6\pi\sigma}
\exp\Big{(}-\frac{9\lambda_{3c}^2}{2\sigma^2}\Big{)}
\Bigg{\}}.
\end{eqnarray}
In the original derivation of our mass function, the threshold
$\lambda_{3c}$ for collapse has been empirically chosen to be $0.37$.
A similar logic used to give a dynamical explanation to
the lowered $\delta_{c}$ of the PS formalism applies here.
Although a simple extrapolation of the Zel'dovich approximation to
nonlinear regime predicts that the formation of dark halos
corresponding to the third axis collapse
occurs at $\lambda_{3c} = 1$, the first and
the second axis collapse speed up the formation of halos, which
would result in a lowered $\lambda_{3c}$ (see also \cite{au-etal97}).
In $\S$ 3, we show that our mass function with
the originally suggested value of $\lambda_{3c} = 0.37$
does agree with the numerical data quite well.
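The shape claims above are easy to check numerically. The sketch below is our own transcription of Eqs. (1) and (2) into Python (with the common factor $|d\ln\sigma/d\ln M|$ dropped, since it cancels in the comparison); it confirms the heavier high-mass tail and lower peak for $\lambda_{3c}=0.37$, $\delta_{c}=1.69$:

```python
import math

DELTA_C, LAMBDA_3C = 1.69, 0.37

def f_ps(sigma):
    # PS mass fraction per unit ln M (the |dln sigma/dln M| factor is dropped)
    nu = DELTA_C / sigma
    return math.sqrt(2.0 / math.pi) * nu * math.exp(-nu * nu / 2.0)

def f_ls(sigma):
    # Lee-Shandarin mass fraction per unit ln M, same convention
    x = LAMBDA_3C / sigma
    pre = 25.0 * math.sqrt(10.0) / (2.0 * math.sqrt(math.pi)) * x
    a = ((5.0 * x * x / 3.0 - 1.0 / 12.0) * math.exp(-2.5 * x * x)
         * math.erfc(math.sqrt(2.0) * x))
    b = ((math.sqrt(6.0) / 8.0) * math.exp(-3.75 * x * x)
         * math.erfc(math.sqrt(3.0) * x / 2.0))
    c = (5.0 * math.sqrt(2.0 * math.pi) * x / (6.0 * math.pi)
         * math.exp(-4.5 * x * x))
    return pre * (a + b - c)

# high-mass end (small sigma): LS predicts more halos than PS
print(f_ls(0.5), f_ps(0.5))  # the LS value exceeds the PS value here
```

Scanning $\sigma$ from 0.1 to 4 also shows the LS peak lying below the PS one, consistent with the flatter shape discussed in the text.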
\section{NUMERICAL vs. ANALYTICAL MASS FUNCTIONS}
\subsection{Comparison for Scale-Free Model}
The N-body simulations of a flat matter-dominated universe for power-law
spectra $P(k) \propto k^n$ with spectral indices $n = -1$ and $0$
were run by \cite{whi94} using a Particle-Particle-Particle-Mesh code
with $100^3$ particles in a $256^3$ grid with periodic boundary conditions.
Tormen (1998, 1999)
identified dark halos from the N-body simulations using
a standard halo finder -- the friends-of-friends algorithm
with a linking length $0.2$ [hereafter FOF (0.2)].
Numerical data for the $n = -1$ power-law model
were obtained at 10 different output times
from two N-body realizations, and a final $n=-1$
numerical mass function was then obtained by averaging over
the 10 output values.
For the $n = 0$ model, 4 outputs from one N-body
realization were averaged to produce the final numerical mass function.
For a detailed description of the simulations, see \cite{tor-etal97}.
Here we use the final average numerical mass functions for comparison data.
For the power-law spectra,
the mass variance is given by the following simple form:
\begin{equation}
\sigma^{2}(M) = \Bigg{(}\frac{M}{M_{0}}\Bigg{)}^{-(n+3)/3} ,
\end{equation}
where $M_{0}$ is the characteristic mass scale
\footnote{In \cite{lee-sha98}, the characteristic mass was denoted
by $M_{*}$. Here, however, we use $M_{*}$ to denote a slightly
different mass scale; readers should keep this change of
notation in mind.}
defined by $\sigma(M_{0}) = 1$. It is sometimes useful to define
a filter-dependent nonlinear mass scale $M_{*}$ related to $M_{0}$ by
$M_{*} \equiv M_{0}(\delta_{c})^{-6/(n+3)}$ for a dimensionless
rescaled mass variable $M/M_*$ such that $\sigma(M_*) = \delta_c$
(see \cite{la-co94}).
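A one-line check (ours; $M_{0}=1$ is an arbitrary unit) confirms that the rescaled mass indeed satisfies $\sigma(M_{*}) = \delta_{c}$:

```python
def sigma(M, n, M0=1.0):
    # power-law mass variance of Eq. (3): sigma^2(M) = (M/M0)^{-(n+3)/3}
    return (M / M0) ** (-(n + 3.0) / 6.0)

delta_c = 1.69
for n in (-1, 0):
    Mstar = 1.0 * delta_c ** (-6.0 / (n + 3.0))  # M* = M0 * delta_c^{-6/(n+3)}
    print(n, sigma(Mstar, n))  # -> 1.69 for both spectral indices
```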
Figure 1 plots the fraction of mass in halos with mass $M$,
$dF/d\ln M = (M^{2}/\bar{\rho})n(M)$:
our mass function with $\lambda_{3c} = 0.37$ (solid line) against
the averaged numerical data with Poissonian error bars,
and the PS mass function with $\delta_{c} = 1.69$ and $1.5$
(dashed and dotted lines respectively) as well. The upper panel
corresponds to the $n=-1$ power-law model, while the lower panel
corresponds to the $n=0$ model.
As one can see, our mass function fits the numerical data much better
than the PS ones for the $n=-1$ model in the high-mass section
($M > M_{*}$).
In fact \cite{tor98} also used the spherical overdensity
algorithm [SO (178)] as another halo finder, and showed that the numerical
mass functions from FOF (0.2) and SO (178) are almost identical.
We compared the analytical mass functions with his numerical data
obtained from SO (178) and also found similar results.
For the $n=0$ model, by contrast, neither our mass function nor the PS
one fits the numerical data well in the high-mass section.
Yet in the low-mass section $(M < M_{*})$ our mass function fits
the data slightly better than the PS one.
\subsection{Comparison for SCDM model}
\cite{gov-etal98} provided halo catalogs produced from
one large N-body realization (comoving box size of
$500h^{-1}{\rm Mpc}$, $47$ million particles on a $360^{3}$ grid)
of SCDM model with $\Omega = 1, h = 0.5$ for four different epochs:
$z = 0$, $0.43$, $1.14$ and $1.86$ which are respectively
normalized by
$\sigma(8h^{-1}{\rm Mpc}) = 1.0$, $0.7$, $0.467$ and $0.35$.
They adopted the transfer function given by \cite{bar-etal86} and
also used the FOF (0.2) halo finder. For a detailed description
of the simulations, see \cite{gov-etal98}.
We derived the numerical mass functions from the catalogs
by directly counting the number densities of halos in logarithmic
scale for each epoch.
In accordance with \cite{gov-etal98}, we consider only halos with more
than $64$ particles (corresponding to $ M > 10^{14}M_{\odot}$)
in order to avoid small-number effects in the N-body simulations.
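The counting procedure can be sketched as follows. The code and the toy catalog below are purely illustrative (the actual halo masses and box volume come from the \cite{gov-etal98} catalogs):

```python
import math

def mass_function(masses, volume, nbins=10):
    # number density of halos per logarithmic (log10) mass bin
    logs = [math.log10(m) for m in masses]
    lo, hi = min(logs), max(logs) + 1e-9
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for x in logs:
        counts[int((x - lo) / width)] += 1
    centers = [lo + (i + 0.5) * width for i in range(nbins)]
    dens = [c / (volume * width) for c in counts]
    return centers, dens, width

# toy catalog; only halos above the 64-particle resolution limit are kept
m_particle = 1.0e14 / 64.0
catalog = [1.0e14 * (1.0 + 0.5 * i) for i in range(40)]
catalog = [m for m in catalog if m / m_particle >= 64.0]
centers, dens, width = mass_function(catalog, volume=500.0**3)
```

Integrating the binned densities over the bins recovers the total halo count, which is the basic sanity check on any such binning.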
Figure 2 shows the comparison results.
Our mass function with $\lambda_{3c} = 0.37$ (solid line) fits the
numerical data much better than the PS ones with
$\delta_{c} = 1.69$ and $1.5$ (dashed and dotted lines respectively)
for all chosen epochs.
\section{CONCLUSION}
We have numerically tested an analytical mass function recently
derived by \cite{lee-sha98}, and compared the results with that of
the standard Press-Schechter one.
Our mass function is not just a phenomenologically obtained
fitting formula but a new analytic formula derived through
modification of the PS theory using a nonspherical dynamical model.
It is based on the Zel'dovich approximation taking
into account the nonspherical nature of real gravitational collapse
process, while the PS mass function
is based on the top-hat spherical model.
Consequently our mass function is characterized by
the threshold value of the smallest eigenvalue of the deformation
tensor, $\lambda_{3c}$ while the PS one by the density threshold,
$\delta_{c}$.
We have shown that in the power-law model
with spectral index $n=-1$ and the four different epochs of SCDM
\footnote{At four different epochs of the SCDM model we effectively
probe the dependence of the fit to the slope of the initial spectrum.}
our mass function with $\lambda_{3c} = 0.37$ is
significantly better than the PS one with $\delta_{c} = 1.69$ and $1.5$.
It fits the numerical data well especially in the high-mass section
(corresponding to groups and clusters of galaxies) for these two
models.
Furthermore it is worth noting that in the testing results for SCDM model
our mass function agrees with the data well with a consistent threshold
value of $0.37$ at all chosen redshifts.
In contrast, the test for the $n=0$ power-law model has
shown considerable discrepancies between both analytical
mass functions (ours and the PS one) and the
numerical data in the high-mass section. Such discrepancies with
theory for the $n=0$ model, however, have been detected before
(\cite{la-co94}).
Yet in the low-mass section our mass function fits the data slightly
better for this case.
Although we have tested our mass function for only two different models,
given the promising results demonstrated here,
we conclude that it will provide a more accurate
analytical tool for studying structure formation. Further tests of the
new mass function are clearly desirable and will be reported
in subsequent publications.
\acknowledgments
We are very grateful to Giuseppe Tormen, who provided the $n=-1,0$ numerical
mass functions and also served as the referee,
helping to improve the original manuscript.
We are also grateful to Fabio Governato for providing the
SCDM halo catalogs and for useful comments.
We acknowledge the support of EPSCoR 1998 grant.
S. Shandarin also acknowledges the support of GRF 99 grant
from the University of Kansas and the TAC Copenhagen.
\newpage
\section{Introduction}
In standard random percolation, the universality of the critical
exponents is quite familiar. The values of these universal quantities depend
only upon the dimensionality of the system. This is also
the case for the amplitude ratios of quantities such as the
mean cluster size and correlation length that are defined essentially
for infinite systems. Even though one might use finite systems
to measure such quantities, in the limit of the system length
scale $L \to \infty$, the shape of those finite systems becomes
irrelevant as $L$ exceeds the correlation length $\xi(p)$.
However, if a system is simulated exactly at $p_c$ where $\xi$ is
in principle infinite,
or if one takes the scaling limit where $p$ approaches $p_c$ as $L$ is
increased such that $\xi$ remains a fixed fraction of $L$, then
the shape of the system (or the sequence of systems) becomes
relevant. This leads to another and larger
class of universal quantities, whose values depend upon the shape of the boundary
of the system and the boundary conditions, as well as the dimensionality.
In spite of
the resulting proliferation of classes, one still considers
these quantities to be universal, because they remain independent of the ``microscopic"
features of the system (lattice type or continuum model used, whether
site or bond percolation, etc.).
Examples of shape-dependent universal properties go back to the work of
Privman and Fisher \cite{PrivmanFisher} concerning Ising models on a torus. In percolation, the various
crossing problems considered by Langlands et al.\ \cite{Langlands92,Langlands94}, Aizenman \cite{Aizenman},
Cardy \cite{Cardy}, Ziff \cite{Ziff92}, Hovi and Aharony \cite{HoviAharony},
Hu et al.\ \cite{HuLin,HuLinChen}, Watts \cite{Watts} and Pinson \cite{Pinson}
show this type of universality. These crossing quantites are all defined in terms of
macroscopic features of the system, so universality is clear.
Recently, Aharony and Stauffer \cite{AharonyStauffer}
have examined the shape dependence of critical universal ratios
such as $L^{-d}S / P_\infty^2$ where $S$ is the mean size (in sites)
of finite clusters and
$P_\infty$ is the probability a site belongs to the ``infinite"
cluster (one spanning the system, by some consistent definition).
Although these quantities are defined on a microscopic (site) level,
the microscopic aspect cancels out by scaling theory
and the ratio is universal --- but again system-shape dependent.
Similar quantities for the Ising model
were studied by Kamieniarz and Bl\"ote \cite{KamieniarzBlote}
and M\"uller \cite{Muller}.
Here we discuss two shape-dependent universal
quantities: the excess cluster number and
the cross-configuration probability. We
also focus on twisted boundaries and the subtleties of the triangular lattice.
\section{The excess number of clusters}
Exactly at the critical threshold, the number of clusters per site or per unit area
is a finite
non-universal constant $n_c$, whose values for site (S) and bond (B) percolation on square
(SQ) and triangular (TR) lattices were examined in detail in \cite{ZiffFinchAdamchik}. \
Here we quote those results in terms of the number per unit area, taking the lattice bond length to be
unity:
$n_c^{S-SQ} = 0.027\,598\,1(3)$ and
$n_c^{S-TR} = 0.020\,352\,2(6)$. \
For B-SQ, Temperley and Lieb \cite{TemperleyLieb} showed
\begin{equation}
n_c^{B-SQ} = \left[\left( -{\cot \mu \over 2} {\d \over \d \mu} \right)
\left\{{1\over 4\mu}\int_{-\infty}^\infty {\mathrm sech} \left( {\pi x \over2\mu} \right)
\ln \left( {\cosh x - \cos 2 \mu \over \cosh x - 1} \right) \d x \right\}\right]_{\mu={\pi\over3}}
\end{equation}
which evaluates simply to \cite{ZiffFinchAdamchik}
\begin{equation}
n_c^{B-SQ} = {3 \sqrt 3 - 5 \over 2} = 0.098\,076\,211\ldots \ .
\end{equation}
Likewise, Baxter, Temperley, and Ashley's \cite{BaxterTemperleyAshley} integral
expression for $n_c^{B-TR}$ evaluates to $(35/6 - 2/p_c^{B-TR})\sqrt 3$
$=0.129\,146\,645\ldots$ on a per-unit-area basis,
where $p_c^{B-TR} = 2\sin(\pi/18)$. \ Here,
clusters in bond percolation
are characterized by the number of wet sites they contain, with isolated sites corresponding to
clusters of size one.
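Both closed-form densities are easy to verify numerically (our check):

```python
import math

# bond percolation, square lattice, clusters per unit area
n_sq = (3.0 * math.sqrt(3.0) - 5.0) / 2.0

# bond percolation, triangular lattice, clusters per unit area
p_c = 2.0 * math.sin(math.pi / 18.0)
n_tr = (35.0 / 6.0 - 2.0 / p_c) * math.sqrt(3.0)

print(n_sq, n_tr)  # -> 0.098076..., 0.129146...
```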
While $n_c$ is non-universal, its finite-size correction is universal.
For an $L \times L'$ critical system with periodic boundary
conditions on all sides, the total number of
clusters $N$ behaves as
$
N(L,L') = n_c LL' + b + O((LL')^{-1})
$
where $b$ is a universal function of $r = L'/L$ \cite{ZiffFinchAdamchik}. That is,
\begin{equation}
b(r) = \lim_{L \to \infty}[N(L,L') - n_c LL' ]
\label{eq:br}
\end{equation}
with $L' = rL$, and $b$ represents the excess number of
clusters over what one would expect from the bulk density. As such, it
reflects the large clusters of the system, which is the basis of
its universality. Aharony and Stauffer \cite{AharonyStauffer} showed
that the universality of $b$ follows directly from the arguments of Privman and Fisher \cite{PrivmanFisher}
applied to percolation, implying that $b$ is
precisely the value of the free energy scaling function at $p_c$ and $h=0$. \ Note
that this ``free energy" is $\sum_s n_s e^{-hs}$, different from the free energy
given below.
In \cite{ZiffFinchAdamchik}, $b(1)$ was numerically found to equal 0.884 for both site and bond
percolation on a SQ lattice, demonstrating universality.
It was also found that $b(2) = 0.991$, $b(4) = 1.512$, and for large $r$ (systems of very high aspect ratio),
$b(r) \sim \tilde b \, r$
with $\tilde b = 0.3608$. Here $\tilde b$ is
the excess number per unit length along an infinite periodic strip or
cylinder. Periodic b.\ c.\ are essential in this
problem to eliminate boundary effects which would otherwise overwhelm $b$.
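Operationally, measuring $b$ via Eq. (\ref{eq:br}) amounts to counting clusters on periodic lattices of increasing size and subtracting $n_c LL'$. A minimal union-find cluster counter for bond percolation on an $L \times L'$ torus might look like this (our illustration, not the code used in \cite{ZiffFinchAdamchik}):

```python
import random

def count_clusters(L, Lp, p, seed=0):
    # clusters of bond percolation on an L x Lp torus (sites joined by open bonds)
    rng = random.Random(seed)
    parent = list(range(L * Lp))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for x in range(L):
        for y in range(Lp):
            i = x * Lp + y
            # bonds to the right and upward neighbors, with periodic b.c.
            for j in (((x + 1) % L) * Lp + y, x * Lp + (y + 1) % Lp):
                if rng.random() < p:
                    union(i, j)
    return sum(1 for i in range(L * Lp) if find(i) == i)
```

At p = 0 every site is its own cluster (isolated sites count as clusters of size one, consistent with the convention above); at p = 1 the torus is a single cluster. An estimate of b would average count_clusters(L, L, 0.5) minus $n_c L^2$ over many realizations, since $p_c = 1/2$ for B-SQ.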
In \cite{KlebanZiff} it was shown that $b$ can be found
explicitly from exact Coulomb-gas results. In the Fortuin-Kasteleyn
representation, the partition function of the Potts model at criticality
is $Z = \sum Q^{N_C+N_B/2}$ where $N_C$ is the number of clusters
and $N_B$ the number of bonds, giving bond percolation
in the limit $Q \to 1$. Here the free energy is
$F = \ln Z$, and $\langle N_C + N_B/2 \rangle$ follows from $dF/dQ$ at
$Q=1$. Using the Potts model
partition function (universal part) of Di Francesco et
al.\ \cite{DiFrancescoSaleurZuber},
\cite{KlebanZiff} obtained
\begin{eqnarray}
b(r) & = { 5\sqrt{3}\,r \over 24} + q^{5/4}(2\sqrt3\,r-\half)
+q^2(\sqrt3\,r-1)
+ q^{5/48} + 2q^{53/48}\nonumber \\
& -q^{23/16}+q^{77/48}+\ldots
\label{eq:b(r)}
\end{eqnarray}
where $q=\e^{-2 \pi r}$. \
This result yields $b(1) = 0.883\,576\,308\ldots$, $b(2) = 0.991\,781\,515\ldots $,
$b(4) = 1.516\,324\,734\ldots $, and $\tilde b = 5\sqrt3/24$,
consistent with measurements of \cite{ZiffFinchAdamchik}.
The result for $\tilde b$ also follows
directly from the work of Bl\"ote, Cardy and Nightingale
\cite{BloteCardyNightingale} on cylindrical systems.
Fluctuations and higher-order cumulants
are also discussed in \cite{KlebanZiff}, and $\tilde b$
in 3d is discussed in \cite{LorenzZiff}.
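Eq. (\ref{eq:b(r)}) is straightforward to evaluate; even truncated at the terms shown, it reproduces the quoted values (our transcription):

```python
import math

def excess_b(r):
    # truncated series for the excess cluster number b(r), q = exp(-2 pi r)
    q = math.exp(-2.0 * math.pi * r)
    s3 = math.sqrt(3.0)
    return (5.0 * s3 * r / 24.0
            + q**1.25 * (2.0 * s3 * r - 0.5)
            + q**2 * (s3 * r - 1.0)
            + q**(5.0 / 48.0) + 2.0 * q**(53.0 / 48.0)
            - q**(23.0 / 16.0) + q**(77.0 / 48.0))

print(excess_b(1.0), excess_b(2.0))  # -> 0.88356..., 0.99178...
```

The truncation error is already below $10^{-4}$ at $r=1$ and shrinks rapidly with $r$, since $q = e^{-2\pi r}$ is tiny.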
\section{Confirmation of universality using triangular lattices}
Demonstrating universality of $b(r)$ by choosing only B-SQ and S-SQ
as in \cite{ZiffFinchAdamchik} may not be completely convincing,
and one would like to compare, say, a TR and SQ lattice. Periodic
b.\ c.\ must be retained. Two obvious ways to represent the TR
lattice on the square array of a computer program are shown in Fig.\ 1. \
In (a), the TR lattice topology is created by choosing
every other site of the SQ lattice. Taking an $L \times 2L$ boundary on the SQ
lattice as shown, the effective boundary on the TR lattice becomes a rectangle
with $r=\sqrt3/2$. \ Periodic b.\ c.\ on the underlying
SQ lattice results in normal (untwisted)
periodic b.\ c.\ on the TR lattice also.
The measured $b = 0.887$ for this system agrees completely
with the theoretical $b(\sqrt3/2) = 0.887\,373\,266$
for a rectangular system with aspect ratio $\sqrt{3}/2$.
\begin{figure}
\vspace{70mm}
\caption{Two representations of a triangular lattice on
a square array, yielding (a) a rectangular boundary of
aspect ratio $\sqrt{3}/2$ with
no twist, and (b) the same rectangular boundary but with a twist
of $1/2$.}
\end{figure}
The second obvious way to represent the TR lattice --- by far the most
common one --- is shown in Fig.\ 1(b). \ Diagonals are
simply added to the SQ lattice, and the periodic
b.\ c.\ are applied to the squared-off lattice as is. Making this into a proper
TR lattice, the system becomes a $1 \times 1$, $60^\circ$ rhombus, and shifting
around the triangle as shown in Fig.\ 1b demonstrates that it is effectively
a rectangular boundary with $r=\sqrt3/2$, but with a ``twist" $t=\half$ in the periodic
b.\ c., meaning that the $x$ coordinates are shifted by a fraction $t$ of the total
length when wrapping around in the vertical direction. For this
system, simulations gave $b(\sqrt3/2,1/2) = 0.878$ \cite{ZiffFinchAdamchik}, less
than $b(\sqrt3/2,0) = 0.8874$ and indeed less than the minimum untwisted rectangle,
$b(1,0) = 0.8836$, where now we write $b = b(r,t)$.
Here we demonstrate the universality of $b(\sqrt3/2,1/2)$
by studying a SQ lattice with $r$ (necessarily rational) close to $\sqrt3/2 \approx 0.866$,
and comparing the results to the above TR lattice measurement.
On the SQ lattice, we considered a system of size $14\times 16$,
where $r=0.875$; the measured values of $b$ for $t=0,\ 1/8,\ldots,\ 1$ are shown in Fig.\ 2. \
At twist $\half$, the value of $b$ is very close to the result 0.878 found on the TR lattice
\cite{ZiffFinchAdamchik}, demonstrating the universality between these two lattices.
Note that, to find $b(\sqrt 3/2,1/2)$ on the SQ lattice to high precision, one would have to consider different
size systems and extrapolate to $\infty$, and different rational $r$ to interpolate to $r = \sqrt3/2$.
Results for a square system
$16\times16$ are also shown in Fig.\ 2.
\begin{figure}
\vspace{70mm}
\caption{Excess cluster number $b(r,t)$ vs. twist $t$ for
$14 \times 16$ simulation (triangles),
theory for $r = \sqrt{3}/2$ (solid line),
$16 \times 16$ simulation (squares) and
theory for $r = 1$ (broken line).}
\end{figure}
We have generalized the theoretical methods described in \cite{KlebanZiff} to find
$b(r,t)$ from the partition function of \cite{DiFrancescoSaleurZuber}. The parameter
$\tau$ becomes $t + i r$, and the results, which are rather involved, yield the
solid curves in Fig.\ 2. \ The discrepancy with the numerical values can
be attributed to the small system size of these simulations.
\section{Symmetries on a torus with a twist}
The torus with a twist has various topological symmetries
that apply to any shape-dependent universal quantity $u(r,t)$,
which includes $b(r,t)$. We consider a rectangular boundary
with base 1 and height $r$, with a horizontal twist $t$ in the
periodic b.~c. \ (Note that having twists in two directions leads to a
non-uniform system, so we don't consider it.) \ $u(r,t)$ satisfies the obvious symmetries
of reflection
\begin{equation}
u(r,t) = u(r,-t)
\label{eq:reflection}
\end{equation}
and periodicity in the $t$ direction
\begin{equation}
u(r,t) = u(r,1+t)
\label{eq:periodicity}
\end{equation}
Another symmetry follows from the
observation that the same rhombus can be made into a rectangle
in two different ways, leading to:
\begin{equation}
u(r,t) = u\left({r \over r^2 + t^2},{t \over r^2 + t^2}\right)
\label{eq:inverse}
\end{equation}
Another construction
shows that when $t = 1/n$ where $n$ is an integer,
\begin{equation}
u\left(r,{1 \over n}\right) = u\left({1\over n^2 r},{1 \over n}\right)
\label{eq:integer}
\end{equation}
which also
follows from
Eqs.\ (\ref{eq:reflection}-\ref{eq:inverse}).
On the complex $\tau = t + ir$ plane, (\ref{eq:inverse}) corresponds to $\tau \to 1/\tau$ while
(\ref{eq:periodicity}) corresponds to $\tau \to \tau + 1$. These transformations
generate the modular group, and functions invariant under them
are called modular. Thus, $b(r,t)$ must necessarily be a modular function.
However, the explicit expression for $b$ does not display that modularity clearly.
Besides the excess number, another universal
quantity on a torus is the cross-configuration probability
$\pi_+(r,t)$, which can be expressed in a quite compact form.
Using the results of \cite{DiFrancescoSaleurZuber},
Pinson \cite{Pinson} has shown $\pi_+(r,t) =
\half [Z_c(8/3)-Z_c(2/3)]$, where
\begin{equation}
Z_c(h) = {\sqrt{h/r}\over
\eta(q)\overline\eta(q)}\sum_{n,n'}
\exp\left\{-{\pi h\over r}[n'^2+n^2(r^2+t^2)-2tnn']\right\} \ ,
\label{eq:pinson}
\end{equation}
$\eta(q)$ is the Dedekind eta function and $q = e^{-2\pi (r - it)}$.
It can be easily verified
that this function satisfies the modular symmetries.
For an untwisted torus (\ref{eq:pinson}) reduces to
\begin{equation}
\pi_+(r,0) = {1 \over 4} \sqrt{3 \over 2} \
{\varphi({3r\over 8}) \varphi({3 \over 8 r})
- 2\varphi({3r\over 2}) \varphi({3 \over 2 r}) \over \tilde\eta(r)
\tilde\eta({1\over r}) }
\label{eq:pir}
\end{equation}
where $\tilde\eta(r)=\eta(e^{-2\pi r})$ and $\varphi(r) = \vartheta_3(e^{-\pi r})$
is the Jacobi theta function in Ramanujan's notation.
The symmetry $r \to 1/r$ is apparent.
For all rational
$r$, $\pi_+(r,0)$ is an algebraic number; for example, for $r=1$,
one can show
\begin{equation}
\pi_+(1,0) = {2 \sqrt{a + \overline b} \sqrt{2\sqrt{a\overline b}}
- (a + \overline b) + 2\sqrt{a\overline b} \over 3^{1/4}\ 4}
= 0.309\,526\,275\ldots
\end{equation}
where $a = 1 + \sqrt 3 $ and $\overline b = \sqrt 2 - 3^{1/4}$,
using various results for theta functions.
According to the symmetries above, this same value $\pi_+(1,0) = 0.3095\ldots$
applies to
$(r,t) = (1/2,1/2) = (1/5,3/5) = (1/10,3/10)
= (1/13,5/13) = (1/17,13/17) = (1/26,5/26) =
(1/29,12/29) = (1/34,13/34) = (1/37,6/37) = (1/41,9/41) = (1/50,14/50)$
and infinitely many other systems, just as
$b(1,0) = 0.8836\ldots$ applies to all these systems.
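Pinson's expression (\ref{eq:pinson}) is also easy to evaluate directly. The sketch below (our implementation, with the Dedekind eta function computed from its product formula) reproduces the algebraic value at $(r,t)=(1,0)$ and illustrates the modular invariance, e.g.\ $\pi_+(1/2,1/2) = \pi_+(1,0)$:

```python
import cmath
import math

def eta_abs2(r, t):
    # |eta(q)|^2 for the nome q = exp(-2*pi*(r - i*t)), via the product formula
    q = cmath.exp(-2.0 * math.pi * (r - 1j * t))
    prod = 1.0 + 0.0j
    for n in range(1, 400):
        prod *= 1.0 - q**n
    return abs(q) ** (1.0 / 12.0) * abs(prod) ** 2

def z_c(h, r, t, N=30):
    # the double lattice sum, truncated at |n|, |n'| <= N
    s = 0.0
    for n in range(-N, N + 1):
        for m in range(-N, N + 1):  # m plays the role of n'
            s += math.exp(-math.pi * h / r
                          * (m * m + n * n * (r * r + t * t) - 2.0 * t * n * m))
    return math.sqrt(h / r) / eta_abs2(r, t) * s

def pi_plus(r, t):
    return 0.5 * (z_c(8.0 / 3.0, r, t) - z_c(2.0 / 3.0, r, t))

print(pi_plus(1.0, 0.0))  # -> 0.309526..., the algebraic value quoted above
```

The quadratic form in the exponent is positive definite (its discriminant is $r^2 > 0$), so the truncated double sum converges extremely fast.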
We have also measured $\pi_+$ in the $14\times16$ and $16\times16$ systems, with various $t$. \
Whether crossing occurs can be found using an indicator function, such as $I = N_C - N_{C'}
+ N_B - N_S$, where $N_C$ is the number of clusters, $N_{C'}$ is the number of dual-lattice
clusters, $N_B$ is the number of bonds, and $N_S$ is the number of sites. When $I=1$, there is
a cross-configuration on the lattice, when $I=-1$ there is a cross configuration
on the dual-lattice (these two events are clearly mutually exclusive), and when $I=0$ there
is neither. In the latter case, there will necessarily be at least one wrap-around cluster
on the lattice or dual-lattice, or a spiral. Another indicator function
can be made using the number of hulls in the system \cite{Pinson}.
In Fig.\ 3 we show the measured $\pi_+(r,t)$ and comparison with
predictions of Pinson's formula. The small deviations are presumably due to finite-size
effects which should disappear when larger systems are measured and an extrapolation
is made to infinity, as we have verified for $t=0$. Note that $\pi_+(\sqrt{3}/2,1/2)
= 0.316\,053\,413\ldots$ is at a local and apparently global maximum of Pinson's formula,
although again by equations (\ref{eq:reflection}-\ref{eq:inverse})
an infinite number of other points on the $(r,t)$ plane have the
identical value of $\pi_+$, such as $(\sqrt3/6,1/2)$ and $(\sqrt3/14,5/14)$.
\begin{figure}
\vspace{70mm}
\caption{Pinson's number $\pi_+(r,t)$ vs.\ twist $t$; legend same
as Fig.\ 2.}
\end{figure}
A plot of Pinson's formula as a function of $t$ for different
$r$ shows that for small $r$ it begins to develop oscillations. This is
because of the tendency to create spiral rather than cross
configurations for small aspect ratios.
\section{The meaning of $b$}
In \cite{ZiffFinchAdamchik}, it was suggested that $b$ relates to the
number of ``spanning" clusters,
since these are essentially the cause of the excess.
However, Hu \cite{Hu} has shown that they are not numerically identical, using
one particular definition of spanning. Here we elaborate on this point.
We consider large systems
with periodic b.\ c.\ and ask for the number of clusters ${\mathrm Nr}(\ell_m \ge \ell)$
whose maximum
dimension $\ell_m$ in the $x$ or $y$ direction exceeds some value $\ell$. \
Since $s \sim \ell_m ^D$ where $D$ is the fractal dimension and $n_s \sim s^{-\tau}$,
it follows that ${\mathrm Nr}(\ell_m \ge \ell) \sim \ell^{(1-\tau)D} = \ell^{-d}$
or $\ell^{-2}$ in 2d. \ We have measured this quantity for square $L\times L$ and rectangular $L\times
2L$ systems for various sizes, and find that for an intermediate range in $\ell$ the expected universal
$\ell^{-2}$ behavior is followed:
\begin{equation}
{\mathrm Nr}(\ell_m \ge \ell) = C \left( {\ell \over \sqrt{A} } \right)^{-2} = CA/\ell^2
\label{eq:Nr}
\end{equation}
where $A = L^2$ (square) or $2L^2$ (rectangular system) and $C=0.116$. \ $C$ is a universal measure
of the size distribution, dependent only upon the rule of what constitutes the length scale $\ell$.
(One could just as well use maximum diameter, radius of gyration, etc., each of which would lead to a
different
$C$. Thus $C$ too is a ``shape"-dependent universal quantity.) Eq.~(\ref{eq:Nr}) implies that $C$ is
just the density in an infinite system of clusters of minimum dimension $\ell$, on the length scale of
$\ell$. It is the universal analog of $n_c$.
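The exponent bookkeeping can be checked with the standard 2d percolation values $D = 91/48$ and $\tau = 187/91$ (standard values, not quoted in the text); the hyperscaling relation $\tau = 1 + d/D$ then gives $(1-\tau)D = -d = -2$ exactly:

```python
from fractions import Fraction

d = 2
D = Fraction(91, 48)     # fractal dimension of 2d critical percolation clusters
tau = Fraction(187, 91)  # Fisher exponent in 2d
assert tau == 1 + Fraction(d) / D  # hyperscaling: tau = 1 + d/D
print((1 - tau) * D)  # -> -2, so Nr(l_m >= l) ~ l^{-d} with d = 2
```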
The data for ${\mathrm Nr}(\ell_m \ge \ell)$ deviate
from Eq.\ (\ref{eq:Nr}) at both the large- and
small-size limits. For small $\ell$, the deviation is due to lattice-level effects;
at the limit $\ell = 1$, ${\mathrm Nr}(\ell_m \ge 1)$ is just
$n_c A$, which is clearly non-universal. For $\ell$ near the maximum, the deviation is due to the influence of the boundary
and is related to the value of $b$.
According
to our definition, $b = $ (the actual number of clusters) $-$ (the expected number of clusters using
the bulk density). Therefore, using a lower length scale of $\ell$
that is in the scaling region, we have
\begin{equation}
b = {\mathrm Nr}(\ell_m \ge \ell) - C \, A/\ell^2
\label{eq:b3}
\end{equation}
Evidently, in using this formula, $\ell$ can be taken right up to the minimum dimension of the system, $L$.
We find that the number in each size range,
$N_\ell = {\mathrm Nr}(\ell/2 \le \ell_m < \ell)$ $ = {\mathrm Nr}(\ell_m \ge \ell/2) - {\mathrm Nr}(\ell_m \ge \ell)$
follows $N_{\ell/2}/N_{\ell} = 4$ within $\pm 0.01$ right up to $\ell = L$. \
This implies that
$b$ can be found by applying (\ref{eq:b3})
with $\ell = L$. \
Using the numerical data for ${\mathrm Nr}(\ell_m \ge L)$ for $r=1$ and $r=2$,
we find from (\ref{eq:b3})
\begin{eqnarray}
& b(1,0) & = 0.990 - 0.116 = 0.87 \nonumber \\
& b(2,0) & = 1.214 - 2(0.116) = 0.98
\label{eq:num}
\end{eqnarray}
compared with the actual values 0.884 and 0.991 respectively. The small shortfall may be due to the
need to take $\ell$ somewhat smaller than $L$ in (\ref{eq:b3}), or
to statistical errors. We are investigating this point further.
Thus, we have the meaning of $b$ (taking $\ell = L)$: the number of clusters in a system of area $A$
whose extent is larger than the minimum system
dimension
$L$, minus the expected number predicted by the bulk density, $C A/L^2$.
\section{Conclusions}
We have numerically demonstrated the universality of $b(\sqrt3/2,1/2)$
on both the SQ and TR lattices. We have shown that $b$ is related
to the average number of clusters of length scale greater or equal to $L$, if one
subtracts off the contribution of the universal size distribution
characterized by the universal constant $C$.
It appears that $N_\ell$ follows a universal behavior close to, and perhaps right
up to $\ell = L$. On a finite system with periodic b.\ c., the
probability of growing a cluster of a certain size is identical to its probability
on an infinite system, as long as the cluster is small enough that it doesn't
touch itself after wrapping around the boundaries. This result, however, says that
the {\it number} of clusters of a certain
size range is also substantially unaffected by the system finiteness,
even though the boundaries should, it seems, influence
the statistics of such clusters when their combined size is large
enough that they touch when wrapping around the
boundary. Further work needs to be done to understand this behavior.
The TR lattice constructed as in Fig.\ 1b gives extremal values of $b$ and $\pi_+$
and may in fact be the best periodic system to use for many percolation problems (as well as other
types of lattice simulations). This is because, when the $1 \times 1$ $60^\circ$ rhombus is
used to tile the plane, it leads to a triangular array of repeated patterns which has the
most space between each repeated element of any regular array.
While tori with ``twists" add a rich extra degree of freedom,
they are by no means the only systems for which $b$ can be calculated.
What is needed is a system that is effectively a closed surface.
One can transform the rectangular basis of the torus to other shapes
by conformal transformation, such as to an annulus,
and then apply the transformed periodic b.\ c. to the problem.
(Here, the curved boundaries
suggest using a continuum form of percolation, as in \cite{HsuHuangLing}).
A simple closed surface like the surface of a
sphere can also be used for the system. Each of these systems will have its own characteristic
value of $b$ and other shape-dependent universal quantities.
\ack
RZ acknowledges NSF grant
DMR-9520700. Correspondence with K. S. Williams and B. Berndt concerning
theta function identities is gratefully acknowledged.
\section{Introduction}
Damped~Ly$\alpha$\ systems are the extremely high HI column density
($N_{HI} > 2\times 10^{20}$ atoms~cm$^{-2}$) systems seen in absorption in
the spectra taken towards distant quasars. Although rare, they are the
dominant contributors (by mass) to the observed neutral gas at high
($z \sim 3$) redshifts. Principally for this reason, these systems are
natural candidates for the precursors of $z=0$ galaxies. Consistent
with this interpretation, the mass density of neutral gas in damped~Ly$\alpha$\
systems at $z \sim 3$ is comparable to the mass density in stars in
luminous galaxies at $z=0$. Thus, to zeroth order, the evolution with
redshift of the neutral gas density matches that expected from gas depletion
due to star formation (e.g. Lanzetta et al. 1991, Lanzetta, Wolfe \& Turnshek
1995, Storrie-Lombardi, McMahon \& Irwin 1996). Further, the evolution
of metallicity with redshift also roughly matches what one would expect from
models of galactic evolution (Ferrini, Moll\'{a} \& D\'{i}az 1997,
Fall 1997).
On the other hand, the morphology of damped~Ly$\alpha$\ systems remains
poorly understood. Based on the edge-leading asymmetries seen in the
absorption profiles of low ionization metals associated with these
systems, Prochaska \& Wolfe (1997) suggest that they are rapidly
rotating large disks with significant vertical scale heights. However,
such profiles can also be explained by models in which damped systems are
much smaller objects that are undergoing infall and merger \cite{hsr}. It has
also been claimed that the metal abundance of damped~Ly$\alpha$\ systems depends
on the total HI column density in the way that would be expected from large
disks with central HI holes \cite{wp}, although the number of systems involved
in this study is small.
For damped~Ly$\alpha$\ systems that lie in front of radio loud quasars,
it is possible to augment the optical/UV spectra with HI 21cm absorption
spectra. Such a comparison yields, among other things (and under suitable
assumptions), the spin temperature, $T_s$, of the HI gas. Derived spin
temperatures of damped~Ly$\alpha$\ systems have, in general, been much larger than
those observed in the disk of the Galaxy or in nearby galaxies
(Braun \& Walterbos 1992, Braun 1996), implying that either damped~Ly$\alpha$\
systems are not disks, or that the ISM in the damped~Ly$\alpha$\ proto-disks is
considerably different from that in the local $z=0$ disks, presumably
due to evolutionary effects.
Studies of low redshift damped~Ly$\alpha$\ systems are particularly
interesting in this regard, since evolutionary effects are expected to
be negligible. Further, much more detailed information is obtainable, in
particular from HST and/or ground based imaging, which makes identification
of the absorber possible (e.g. Le Brun et al. 1997, Lanzetta et al. 1997).
Of course, it remains a possibility that the population that gives rise
to damped~Ly$\alpha$\ absorption at low redshift is distinct from that at high
redshift.
In this paper, we report the detection of redshifted 21cm absorption
in two low redshift ($z= 0.2212$, $z=0.0912$) damped~Ly$\alpha$\ systems seen towards
the quasar OI~363 (0738+313, $z_{em} = 0.630$) \cite{rt}. Observations of
the higher redshift
system confirm, at considerably improved spectral resolution and
sensitivity, earlier results from the Westerbork Synthesis Radio Telescope
(WSRT) (Lane et al. 1998), while the lower redshift ($z=0.0912$) system is
the lowest redshift damped~Ly$\alpha$\ system known to date.
\section{Observations and Data Reduction}
The observations were carried out using the GMRT (Swarup et al. 1991,
Swarup et al. 1997).
The backend used was the prototype eight-station FX correlator, which
gives a fixed number (128) of spectral channels over a total bandwidth
that can be varied from 64~kHz to~16 MHz. Due to various ongoing maintenance
and installation activities, the actual number of antennas that were
available during our observing runs varied between six and eight.
For the observations of the $z=0.0912$ system the bandwidth was
set to 1.0~MHz. No spectral taper was applied, giving a channel spacing
of $\sim 1.8$~km s$^{-1}$. Two observing runs were made, one on 27 June 1998
and the other on 5 July 1998. The on source time for each run was about
six hours. Two observing runs were also taken for the $z=0.2212$ system
(on 26 June 1998 and 4 July 1998), the first with a total bandwidth of
1.0~MHz (i.e. a channel spacing of $\sim 2.0$~km s$^{-1}$) and the other with
a total bandwidth of 0.5~MHz (i.e. a channel spacing of $\sim 1.0$~km s$^{-1}$
). Each of these observing runs had an on source time of $\sim 4$~hours.
Bandpass calibration at both redshifts was done using 3C~295, which was
observed at least once during each observing run.
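As a cross-check (our illustration, not part of the original observations), the quoted channel spacings follow directly from the stated bandwidths, the 128-channel correlator, and the redshifted HI frequency:

```python
C_KMS = 2.99792458e5     # speed of light, km/s
NU_HI = 1420.405752      # HI rest frequency, MHz

def channel_spacing_kms(z, bandwidth_mhz, n_channels=128):
    """Velocity width of one correlator channel at redshift z."""
    nu_obs = NU_HI / (1.0 + z)            # redshifted 21cm frequency, MHz
    return C_KMS * (bandwidth_mhz / n_channels) / nu_obs

print(channel_spacing_kms(0.0912, 1.0))   # ~1.8 km/s
print(channel_spacing_kms(0.2212, 1.0))   # ~2.0 km/s
print(channel_spacing_kms(0.2212, 0.5))   # ~1.0 km/s
```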
The data were converted from the raw telescope format to FITS and
then reduced in AIPS in the standard way. Maps were produced after
subtracting out the continuum emission of the background quasar using UVLIN,
and spectra extracted from the resulting three dimensional cube. The GMRT
does not do online Doppler tracking; this is, however, unimportant since the
Doppler shift within any one of our observing runs was a small fraction of
a channel. For the lower redshift system, data from the observations on
different days were corrected to the heliocentric frame and then combined.
The final spectrum for the $z=0.0912$ system is shown in
Figure~\ref{fig:lz}. There appear to be two components, one considerably
deeper than the other. The fainter component, although weak, was detected
in both our observing runs, and its magnitude is also considerably higher
than the noise level. It is, of course, possible that the spectrum consists of
two components, one of which is broad and weak and the other, much deeper
but narrow. The redshift of the narrow component is consistent with the
redshift quoted in Rao \& Turnshek~(1998). The peak optical depth is
$\sim 0.18$ (i.e. a depth of $390$~mJy with the continuum flux of OI363
being 2.0~Jy), and occurs at a redshift of $z=0.09118 \pm 0.00001$. The
FWHM of the line is small, $\sim$ 5~km s$^{-1}$.
Lane et al. (1998) report a redshift of $z=0.2212$ for the higher
redshift system, based on WSRT observations. The $2.0$~km s$^{-1}$ GMRT
spectrum (which has a considerably better velocity resolution and
sensitivity than the WSRT spectrum) is shown in Figure~\ref{fig:hz}. The
redshift measured from this spectrum is $0.2212 \pm 0.00001$. This is
consistent with the redshift measured from the $1.0$~km s$^{-1}$ resolution
spectrum (which is not shown here). The peak optical depth ($\sim 0.07$)
is somewhat less than that of the lower redshift system, but the velocity
width is comparable, $\sim$ 5.5~km s$^{-1}$ (FWHM).
\begin{figure}
\vskip -2.25cm
\epsfxsize=6.0cm
\epsfysize=11.0cm
\hskip -6.5cm \epsfbox{chengaluf1.eps}
\caption{GMRT redshifted 21cm absorption spectrum of the lower redshift
system towards OI363. The channel spacing is $\sim 1.8$~km s$^{-1}$.
The deepest optical depth ($\sim 0.18$) is at a heliocentric
redshift of 0.09118. The width (FWHM) of the line is
$\sim$ 5~km s$^{-1}$.}
\label{fig:lz}
\end{figure}
\begin{figure}
\vskip -2.25cm
\epsfxsize= 6.0cm
\epsfysize=11.0cm
\hskip -6.5cm \epsfbox{chengaluf2.eps}
\caption{GMRT redshifted 21cm absorption spectrum of the higher redshift
system towards OI363. The channel spacing is $\sim 2.0$~km s$^{-1}$.
The peak optical depth ($\sim 0.07$) occurs at a heliocentric
redshift of 0.2212. The width (FWHM) of the line is
$\sim$ 5.5~km s$^{-1}$.}
\label{fig:hz}
\end{figure}
\section{Discussion}
The total HI column density of a damped system can be determined
from its Ly$\alpha$\ profile; this can be then used, in combination with the measured
HI 21cm optical depth, to determine the spin temperature of the absorbing
gas, under the assumption that the absorbing gas is homogeneous. (For a
multi-phase absorber, this derived temperature is the column-density-weighted
harmonic mean of the spin temperatures of the different phases, provided
all the phases are optically thin.) One of the principal uncertainties in
this derivation of the spin temperature is the covering factor of the
absorbing gas. In particular, the radio emission from quasars is often
extended, while the UV continuum source is essentially a point source;
thus, the line of sight along which the HI column density has been
derived from UV measurements need not be the same as the one for which
the 21cm optical depth has been measured. The case of OI~363, however, is
relatively straightforward. OI~363 is a core dominated Gigahertz peaked
source whose total flux decreases from 2.2~Jy at 1.64~GHz to 1.59~Jy at
408~MHz. VLA maps at 1.64~GHz \cite{murphy} and 1.4~GHz \cite{ajit}
show that the source is highly core dominated, with about 97\% of the total
flux in an unresolved core component. VLBI measurements at 5~GHz
\cite{vlbi} show that the core size is $\sim 10$~milli arcseconds
($\sim 16$~pc and $\sim 31$~pc at redshifts of 0.0912 and 0.2212
respectively, for $H_0$ = 75~km s$^{-1}$ Mpc$^{-1}$ and $q_0 =0.5$).
Stanghellini et al. (1997) note that their 5~GHz VLBI map recovers only
77\% of the total flux measured by the VLA at 5~GHz. While it is unclear
whether there is much change in the source size between 5~GHz
and 1.3~GHz, we know from IPS measurements \cite{sjk} that the upper limit
on the core size at 330~MHz is 50~milli arcseconds. At both redshifts,
the depth of the line (see Figures~\ref{fig:lz}~\&~\ref{fig:hz}) considerably
exceeds the flux in the weak lobes; this implies that the absorbers
must cover the central core. Given the small size of this central core,
the covering factor is likely to be close to unity.
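The quoted linear sizes of the VLBI core ($\sim 16$ and $\sim 31$~pc for 10 milliarcseconds) can be reproduced with the Mattig angular-diameter distance for the stated cosmology ($H_0 = 75$~km s$^{-1}$ Mpc$^{-1}$, $q_0 = 0.5$); this sketch is our own check, not part of the original analysis:

```python
import math

C_KMS = 2.99792458e5   # km/s
H0 = 75.0              # km/s/Mpc, as adopted in the text

def linear_size_pc(theta_mas, z):
    """Linear size subtended by theta_mas at redshift z, q0 = 1/2 (Mattig)."""
    d_a_mpc = (2 * C_KMS / H0) * (1 - 1 / math.sqrt(1 + z)) / (1 + z)
    theta_rad = theta_mas * math.pi / (180.0 * 3600.0 * 1000.0)
    return d_a_mpc * theta_rad * 1e6    # Mpc -> pc

print(linear_size_pc(10, 0.0912))   # ~15-16 pc
print(linear_size_pc(10, 0.2212))   # ~30-31 pc
```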
The HI column densities inferred from the present observations,
in terms of the spin temperatures of the two damped systems, are $(1.82
\pm 0.02) \times 10^{18}\, T_{s}$~atoms cm$^{-2}$ and $(0.71 \pm 0.04) \times
10^{18}\, T_{s}$~atoms cm$^{-2}$, for the lower and higher redshift systems,
respectively. The column densities measured by Rao \& Turnshek (1998),
from the damped Lyman-$\alpha$ lines, are $(7.9 \pm 1.4) \times 10^{20}$~atoms
cm$^{-2}$ and $(1.5 \pm 0.2) \times 10^{21}$~atoms cm$^{-2}$, this time in order
of decreasing redshift (the first value refers to the $z=0.2212$ system).
The spin temperatures obtained are hence
$825 \pm 110$~K (for the $z=0.0912$ absorber) and $1120 \pm 200$~K
(for the $z=0.2212$ system). For the higher redshift system, our measurement
agrees within the errors with that of Lane et al. (1998). The overwhelming
source of the (formal) uncertainty is in the determination of the HI column
density from the UV measurements. Thus, even at redshifts where no evolution
is expected, the derived spin temperature is significantly higher than that
typically seen in the Galaxy. If one assumes that the HI 21cm spectral width
is entirely due to thermal motions, the required kinetic temperatures are
$\sim 625$~K and $\sim 750$~K for the lower and higher redshifted system
respectively, i.e. comparable to the derived spin temperatures. Note
however, that in the ISM of the Galaxy, there is no stable neutral phase
with temperature $\sim 1000$~K. On the other hand, such high spin
temperatures appear common at both high and intermediate redshifts (see e.g.
de Bruyn, O'Dea \& Baum 1996, Carilli et al. 1996, Lane et al. 1996,
Kanekar \& Chengalur 1997, Boiss\'{e} et al. 1998).
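The quoted spin temperatures can be reproduced by dividing the Lyman-$\alpha$ column densities by the 21cm coefficients above; the numbers are only consistent if $1.5\times10^{21}$~cm$^{-2}$ is assigned to the $z=0.0912$ system and $7.9\times10^{20}$~cm$^{-2}$ to the $z=0.2212$ system, a pairing we make explicit in this check of ours:

```python
# Consistency check (ours) of the quoted spin temperatures:
# T_s = N_HI(Ly-alpha) / (N_HI/T_s coefficient from the 21cm line).
coef_21cm = {0.0912: 1.82e18, 0.2212: 0.71e18}  # cm^-2 K^-1, from this paper
n_lya = {0.0912: 1.5e21, 0.2212: 7.9e20}        # cm^-2, Rao & Turnshek (1998)

for z, coef in coef_21cm.items():
    t_s = n_lya[z] / coef
    print(z, round(t_s))  # ~824 K at z=0.0912, ~1113 K at z=0.2212
```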
Ground based imaging of the OI~363 field \cite{rt} shows that
there are no spiral galaxies at small impact parameters to the line of
sight, contrary to the canonical model where damped systems arise in extended
disks. Similarly, the next lowest redshift damped~Ly$\alpha$\ absorber (0850+4400,
Lanzetta et al. 1997) appears to be associated with an S0 galaxy, while, at
intermediate redshifts, the absorbers appear to be associated with galaxies
spanning a wide range of morphological types \cite{leBrun}. Interestingly,
at lower redshifts still, where imaging of HI 21cm emission is possible,
21cm absorption from quasar-galaxy pairs appears to be associated more
with tidal tails or other extended features of gas rich galaxies \cite{chris},
and not directly with the disks of large spirals.
While the low number density of damped Lyman-$\alpha$ systems
at $z < 1$ makes it {\it a priori} extremely unlikely that two such systems
might be found along the same line of sight, the VLBI map of OI~363 appears to
rule out the possibility of this line of sight being biased due to
gravitational lensing. The current observations (and the absence of
detectable gravitational lensing) do not however place strong constraints
on the surface density or mass of the absorbing systems.
In summary, it appears that even at the lowest redshifts, gas outside
the disks of spiral galaxies and with apparent physical parameters
considerably different from the ISM of nearby galaxies has a non-trivial
contribution to the total absorption cross-section. This is consistent
with observations that, even for intermediate redshift damped~Ly$\alpha$\ absorbers,
the metallicity is considerably lower than typical solar values \cite{boisse}.
Finally, the present GMRT observations also suggest that evolutionary
effects may not play an important role in understanding why the derived
spin temperatures for damped~Ly$\alpha$\ systems are in general higher than those
measured in nearby spiral galaxy disks.
{\bf Acknowledgments} These observations would not have been possible
without the many years of dedicated effort put in by the GMRT staff to
build the telescope. The GMRT 1400~MHz wide-band feed and receiver system
was built by the Raman Research Institute. We are also grateful to Wendy
Lane, Judith Irwin, Anish Roshi, D. J. Saikia, R. Srianand and Kandu
Subramanian for their comments and suggestions.
\section{Introduction}
The existence of a non-zero neutrino magnetic moment has long been a subject of
great interest, since it can have observable laboratory effects such as
neutrino--charged-lepton elastic scattering, $e^+e^-\rightarrow
\nu{\bar\nu}\gamma$, and also important astrophysical effects,
such as the cooling of SN 1987A, the cooling of helium stars, etc.
It is likely that neutrinos may have a small but nonvanishing mass;
for various bounds on magnetic moments and masses, see \cite{pdg} and
\cite{superK}.
Within the framework of the standard model, a nonzero neutrino
mass usually implies a nonzero magnetic moment. It has been shown
\cite{fuji} that, for a massive neutrino,
\begin{equation}
\mu^{SM}_\nu={3 e G_F m_\nu\over 8\pi^2\sqrt 2}
=3.2\times10^{-19}\, m_\nu({\rm eV})\, \mu_B,
\end{equation}
where $\mu_B$ is the Bohr magneton.
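Numerically (a check we add here, not in the original), the coefficient in the formula above follows from $\mu_\nu/\mu_B = 3 G_F m_\nu m_e/(4\sqrt{2}\pi^2)$ in natural units:

```python
import math

# Check (ours) of the numerical coefficient in the formula above:
# mu_nu/mu_B = 3 G_F m_nu m_e / (4 sqrt(2) pi^2) in natural units.
G_F = 1.1664e-23   # Fermi constant, eV^-2
M_E = 0.5110e6     # electron mass, eV

coef = 3 * G_F * M_E / (4 * math.sqrt(2) * math.pi**2)  # per eV of m_nu
print(coef)  # ~3.2e-19
```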
In models beyond the standard model, right-handed neutrinos are often included
in interactions (for a review see e.g. \cite{moha}), so that we need not depend on
a nonzero neutrino mass to generate a nonzero magnetic moment. In this article,
we consider the possibility of using
leptoquark interactions to generate a nonzero neutrino magnetic moment.
In many unification models, such as SU(5), SO(10), etc.,
one often puts quarks and leptons
into the same multiplet, so that leptoquarks arise naturally to connect
different components within the same multiplet. What makes a leptoquark unique
and interesting is that it couples simultaneously to both a lepton and a
quark. This may help generate a nonzero neutrino magnetic interaction.
Specifically, when a top quark is involved in the loop diagram,
its mass provides a large enhancement for the neutrino magnetic moment.
(In such a diagram, a massless neutrino needs a massive internal fermion to
flip its chirality, giving rise to a nonzero magnetic moment.)
We add right-handed neutrinos to the general renormalizable Lagrangian of
leptoquarks. Since lepton numbers are defined generation by generation, we
distinguish leptoquarks by their generation quantum numbers; the leptoquark
interactions may, however, induce four-fermion interactions which enhance
helicity-suppressed processes such as $\pi^+\rightarrow {e^+} \nu$ to the
extent that leptonic universality may even be violated. This usually gives
a tight constraint on the leptoquark \cite{constraint}.
leptoquarks of electromagnetic strength coupling, this corresponds
to having a mass heavier than 50 TeV for the first generation
leptoquark. With such a heavy leptoquark, we still find a nonzero neutrino
magnetic moment $\mu_{\nu}$ up to $10^{-18}\,\mu_{B}$. For the second and third
generation leptoquarks, their masses are not severely constrained by the above
process. Assuming their masses lie somewhere between 1 and 100 TeV,
we obtain $\mu_{\nu}$ of order $10^{-12}\sim 10^{-16}\,\mu_{\rm B}$ (from the
second generation leptoquark) and $10^{-10}\sim 10^{-14}\,\mu_{\rm B}$ (from
the third generation leptoquark), respectively. Such predictions may already
have some observable effects such as those mentioned earlier.
\section{Neutrino magnetic moment in models with leptoquarks}
Leptoquarks arise naturally in many unification models which attempt to put
quarks and leptons in the same multiplet. There are scalar and vector
leptoquarks which may couple to left- and right-handed
neutrinos at the same time, but only vector leptoquarks can couple to the
upper component of the quark SU(2) doublet. The heaviness of the
top quark may enhance the neutrino magnetic moment once we use the vector
leptoquark to connect to the quark doublet. Of course, there are
subtleties regarding renormalization of the vector leptoquark which may be
treated in a way similar to gauge bosons. In our calculation, we adopt
Feynman rules in the $R_{\xi}$-gauge and take $\xi \to\infty$ at the end
of the calculation while neglecting all unphysical particles in the
$R_\xi$-gauge. (This is a step which has often been employed in nonabelian
gauge theories.)
We begin our analysis by constructing a general renormalizable
lagrangian for the quark-lepton-leptoquark coupling. Following \cite{buch},
we demand such action to be an SU(3)$\times$SU(2)$\times$U(1) invariant
which conserves the baryon and lepton numbers but, in addition to \cite{buch}
we add terms which couple to right-handed neutrinos. For leptoquarks with the
fermion number $F\equiv 3B+L=0$,
\begin{eqnarray}
{\cal L}_{F=0}&=& (g_{1L} {\bar Q_L}\gamma^\mu L_L+
g_{1R} {\bar D_R}\gamma^\mu l_R+
g^u_{1\nu} {\bar U_R}\gamma^\mu \nu_R)
\, V^{({2\over3})}_{1\mu}\nonumber\\
&&+(g^d_{2L} {\bar D_R} L^i_L i\tau_{2\,ij}+g_{2\nu}
{\bar Q_{L\,j}} \nu_R)\, S^{j({1\over6})}_2\nonumber\\
&&+(g^u_{2L} {\bar U_R} L^i_L i\tau_{2\,ij}+g_{2R}
{\bar Q_{L\,j}} l_R)\, S^{j({7\over6})}_2\nonumber\\
&&+g_{3L} {\bar Q_L} {\vec \tau} \gamma^\mu L_L\, {\vec V}^{({2\over3})}
_{3\mu}
+g^u_{1R} {\bar U_R} \gamma^\mu l_R\, V^{({5\over3})}
_{1\mu}
+g^d_{1\nu} {\bar D_R} \gamma^\mu \nu_R\, V^{(-{1\over3})}
_{1\mu}\nonumber\\
&&+{\rm c. c.},
\end{eqnarray}
and, for $F=\pm2$,
\begin{eqnarray}
{\cal L}_{F=2}&=& (h_{2L} {\bar u_R}^c\gamma^\mu L^i_Li\tau_{2ij}
+ h_{2\nu} {\bar Q_L}^ci\tau_{2ij}\gamma^\mu\nu_R)
\, V^{(-{1\over6})}_{2\mu}\nonumber\\
&&+(h_{1L} {\bar Q_L}^{i\,c} i\tau_{2\,ij} L^j_L
+h_{1R} {\bar U_R}^c l_R
+h_{1\nu} {\bar D_R}^c \nu_R)\, S^{({1\over3})}_1\nonumber\\
&&+(h_{2L} {\bar D_R}^c\gamma^\mu L^i_L i\tau_{2\,ij}
+h_{2R}{\bar Q_L}^{i\,c}i\tau_{2ij}\gamma^\mu l_R)\,
V^{j({5\over6})}_{2\mu}\nonumber\\
&&+h_{3L} {\bar Q_L}^{c\,i}{\vec\tau}i\tau_{2ij} L_L\, S^{({1\over3})}_{3}
+h_{1R} {\bar D_R}^c l_R\, S^{(-{4\over3})}_1
+h_{1\nu} {\bar U_R}^c \nu_R\, S^{({2\over3})}_1 \nonumber\\
&&+{\rm c. c.}.
\end{eqnarray}
The notation adopted above is self-explanatory; for example,
$S,\,V$ denotes scalar and vector leptoquarks respectively,
the superscript is its average electric charge or the hypercharge $Y$,
and the subscript of a leptoquark denotes which
SU(2) multiplet it is in, and the generation index is suppressed.
From here it is clear that
among those leptoquarks that couple to neutrinos of both chiralities,
a radiative $\nu\nu\gamma$ diagram with the exchange of
a virtual $U$-type quark can proceed only when accompanied by
a vector leptoquark, namely $V^{({2\over3})}_{1\mu}$ in
${\cal L}_{F=0}$ or $V^{(-{1\over6})}_{2\mu}$ in ${\cal L}_{F=2}$; on the other
hand, the exchange of a virtual $D$-type quark can proceed only with the scalar
leptoquark, namely $S^{({1\over6})}_{2}$ in ${\cal L}_{F=0}$ or
$S^{({1\over3})}_{1}$ in ${\cal L}_{F=2}$.
Note that we do not consider mixing between different leptoquarks due to Higgs
interactions, which will introduce additional parameters.
The diagram in question is shown explicitly in Fig. 1.
Given these couplings, it is straightforward to calculate induced neutrino
magnetic moments via one loop diagrams. To see that the heavy top quark mass
can enhance the prediction, we calculate the $\nu\nu\gamma$ diagram with the
exchange of up-type quark and $V^{({2\over3})}_{1\mu}$
i.e. the first term in ${\cal L}_{F=0}$.
As one of the standard methods to treat loop diagrams involving massive vector
particles, we use Feynman rules in the $R_\xi$-gauge and take $\xi \to \infty$
at the end of the calculation while neglecting any unphysical particle.
In addition to minimal substitution, we add the term
$e\,Q_v\,V^{\dag}_\mu\,V_\nu\,F^{\mu\,\nu}$
in the lagrangian,
such that the whole $VV\gamma$ coupling is in a form similar to the non-abelian
$WW\gamma$ type coupling,
and the procedure results in a finite limit under
$\xi\rightarrow\infty$. We obtain, with all couplings chosen to be real,
\begin{eqnarray}
{\cal L}^{eff} &= &-{e\over 2 m_e} {\bar \nu}
({\,\,\sigma^{\mu\nu} \over 2})\nu
F_{\mu\nu}\,F_2,\,\,\,\,\,
|\mu_\nu|={e\over 2 m_e} F_2 \\
F_2\,\,\,\,&=&{1\over16\pi^2}{2m_e\over M}\lbrace{g^u_L g_\nu\, m_q\over M_v}
\lbrack Q_q(\,f_1(a)+f_2(a)\,)+Q_v(\,f_3(a)+f_4(a)\,)\rbrack\nonumber\\
& &\,\,+(\,g_L^2+g^{u2}_\nu)\,{m_\nu\over M_v}\lbrack Q_q(\,g_1(a)+g_2(a)\,)
+Q_v(\,g_3(a)+g_4(a)\,)\rbrack\rbrace,
\end{eqnarray}
where $a=m_q^2/M_v^2$, $Q_q=-Q_v=2/3$, and $e>0$,
while $f_i$ and $g_i$ are given by
\begin{eqnarray}
f_1(a)&=&{2(-1+a^2-2a {\rm log}(a))\over (a-1)^3},
\nonumber\\
f_2(a)&=&-{a(3-4a+a^2+2 {\rm log}(a))\over 2(a-1)^3},
\nonumber\\
f_3(a)&=&-{3(-1+4a-3a^2+2a^2 {\rm log}(a))\over 2(a-1)^3},
\nonumber\\
f_4(a)&=&-1/2,
\nonumber\\
g_1(a)&=&{(-4-5a^3+9a+6a(2a-1) {\rm log}(a))\over 6(a-1)^4},
\nonumber\\
g_2(a)&=&{a(3-4a+a^2+2 {\rm log}(a))\over 4(a-1)^3},
\nonumber\\
g_3(a)&=&{(7-33a+57a^2-31a^3+6a^2(3a-1) {\rm log}(a))\over 12(a-1)^4},
\nonumber\\
g_4(a)&=&{(2-6a+15a^2-14a^3+3a^4+6a^2 {\rm log}(a))\over 12(a-1)^4}.
\end{eqnarray}
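As a sanity check (ours, with hypothetical test points), the loop functions above are regular at $a = 1$, where numerator and denominator both vanish; for instance $f_1 \to 2/3$ and $f_2 \to -1/3$ there:

```python
import math

def f1(a):
    return 2 * (-1 + a**2 - 2 * a * math.log(a)) / (a - 1)**3

def f2(a):
    return -a * (3 - 4 * a + a**2 + 2 * math.log(a)) / (2 * (a - 1)**3)

# Both numerator and denominator vanish as a -> 1; the limits are finite:
print(f1(1.001), f1(0.999))   # both ~2/3
print(f2(1.001), f2(0.999))   # both ~-1/3
```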
Note that one obtains the desired chiral structure for the magnetic moment
interaction in two different ways: the first is to have an odd number of
insertions of the quark mass term, giving rise to the first term in $F_2$;
the other is via the neutrino mass term, resulting in the second term of
$F_2$. There are two advantages to the first scenario. First of all, one can
obtain a nonzero magnetic moment without being restricted by the very light
neutrino mass. Second, the prediction may be enhanced considerably by the
heavy top quark mass.
\section{Constraints and Numerical Results}
Before working out numerical predictions, we need to consider the constraints
arising from the leptonic decays of pseudoscalar mesons, such as
$\pi^+\rightarrow{e^+}\nu$ \cite{constraint}. Integrating out $V^{({2\over3})}$
and performing Fierz reordering, we obtain ${\cal L}_{eff}$ relevant
to the leptonic decay of a pseudoscalar meson,
\begin{eqnarray}
{\cal L}_{eff}&={1\over M^2_v}&(2 g^*_{1L}g_{1R}{\bar D_R}U_L{\bar \nu_L}l_R
+2 g^{u*}_{1\nu}g_{1L}{\bar D_L}U_R{\bar \nu_R}l_L\nonumber\\
& &-g^*_{1L}g_{1L}{\bar D_L}\gamma^\mu U_L{\bar \nu_L}\gamma_\mu l_L
-g^{u*}_{1\nu}g_{1R}{\bar D_R}\gamma^\mu U_R{\bar \nu_R}\gamma_\mu l_R)
+{\rm c. c.}.
\end{eqnarray}
We consider the universality constraint arising from the $\pi^+$ leptonic
decay, and neglect the neutrino mass contribution.
Define $R=Br(\pi^+\rightarrow e^+\nu)/Br(\pi^+\rightarrow \mu^+\nu)$.
The first and third terms of ${\cal L}_{eff}$
interfere with the standard model Fermi
interaction. This gives a correction to $R$ of order $1/M^2_v$,
while the other terms give corrections of order $1/M^4_v$. Furthermore,
the first term, which is a scalar coupling, is enhanced by
a factor of $m^2_{\pi}/((m_u+m_d)m_e)$, so it is the dominant term
constraining the mass of the leptoquark. We assume $g^{u}_{1\nu}=g_{1L}=g_{1R}=g$,
which is a natural assumption for the vector leptoquark.
We obtain
\begin{equation}
R^{exp}=R^{sm} (1+2 {m^2_{\pi}\over m_e (m_u+m_d)}
(-{g^*_{1L}g_{1R}\over {\sqrt2} M^2_v G_F})),
\end{equation}
where the experimental average is $R^{exp}=(1.230\pm0.004)\times10^{-4}$ \cite{pdg},
and the standard model calculation gives $R^{sm}=(1.2352\pm0.0005)\times10^{-4}$
\cite{sirlin}. This corresponds to
\begin{equation}
M_v>g\, m_{\pi} {\sqrt{{\sqrt2}\over 0.0075 G_F m_e (m_u+m_d)}}
\sim 50 ({g\over e}){\rm TeV}.
\end{equation}
For a coupling of electromagnetic strength, this corresponds to a
first generation vector leptoquark mass greater than 50 TeV.
This constraint is in fact more severe than what we may obtain from atomic
parity violation experiments, which we shall therefore ignore in this paper.
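The bound above can be evaluated numerically; the sketch below is our own illustration, and the precise number depends on the light-quark masses adopted (not stated in the text), which we take to be $m_u+m_d \approx 15$~MeV. With $g=e$ the bound comes out at several tens of TeV, of the order of the quoted 50~TeV:

```python
import math

# Rough evaluation (ours) of the leptoquark mass bound; m_u + m_d is an
# assumed value, since the text does not state the quark masses used.
G_F = 1.1664e-5    # GeV^-2
m_pi = 0.13957     # GeV
m_e = 0.000511     # GeV
m_ud = 0.015       # GeV, assumed m_u + m_d
g = math.sqrt(4 * math.pi / 137.036)   # g = e, electromagnetic strength

bound_gev = g * m_pi * math.sqrt(math.sqrt(2) / (0.0075 * G_F * m_e * m_ud))
print(bound_gev / 1e3, "TeV")  # several tens of TeV
```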
For the second and third generation leptoquarks,
there is no direct restriction from the universality of the $\pi$ leptonic
decay, nor from the atomic parity violation experiment. Nevertheless,
one can find various lower bounds for the leptoquark mass \cite{pdg},
from direct searches at the HERA $ep$ collider,
the Tevatron $p\bar p$ collider, and at the LEP $e^+e^-$ collider.
Typical bounds from direct searches are about a few hundred GeV,
while the bounds from indirect searches are given in \cite{indirect}.
We shall consider a leptoquark mass in the general range of TeV's.
For comparison, let us briefly recall some of the upper limits
obtained from leptonic scattering, such as elastic $\nu$($\bar\nu$) scattering
with $l^+$($l^-$), $e^+e^-\rightarrow\nu{\bar\nu}\gamma$, etc.,
and also from astrophysical processes such as the cooling of helium stars,
the red giant luminosity, and so on \cite{pdg}. As a reference point, we recall the
standard model formula on the neutrino magnetic moment arising from a nonzero
neutrino mass\cite{fuji}, $\mu^{sm}_\nu=3.2\times10^{-19} m_\nu({\rm
eV})\mu_B$ (referred to as ``the extended standard electroweak theory'').
Accordingly, the upper limit of $\mu_{\nu}$ for the first
generation neutrino is $\mu^{sm}_\nu\leq2.3\times10^{-18}\mu_B$ with
$m_\nu\leq7.3$ eV. The upper limit may also be obtained from leptonic
scatterings, which is typically $10^{-10}\mu_B$, or
from astrophysics studies with a more stringent upper limit of $10^{-11}\mu_B$.
Our numerical results for the first generation are summarized in Fig. 2, where
the neutrino magnetic moment $\mu_\nu$ in units of $\mu_B$ is shown as a
function of the leptoquark mass. We note that, for the leptoquark mass
$[V^{({2\over3})}_{1\mu}]$ of 50 to 100 TeV, $\mu_\nu$ is of order
$10^{-18}\mu_B$, a value compatible with the extended standard electroweak
theory.
The upper limit of $\mu_{\nu}$ for the second generation neutrino is $0.51
\times 10^{-13}\mu_B$ (with $m_\nu\leq0.17$ MeV)
in the extended standard electroweak theory \cite{pdg},
or in the range of $10^{-10}\mu_B$ from leptonic scatterings,
while from astrophysics the typical value is $10^{-11}\mu_B$.
In Fig. 3, we describe our prediction on the neutrino magnetic moment
$\mu_\nu$ in units of $\mu_B$ as a function of the leptoquark mass of
1 to 100 TeV. We obtain $\mu_\nu$ around $10^{-12}\sim 10^{-16}\mu_B$, a value
very close to being observable.
The upper limit of $\mu_{\nu}$ for the third generation neutrino is $1.1 \times
10^{-11}\mu_B$ (with $m_\nu\leq35$ MeV) in the extended standard electroweak theory \cite{pdg}, or in
the range of $10^{-6}\sim10^{-7}\mu_B$ from leptonic scatterings, while from
astrophysics studies the upper limit is $10^{-12}\sim10^{-11}\mu_B$.
In Fig. 4, we plot the third generation neutrino magnetic moment
$\mu_\nu$ in units $\mu_B$ as a function of the leptoquark mass in the range of
1 to 100 TeV. We find that $\mu_\nu$ is of order $10^{-10}\sim 10^{-14}\mu_B$.
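The orders of magnitude quoted above can be recovered from a crude estimate (ours; it drops the $O(1)$ loop functions and charge factors in $F_2$ and assumes representative quark masses): $\mu_\nu/\mu_B \sim g^2 m_q m_e/(8\pi^2 M_v^2)$ for the chirality-flip term, with $g=e$:

```python
import math

# Crude estimate (ours): drop the O(1) loop functions and charges in F_2,
# keep the chirality-flip piece ~ g^2 m_q m_e / (8 pi^2 M_v^2), with g = e.
M_E = 0.511e-3                    # electron mass, GeV
G2 = 4 * math.pi / 137.036        # e^2

def mu_over_muB(m_q_gev, m_v_tev):
    m_v = m_v_tev * 1e3           # leptoquark mass in GeV
    return G2 * m_q_gev * M_E / (8 * math.pi**2 * m_v**2)

print(mu_over_muB(0.005, 50))     # 1st generation (u quark, 50 TeV): ~1e-18
print(mu_over_muB(1.25, 1))       # 2nd generation (c quark, 1 TeV):  ~1e-12
print(mu_over_muB(175.0, 1))      # 3rd generation (t quark, 1 TeV):  ~1e-10
```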
\section{Conclusion}
Vector leptoquarks in the TeV mass range, when coupled to both left- and
right-handed neutrinos, offer an alternative mechanism for generating a
nonvanishing neutrino magnetic moment, which in some cases is by no means
negligible. This alternative mechanism (which does not require a nonzero
neutrino mass) makes use of the special feature that leptoquarks couple
simultaneously to leptons and quarks. For the third generation neutrino, there
is a potential enhancement from the very large top quark mass making the
corresponding predicted neutrino magnetic moments fairly sizable.
\section{Acknowledgments}
We would like to acknowledge Dr. C.-T. Chan for valuable discussions. This work
was supported in part by a grant from National Science Council of Republic of
China (NSC88-2112-M002-001Y).
\section{Phenomenology}
Figure 1 is a high resolution optical micrograph of a submicron-sized, spinning
particle orbiting around the optical axis of a focused laser beam. In fact, the
particle is trapped in a stable orbit in the focal plane of the beam by an
interaction between the particle's spin moment and the large radial optical
intensity gradient characteristic of a focused, Gaussian coherent light source.
\begin{figure}[h]
\epsfxsize=8.5cm
\epsfysize=7cm
\centerline{\epsffile{orbit.eps}}
\caption[1]{\label{f1} Graphite particle in orbit about the optical axis, immersed in 1/2 atm. nitrogen. (1/500 sec at 1000x). }
\end{figure}
This phenomenon was first discovered by Steve Wilson [1] in 1965, when he
trapped micron sized, non-dielectric particles in the focal plane of a focused
Gaussian laser beam. The techniques used are described in detail in his
monograph on experimental microscopy [2].
The particles are contained in a glass cylinder at the focus of a Gaussian beam, and immersed in nitrogen gas at approximately 1/2 atm. The particles are observed to drop out of the beam when the cylinder is pumped down to approximately 0.1 atm. Hence, the mechanism of trapping is distinct from the mechanism of Ashkin [3], and Chu
[4], involving a gradient force on optically transparent, dielectric particles, which obtains in a vacuum.
Rather, the photograph above shows a spinning graphite particle which is
trapped by \emph{radiometric flow} [5], a fluid dynamical regime which depends on the
large temperature gradients induced on the particle by the absorption of optical
energy from the beam. The surface temperature gradients induced on the particle
by the beam cause radiometric forces which drive the system. We note that the
radiometric forces are completely due to the non-equilibrium condition of the hot gas around the particle.
The next photograph shows three spinning, graphite particles. The direction of
the beam is left to right; the circular orbit of each particle
lies in a plane normal to the optical axis. It is seen that the spin axis of
each particle is approximately along the direction of its orbit. The particles
are heated unevenly,
since the part of a particle closer to the optical axis will absorb more
radiant energy than the part further from the axis, and also the side of the particle
closest to the source of the beam (i.e., the front of the particle) will absorb more
energy than the back. This uneven absorption of optical energy induces a
radiometric moment on each particle which causes it to spin.
\begin{figure}[h]
\epsfxsize=8.5cm
\epsfysize=7cm
\centerline{\epsffile{spin.eps}}
\caption[1]{\label{f2} Three distinct carbon particles spinning on axes approximately
parallel to their orbital motion. The optical axis is from left to right, and
each particle is orbiting in a plane normal to the optical axis. (1/1500 sec at 2200x) }
\end{figure}
Furthermore, additional temperature gradients are induced along the particle spin
axis which cause it to orbit. The spinning particles exhibit a \emph{dynamic chirality} \ k(t) \ which couples the spin degree of freedom to the orbital motion. We have deduced the radiometric force laws which
cause the particles to spin and orbit from the steady solution of radiometric flow
for an ellipsoid (See Section 2). Indeed, a well-defined, small dimensional dynamical
system may be derived from these force laws which can be put on the computer, and
integrated. We find that the simulated system admits limit cycle solutions which are
periodic and stable. The existence of limit cycle solutions in the simulations
verifies
the stability criterion which was derived from pure analysis [6]. We conclude that the
existence of spinning, orbiting particles trapped in a Gaussian beam is a
nonlinear mode of a radiometric-mechanical system of particles immersed in a viscous
fluid.
Figure 3 shows a simulation of a 1 micron particle, initially at a
distance of 8 microns from the optical axis of a focused Gaussian beam with spot
diameter 5 microns. The particle has zero initial spin angular velocity $\omega_0$, and zero
initial orbital angular velocity $\Omega_0$. The large intensity gradient of the beam
immediately induces a large radiometric moment on the particle, causing it to spin,
and orbit the optical axis.
\begin{figure}[h]
\caption[1]{\label{f3} Figure 3 is attached separately, due to formatting difficulties.}
\end{figure}
The particle spirals into a limit cycle [7]. In fact any particle within approximately
3 times the beam spot radius will spiral into the same steady, stable orbit.
That is, the limit cycle attractor is the asymptotic orbit for all such particles,
for any reasonable initial spin and orbital angular velocities.
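The spiral into a limit cycle can be illustrated with a minimal numerical sketch. The model below is not the derived force-law system of Section 2 but a toy first-order analogue: a radial restoring term (standing in for the spin-gradient central force) plus a constant orbital drive; the equilibrium radius, rates, and time step are all assumed illustrative values.

```python
# Toy planar model of a particle spiralling into a stable circular orbit.
# NOT the derived radiometric force laws -- an illustrative sketch only:
# a radial restoring term pulls the particle toward the equilibrium radius
# r_star, while a constant orbital drive stands in for the driven rotation.

def integrate(r0, r_star=2.5e-6, gamma=5e3, omega_orb=3e3, dt=1e-5, steps=20000):
    """Forward-Euler integration of dr/dt = -gamma*(r - r_star), dtheta/dt = omega_orb."""
    r, theta = r0, 0.0
    for _ in range(steps):
        r += -gamma * (r - r_star) * dt
        theta += omega_orb * dt
    return r, theta

# Particles started at different radii converge to the same orbit,
# mimicking the limit cycle attractor described in the text.
r_a, _ = integrate(8e-6)   # start 8 microns from the axis
r_b, _ = integrate(1e-6)   # start 1 micron from the axis
```

Both trajectories end on the same circular orbit regardless of the starting radius, which is the defining property of the limit cycle attractor.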
The next photograph (Figure 4) shows multiple clouds of graphite particles which
are trapped near the focal plane. It is observed that the clouds repel each other, yet
appear to be electrostatically neutral. This can be seen from the observed null effect
of electrostatically charged probes brought into the vicinity of the orbiting
clouds.
\vspace{1.0cm}
(Figure 4 appears on the next page)
\begin{figure}[h]
\epsfxsize=8.5cm
\epsfysize=7cm
\centerline{\epsffile{bands.eps}}
\caption[1]{\label{f4} High resolution image of carbon particles immersed in 0.5 atm
of argon. Orbiting particles form multiple mutually repulsive discrete groups. The beam spot radius \ $\sigma$ $\approx$ 5 microns.
}
\end{figure}
\vspace{1.0cm}
We find that the clouds of particles are pushed towards the focal plane by
longitudinal forces which act along the optical axis. These are also radiometric forces
which arise from the fact that the surfaces of constant optical intensity of
a Gaussian beam are in fact hyperboloids [8], so that the spin axis of the orbiting
particles has a longitudinal component. The temperature gradient along this
spin axis drives the particles towards the focal plane, where they form a linear
array of stable, mutually repulsive clouds.
We therefore claim that the dynamics of multiple clouds of spinning, orbiting
particles can be accurately described by a system that assumes a non-local interaction
between the clouds. We note that for a 100 mW beam and 5 micron spot radius,
the particles will spin at approximately 100,000 rad/s, and will
orbit with an orbital angular velocity of approximately 3000 rad/s. The motion of
the "bare" particle will then induce a toroidal vortex ring in the fluid around
the orbiting particle. It is known that such vortex rings repel [9] by the laws
of fluid potential flow. Thus we explain the existence of multiple trapped clouds
of particles by the laws of vortex motion. We have derived an N-body force
law which describes the interaction of N toroidal clouds of particles, trapped
along the optical axis near the focal plane [10].
\section{Theory}
The theory of this complex physical system must depend on the basic laws of
radiometric flow. We follow Maxwell [11], who in 1880 derived the correct equations
of motion and boundary conditions for fluid flow around objects with large
temperature gradients. We have essentially followed his derivation using more modern
notation, and we find that we can give a rigorous definition of the regime of
radiometric flow at low Reynolds number,
where the Prandtl number $P_r$ = 1, the particle size is of order the mean free path
\ $\lambda$ \ of the immersing gas, which is moreover considered to be incompressible, with no
sources or sinks of heat except at the boundaries.
We find that the flow is identical to Navier-Stokes flow with the non-standard (radiometric)
boundary condition:
\vspace{0.5cm}
(1)\ \ \ \ \ $\vec{v}$ = $\vec{v}_{slip}(\nabla T)$ at the boundaries \ \ \ \ \ where
(2)\ \ \ \ \ $\vec{v}_{slip}(\nabla T)$ \ is a linear function of \ $\nabla T$ \ at the boundaries.
\vspace{0.5cm}
Maxwell's derivation of this regime depends on the Chapman-Enskog [12] approximation
of the velocity distribution of the gas just outside the particle, which is
in fact non-Maxwellian, i.e., in a state of non-equilibrium. This non-equilibrium
distribution is parameterized by 20 expansion coefficients, which may be expressed
in measurable quantities, such as the gas density, temperature, etc. Solutions
to the equations of motion with the correct radiometric boundary conditions have
allowed us to calculate the stress tensor of the immersing gas at the particle
surface, and therefore the radiometric forces and moments on a sphere, an ellipsoid
of revolution, and a circular flat plate, which is a degenerate case
of the ellipsoid [13].
For example, we may calculate the radiometric flow outside a circular plate normal to the z-axis, with the pure quadrupole temperature distribution
\vspace{0.5cm}
(3)\ \ \ \ \ T = T$(v,\phi)$ = $T_0$ + ($\delta T_Q$)[ 1/2 + $G_2(v,\phi)$] \ \ \ \ \ where
(4)\ \ \ \ \ $G_2(v, \phi)$ = (1/8) (3 $cos^2(v)$ - 1)
+ (3/4) $sin(2v)$ cos($\phi$)
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + (3/8) $sin^2(v)$ cos(2$\phi$)
\vspace{0.5cm}
This is the leading term of the surface temperature variation of an ellipsoidal particle irradiated by a beam with a radial intensity gradient $\nabla$I. The induced quadrupole moment \ $\delta T_Q$ \ is given by
\vspace{0.5cm}
(5)\ \ \ \ \ $\delta T_Q$ = (1/4) $a^2$ $\nabla$I / $k_p$
\vspace{0.5cm}
where \ $k_p$ \ is the thermal conductivity of the particle.
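To get a feel for the size of the induced quadrupole moment in Eq. (5), the short sketch below evaluates it for assumed, order-of-magnitude parameters: a 100 mW beam focused to a 5 micron spot, a 1 micron particle, a crude gradient scale $\nabla I \sim I_{peak}/\sigma$, and a particle conductivity of 10 W/(m K). These numbers are illustrative assumptions, not values taken from the text.

```python
import math

def quadrupole_moment(a, grad_I, k_p):
    """Eq. (5): induced quadrupole temperature moment, dT_Q = a^2 * grad(I) / (4 k_p)."""
    return 0.25 * a**2 * grad_I / k_p

# Illustrative (assumed) numbers: 100 mW beam, 5 micron spot, 1 micron particle.
P, sigma = 0.1, 5e-6                    # beam power [W], spot radius [m]
I_peak = P / (math.pi * sigma**2)       # peak intensity, ~1.3e9 W/m^2
grad_I = I_peak / sigma                 # crude radial gradient scale [W/m^3]
dTQ = quadrupole_moment(1e-6, grad_I, 10.0)   # a few kelvin for these inputs
```

The $a^2$ scaling means a particle twice as large acquires four times the quadrupole moment, all else being equal.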
The derivation of the exact velocity field around the plate requires the use of ellipsoidal coordinates, and is long and tedious. However, we are able to use the solution to calculate the stress tensor in these coordinates, and therefore the net radiometric moment on a circular plate of radius a
\vspace{0.5cm}
(6)\ \ \ \ \ $\vec{M}$ = $\hat{y}$ \ [ 9$\pi$ $\rho \nu^2$ a $\gamma^{\prime}$ ($\delta T_Q$/T) ]
\vspace{0.5cm}
where \ $\gamma^{\prime}$ \ is a geometrical factor of order unity, and where we have assumed that the optical axis is in the $\hat{z}$ direction. The kinematic viscosity \ $\nu$ = $\eta$ / $\rho$. We find that the spin axis of the particle is perpendicular to the flow of radiant energy of the beam. This radiometric moment then causes the particle to spin with an angular velocity \ $\omega$ $\propto$ $\gamma^{\prime}$ ($\delta T_Q$/T).
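A hedged numerical estimate of Eq. (6) follows. The gas parameters (nitrogen near 0.5 atm), the quadrupole moment, and the use of the Stokes rotational drag of a sphere, $8\pi\eta a^3 \omega$, to extract a steady spin rate are all assumptions of this sketch; the text itself treats plates and ellipsoids.

```python
import math

def radiometric_moment(rho, nu, a, gamma_p, dTQ, T):
    """Eq. (6): |M| = 9*pi*rho*nu^2*a*gamma'*(dT_Q/T)."""
    return 9 * math.pi * rho * nu**2 * a * gamma_p * dTQ / T

# Assumed parameters: nitrogen at ~0.5 atm, 300 K, 1 micron particle,
# geometrical factor gamma' = 1, quadrupole moment dT_Q = 6 K.
rho, eta = 0.58, 1.76e-5            # gas density [kg/m^3], viscosity [Pa s]
nu = eta / rho                      # kinematic viscosity [m^2/s]
a = 1e-6                            # particle radius [m]
M = radiometric_moment(rho, nu, a, 1.0, 6.0, 300.0)

# Steady spin rate from balancing M against the Stokes rotational drag
# of a sphere (an assumption for this estimate):
omega = M / (8 * math.pi * eta * a**3)
```

With these inputs the steady spin rate comes out at a few times $10^5$ rad/s, the same order of magnitude as the $\sim 10^5$ rad/s quoted in Section 1.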
Similarly, we may calculate the radiometric force on a circular plate with a dipole temperature variation. The result is
\vspace{0.5cm}
(7)\ \ \ \ \ $\vec{F}$ = $\hat{\phi}$ \ k(t) \ [ 6$\pi$ $\rho \nu^2$ $\gamma$ ($\delta T_D$/T) ]
\vspace{0.5cm}
The induced dipole moment \ $\delta T_D$ \ is proportional to \ $\nabla$I, and there is a small temperature differential along the spin axis which is proportional to k(t) $\delta T_D$ \ in the $\hat{\phi}$ direction. This is the orbital radiometric force which drives the particle against the orbital viscous drag. The dynamical chirality factor k(t) arises from the angular inertia of the spinning particle, and its law of transformation under rotations [14].
With these solutions, which are exact dipole and quadrupole solutions of the
equations of motion for steady flow at low Reynolds number, we are able to build an
analytical model of the radiometric particle trap dynamical system, i.e., we use the
exact expressions for the steady radiometric forces and moments as the force
laws which are incorporated into a low dimensional mechanical system which captures
the essential dynamics of orbiting, spinning particles trapped by the beam.
The main result of the analysis is the derivation of the so-called \emph{spin-gradient}
central force which holds the particle in its orbit [15]. This force arises as a
coupling of the particle spin angular velocity to the motion of its c.m. due to
the temperature variation of the kinematic viscosity \ $\nu$ $\propto$ $T^{3/2}$. We find
\vspace{0.5cm}
(8)\ \ \ \ \ $\vec{F}_{s.g.}$ = - $\hat{r}$ [ 18 $\rho \nu^2$
$\gamma^{\prime}$ ($\delta T_Q$/T) $\gamma$ ($\delta T_D$/T) ] $\propto$ $\omega$ $\nabla$I
\vspace{0.5cm}
The typical magnitude of the central acceleration is approximately 50 $m/s^2$
$\approx$ 5 g, sufficient to support the particles in a gravitational field. The
combination of the radiometric moment, orbital force, and spin-gradient trapping
force results in steady, stable, and approximately circular orbits.
The derivation of these force laws can be considered to be semi-rigorous,
consistent with the underlying gas-kinetic theory, and dimensionally consistent
with the regime of radiometric flow. We find that we can pin down all the geometrical
coefficients and other factors of order unity, resulting in a dynamical system
which depends on no free parameters, i.e., every factor of order unity is accounted
for.
\section{Conclusions}
We have discussed the existence of a steady, stable periodic mode of motion of the system of micron-sized particles immersed in a viscous fluid, trapped in the focal plane of a Gaussian beam. The fundamental force laws may be derived from first principles, i.e., gas-kinetic theory, and offer an explanation of the observed phenomena. This is one of the few methods of trapping non-dielectric particles (such as metallic contaminants) and may find important applications in the field of ultra-clean gas flows.
The system is driven by radiometric forces which arise from the non-Maxwellian
distribution of the gas molecules near the surface of the immersed particles, caused by the large temperature gradients induced by the beam. Because the Gaussian beam profile falls off so abruptly, the particle surface temperature variation contains a large quadrupole component which causes the particle to spin. The angular inertia of the spinning particle then results in a small coupling of the particle spin angular momentum into the orbital direction, which sustains its motion against the orbital viscous drag. Finally, the effect of the radial intensity gradient coupled with the particle spin produces an asymmetry of forces in the - $\hat{r}$ direction which causes the spin-gradient central force. We have given quantitative estimates for these forces and moments.
Furthermore, the particle trap theory makes qualitative as well as quantitative
predictions. It is non-obvious why the particles are caused to orbit, since a
Gaussian beam has rotational symmetry around the optical axis. The simulations show
that any particle with an infinitesimally small intrinsic chirality $k_0$ will result
in the spin of the particle being coupled into an orbital motion. We conjecture
that even a small helical component of radiation pressure arising from an
infinitesimally small admixture of a Laguerre-Gaussian (helical) mode [16] would
provide such an infinitesimal intrinsic chirality. Special holograms are available
to generate such modes [17], so that a small intrinsic chirality could be generated
with either sense, which should result in clockwise or counter-clockwise orbits.
The system thus exhibits ``dynamical symmetry breaking'' of the rotational
symmetry of the Gaussian beam.
\section{INTRODUCTION AND MOTIVATION}
\label{intro}
Nucleon-antinucleon ($N\bar N$) annihilation, due to the richness of possible
final meson states, is considered one of the major testing grounds in the
study of hadronic interactions.
Both quark \cite{Dover92} and baryon exchange models
\cite{Moussa84,Hipp91,Mull95a}
have been applied
to $N\bar N$ annihilation data.
However, the task of extracting information on the dynamics of
the $N\bar N$ process is enormously complicated by the influence
of initial and final state interactions.
Some of the simplest annihilation channels, where the theoretical complexity
of the $N\bar N$ annihilation process is partially reduced, are radiative
two-body decay modes, where final state interaction is negligible.
Experimental branching ratios for radiative decay channels in annihilation
from $p\bar p$ atoms were made available by recent
measurements of the Crystal Barrel collaboration at CERN,
performing a systematic study of the reactions $p\bar p \to \gamma X$
where $X = \gamma, \pi^0, \eta, \omega$ and $\eta ^\prime$ \cite{Ams93}.
Radiative decays of the $p \bar p$ atom where, in contrast to ordinary
production of nonstrange mesonic final states, isospin is not conserved,
are well suited for studying interference effects in the isospin transition
amplitudes \cite{Ams93,Delcourt82}.
The simplest and most natural framework in studying radiative decay modes is
the vector dominance model (VDM) \cite{Sakurai69}.
In setting up the annihilation mechanism one adopts a two-step process
where the $p\bar p$ system first annihilates into two mesons, with at least
one of the mesons being a vector meson ($\rho$ and $\omega$),
and where the produced vector meson converts into a real photon via the
VDM \cite{Delcourt82}.
In this case, production rates of radiative decay modes can be related to
branching ratios
of final states containing one or two vector mesons.
A first analysis \cite{Ams93} in the framework of VDM was performed by
Crystal Barrel,
showing that the interference in the isospin amplitudes is sizable and
almost maximally destructive for all channels considered.
The phase structure of the interference term is determined by two
contributions:
i) the relative signs of the generic strong transition amplitudes for
$p\bar p \to X \omega $ or $X \rho $ acting in different isospin channels;
ii) the presence of the initial state interaction in the $p\bar p$ atom,
which mixes the $p\bar p$ and $n\bar n$ configurations.
Similarly, analogous sources are responsible for the isospin interference
effects in the strong annihilation reactions $p\bar p \to K\bar K$
\cite{Dover92,Furui90,Jaenick91}.
Here, however, definite conclusions concerning the size and sign of the
interference terms depend strongly on the model used for the annihilation
dynamics.
In the present work we show how the determination of the interference terms
in the analysis of the radiative decays can be uniquely connected to the
isospin mixing effects in the $p\bar p$ atomic wave functions.
The extraction of the magnitude and sign of the interference from the
experimental data can in turn be used to investigate the isospin dependence,
at least in an averaged fashion, of the S-wave $N\bar N$ interaction.
We study this point for different $N\bar N$ interaction models.
This paper is organized as follows.
In Sec. \ref{form} we develop the formalism for radiative decays of
protonium.
As in Ref. \cite{Delcourt82} we adopt a two-step formalism, that is
$p\bar p$ annihilation into
two-meson channels containing a vector meson and its
subsequent conversion into a photon via the VDM.
Both steps are derived consistently from the underlying quark model
in order to fix the phase structure of the isospin dependent transition
amplitudes.
We also indicate the derivation of the branching ratios for radiative
decays of S-wave protonium, where the initial state interaction of
the atomic $p\bar p$ system is included.
Sec. \ref{results} is devoted to the presentation of the results.
We first perform a simple analysis to show that theoretically predicted
branching ratios for radiative decays are consistent with the experimental
data.
We then show that the isospin interference terms present
in the expression for the branching ratios can be uniquely connected
to the $p\bar p$ - $n\bar n$ mixing in the atomic wave function,
induced by initial state interaction.
We quantify the details of this effect for different models of the
$N\bar N$ interaction and apply
the formalism developed in Sec. \ref{form} to extract size and
sign of the interference from data, which will be shown to be sensitively
dependent on the kinematical form factors associated with the transition.
Furthermore, we comment on the application of VDM on the transition
$p\bar p \to \gamma \Phi$, where the corresponding large branching
ratio plays a central role in the discussion on the apparent violations of
the Okubo-Zweig-Iizuka (OZI) rule.
A summary and conclusions are given in Sec. \ref{sum}.
\section{FORMALISM FOR RADIATIVE DECAYS OF PROTONIUM}
\label{form}
In describing the radiative decays of protonium we apply the vector dominance
model \cite{Delcourt82,Sakurai69}.
We consider the two-step process of Fig. 1,
where the primary $p\bar p$ annihilation in a strong transition into a
two-meson final state, containing
the vector mesons $\rho$ and $\omega$, is followed by
the conversion of the vector meson into a real photon.
Here we restrict ourselves to orbital angular momentum L=0 for the initial
$p\bar p$ state, corresponding to the dominant contribution in the liquid
hydrogen data of Crystal Barrel \cite{Ams93}.
Furthermore, we consider the transition processes $p\bar p \to \gamma X$,
where $X = \gamma, \pi^0, \eta, \rho , \omega$ and $\eta ^\prime$,
with $X=\phi$ presently excluded.
The final state $\phi \gamma$ plays a special role in the discussion of the
apparent violation of the Okubo-Zweig-Iizuka (OZI) rule, where a strong
enhancement relative to $\omega \gamma$ was observed \cite{Ams95}.
Within the current approach the description of the first-step process
$p\bar p \to \omega (\rho) \phi$ and its phase structure cannot be accomodated
due to the special nature of the $\phi$, a dominant $s\bar s$
configuration.
Later on we will comment on the possibility to explain
the enhanced $\phi \gamma$ rate within the VDM, as suggested in the
literature \cite{Locher94}, and on the implications of the analysis
presented here.
In the two-step process we have to introduce a consistent
description for either transition in order to identify the source of the
interference term.
In particular, the relative phase structure of the strong transition
matrix elements $p\bar p \to \omega M$ versus $p\bar p \to \rho^0 M$
($M = \pi^0, \eta, \rho , \omega$ and $\eta ^\prime$) is a relevant input
in determining the sign of the interference.
Basic SU(3) flavor symmetry arguments \cite{Klempt96} do not allow one to uniquely
fix the phase structure, hence further considerations concerning
spin and orbital angular momentum dynamics in the $N\bar N$ annihilation
process have to be introduced.
Microscopic approaches to $N\bar N$ annihilation either resort to quark
models (for an overview see Ref. \cite{Dover92}) or are based on baryon exchange
models \cite{Moussa84,Hipp91,Mull95a}.
Here we choose the quark model approach, which allows one to describe both
the strong transition of $p\bar p$ into two mesons and the vector meson
conversion into a photon.
For the process $p\bar p \to V M$ where $ V = \rho , \omega $ and $M=
\pi^0,~\eta,~\rho,~\omega$ and $\eta^{\prime}$
we apply the so-called A2 model \cite{Dover92},
depicted in Fig. 2a.
In the discussion of annihilation models based on quark degrees of freedom
this mechanism was shown to give the best phenomenological description in
various meson branching ratios \cite{Dover92,Maruy87,Doverfish}.
In a recent work \cite{muhm} we showed that the A2 model combined with a
corresponding annihilation mechanism into three mesons can describe
$p\bar p$ cross sections in a quality expected from simple non-relativistic
quark models.
The transition matrix element of $p\bar p \to V M$ in the A2 model including
initial state interaction is given by:
\begin{eqnarray}
T_{N\bar N(I J) \to V M} & = &< V (j_1 =1) M(j_2 ) l_f \vert
{\cal O}_{A2} \vert N\bar N (IJ)> \nonumber \\
&=& \sum_j < j_1 j_2 m_1 m_2 \vert j m> <j l_f m m_f \vert J M > \nonumber \\
&& \cdot \vert \vec k \vert Y_{l_f m_f}(\hat k)
< V M \vert \vert {\cal O}_{A2} \vert \vert N \bar N (I J ) >
\label{a2def}
\end{eqnarray}
with the reduced matrix element defined as
\begin{equation}
< V M \vert \vert {\cal O}_{A2} \vert \vert N \bar N (I J ) >
= F(k) < IJ \to VM >_{SF} {\cal B}(I,J)~.
\label{reduced}
\end{equation}
The atomic $p\bar p$ state is specified by isospin component I and
total angular momentum $J=0,1$,
the latter values corresponding to the $^1S_0$ and $^3SD_1$ states
respectively.
The two-meson state $VM$ is specified by the intrinsic spin $j_{1,2}$,
the total spin coupling $j$, the relative orbital angular momentum
$l_f =1 $ and the relative momentum $\vec k$.
Eq. (\ref{reduced}) includes a final state form factor $F(k)$,
the spin-flavor weight $< IJ \to VM >_{SF}$ and an initial state interaction
coefficient ${\cal B}(I,J)$, containing the distortion in the protonium
state J with isospin component I.
Detailed expressions for these factors are summarized in Appendix \ref{appA}.
For the process $V \to \gamma $ (Fig. 2b), where the outgoing photon with
energy
$k^0$ is on-mass shell, we obtain, with the details shown in Appendix
\ref{appB}:
\begin{equation}
T_{V\to \gamma } = \vec \epsilon \cdot \vec S (m_1) ~ Tr ( Q \varphi_V ) ~
{ e~ m_{\rho}^{3/2} \over (2k_0)^{1/2} f_{\rho} }
\end{equation}
where $\vec \epsilon $ and $\vec S (m_1) $, with projection $m_1$,
are the polarization vectors of $\gamma $
and V, respectively.
The flavor dependence of the transition is contained in the factor
$Tr (Q \varphi_V )$, where Q is the quark charge matrix and
$\varphi_V$ the $Q\bar Q$ flavor
wave function of vector meson V.
In setting up the two-step process $N\bar N (IJ) \to V M \to \gamma M$
we use time-ordered perturbation theory with the resulting matrix
element \cite{Pilkuhn}
\begin{equation}
T_{N\bar N(I J) \to V M \to \gamma M } =
\sum_{m_1}~
T_{V\to \gamma }~
{2 m_V \over m_V^2 - s} ~
T_{N\bar N(I J) \to V M}
\end{equation}
where the relativistic propagator for the intermediate vector meson
in a zero width approximation is included.
We resort to a relativistic prescription of the vector meson, since,
with the kinematical constraint $\sqrt{s} =0$, V has to be treated as a virtual
particle, which is severely off its mass-shell.
Accordingly, an additional factor $2m_V$, with the vector meson mass $m_V$,
has to be included to obtain the proper normalization.
From redefining
\begin{equation}
T_{V\to \gamma }{2 m_V \over m_V^2 -s }
\equiv
\vec \epsilon \cdot \vec S (m_1) ~A_{V\gamma}
\end{equation}
we generate the standard VDM expression of
\begin{equation}
T_{N\bar N(I J) \to V M \to \gamma M } =
\sum_{m_1}
T_{N\bar N(I J) \to V M} ~\vec \epsilon \cdot \vec S (m_1)~ A_{V \gamma }~.
\end{equation}
The VDM amplitude $A_{V \gamma }$, derived in the quark model, is:
\begin{equation}
A_{V \gamma } = \sqrt{2} ~ Tr( Q \varphi_V ) \sqrt{m_V \over k^0}
{e \over f_{\rho} } ~,
\end{equation}
which in the limit $m_V \approx k^0$ reduces to the well-known
results of \cite{Sakurai69}
\begin{equation}
A_{\rho \gamma } = e/f_{\rho} = 0.055 ~~{\rm and}~~A_{\omega \gamma}
= {1\over 3} A_{\rho \gamma } ~.
\end{equation}
The phase structure of $A_{V\gamma}$, as determined by $\varphi_V$,
is consistent with the corresponding definitions
for the strong transition matrix element.
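The 1/3 ratio between the $\omega$ and $\rho$ photon couplings follows directly from the trace factor $Tr(Q\varphi_V)$. The sketch below checks this in the $m_V \approx k^0$ limit, assuming the standard quark charges and ideally mixed flavor wave functions in the (u, d) sector (strange quark omitted); the numerical value $e/f_{\rho} = 0.055$ is taken from the text.

```python
import math

def trace_Q_phi(phi):
    """Tr(Q * phi_V) with the quark charge matrix Q = diag(2/3, -1/3) in (u, d) space;
    phi holds the diagonal elements of the vector-meson flavor wave function."""
    Q = [2 / 3, -1 / 3]
    return sum(q * p for q, p in zip(Q, phi))

# Flavor wave functions (diagonal elements, ideal mixing assumed):
# rho0 = (u ubar - d dbar)/sqrt(2),  omega = (u ubar + d dbar)/sqrt(2).
phi_rho = [1 / math.sqrt(2), -1 / math.sqrt(2)]
phi_omega = [1 / math.sqrt(2), 1 / math.sqrt(2)]

# A_Vgamma = sqrt(2) * Tr(Q phi_V) * (e/f_rho) in the m_V ~ k0 limit:
e_over_frho = 0.055
A_rho = math.sqrt(2) * trace_Q_phi(phi_rho) * e_over_frho
A_omega = math.sqrt(2) * trace_Q_phi(phi_omega) * e_over_frho
```

The trace factor evaluates to 1 for the $\rho^0$ and 1/3 for the $\omega$, reproducing $A_{\omega\gamma} = A_{\rho\gamma}/3$.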
In the radiative annihilation amplitude, the coherent sum of
amplitudes for $V= \rho$ and $\omega $, arising from different isospin
channels, has to be taken.
This gives:
\begin{equation}
T_{N\bar N (J)\to \gamma M} =
\sum_{V= \rho , \omega } \delta \cdot T_{N\bar N (IJ)\to V M \to \gamma M}
\end{equation}
where $\delta =1 $ for $V\neq M$ and $\delta =\sqrt{2}$ for $V=M$.
The additional $\delta $ accounts for the two possible contributions
to the amplitude from an intermediate state with $V=M$,
including a Bose-Einstein factor.
For the decay width of $N\bar N\to \gamma X$ we write
\begin{equation}
\Gamma_{N\bar N (J) \to \gamma X} =
2 \pi \rho_f \sum_{M, \epsilon_T , m_2 }
{1 \over (2J+1)} \vert T_{N\bar N (J)\to \gamma M} \vert ^2
\end{equation}
$\rho_f $ is the final state density and the sum is over the final state
magnetic quantum numbers of meson X ($m_2$) and of the photon (with transverse
polarization $\epsilon_T$).
The corresponding branching ratio B is:
\begin{equation}
B(\gamma X ) = { (2J+1) \over 4 \Gamma_{tot}(J) }
\Gamma_{N\bar N (J) \to \gamma X}
\end{equation}
where a statistical weight of the initial protonium state J with decay width
$\Gamma_{tot} (J)$ is taken.
With the details of the evaluation indicated in Appendix \ref{appC}, we finally
obtain for the branching ratios of $p\bar p \to \gamma X $
($X= \pi^0, \eta , \eta^{\prime}$):
\begin{eqnarray}
B( \gamma \pi^0) &=& {3\over 4 \Gamma_{tot}(J=1)}
f( \gamma , \pi^0 ) A_{\rho \gamma}^2 \nonumber \\
&&\cdot \vert {\cal B}(0, 1) <^{13}SD_1 \to \rho^0 \pi^0 >_{SF} +
{1 \over 3}{\cal B}(1, 1) <^{33}SD_1 \to \omega \pi^0 >_{SF}
\vert^2 ~.
\end{eqnarray}
Alternatively, $B(\gamma \pi^0)$ can be expressed in terms of
the branching ratios $B(V\pi^0)$ for the strong
transitions $N\bar N \to V \pi^0$ (Eq. (\ref{A14}) of Appendix \ref{appA}):
\begin{equation}
B( \gamma \pi^0) = { f(\gamma , \pi^0 ) \over f( V , \pi^0 )}
A_{\rho \gamma }^2
\left( B(\rho^0 \pi^0 ) + {1\over 9} B(\omega \pi^0 )
+ {2 \over 3} cos \beta_{J=1} \sqrt{B(\rho^0 \pi^0 )B(\omega \pi^0 )}\right)
\label{branpg}
\end{equation}
with the interference phase $\beta_{J=1}$ determined by
\begin{equation}
cos \beta_{J=1} = {Re \left\{ {\cal B}(0,1)^{\ast}{\cal B}(1,1) \right\}
\over \vert {\cal B}(0,1){\cal B}(1,1) \vert }~.
\label{interf1}
\end{equation}
The same equations apply for $X= \eta$ and $\eta^{\prime}$ with $\pi^0$ being
replaced by the respective meson.
Here, a kinematical phase space factor $f$ is introduced, which can
be identified with those derived in specific models
(Eqs. (\ref{A13}) and (\ref{C10})) or taken from phenomenology.
Values for the branching ratios on the right hand side of Eq. (\ref{branpg})
can either
be taken directly from experiment or determined in the quark model
considered in Appendix \ref{appA}.
Magnitude and sign of the interference term, as determined by
$cos \beta_{J=1}$, solely depends on initial state interaction for the
spin-triplet $N\bar N$ state (J=1), as expressed by the coefficients
${\cal B}(I, J=1)$.
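As a numerical illustration of Eq. (\ref{branpg}), the sketch below evaluates the radiative branching ratio for hypothetical strong branching ratios and compares fully destructive ($cos \beta_{J=1} = -1$) against fully constructive ($+1$) interference. The input branching ratios and the unit value for the kinematical form-factor ratio are assumptions for illustration, not fitted numbers.

```python
import math

def B_gamma_pi(B_rho_pi, B_omega_pi, cos_beta, A_rho_gamma=0.055, ff_ratio=1.0):
    """Eq. (branpg): B(gamma pi0) from the strong V pi0 branching ratios.
    ff_ratio stands for the kinematical factor f(gamma,pi0)/f(V,pi0) (assumed 1 here)."""
    return ff_ratio * A_rho_gamma**2 * (
        B_rho_pi + B_omega_pi / 9
        + (2 / 3) * cos_beta * math.sqrt(B_rho_pi * B_omega_pi))

# Hypothetical strong branching ratios (illustrative values only):
B_rho, B_omega = 1.6e-2, 5.2e-3
destructive = B_gamma_pi(B_rho, B_omega, -1.0)
constructive = B_gamma_pi(B_rho, B_omega, +1.0)
```

For $cos \beta_{J=1} = -1$ the bracket reduces to $(\sqrt{B(\rho^0\pi^0)} - \sqrt{B(\omega\pi^0)}/3)^2$, so the radiative rate is suppressed but remains non-negative.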
Similarly, for the branching ratios of $p\bar p \to \gamma X$ $(X=\rho^0 ,
\omega , \gamma )$, now produced from the spin-singlet state (J=0) of
protonium, we obtain:
\begin{equation}
B( \gamma \rho^0) = { f(\gamma , \rho^0 ) \over f( V , V)}
A_{\rho \gamma }^2
\left( {1\over 9} B(\rho^0 \omega ) + {2} B( \rho^0 \rho^0 )
+ {2 \sqrt{2} \over 3} cos \beta_{J=0} \sqrt{B(\rho^0 \omega )B(\rho^0 \rho^0)
}\right) ~,
\label{branrg}
\end{equation}
\begin{equation}
B( \gamma \omega ) = { f(\gamma , \omega ) \over f( V , V)}
A_{\rho \gamma }^2
\left( B(\rho^0 \omega ) + {2\over 9} B( \omega \omega )
+ {2 \sqrt{2} \over 3} cos \beta_{J=0} \sqrt{B(\rho^0 \omega )
B(\omega \omega ) }\right) ~,
\label{branog}
\end{equation}
and
\begin{eqnarray}
B( \gamma \gamma ) = { f(\gamma , \gamma ) \over f( V , V)}
A_{\rho \gamma }^4&
\left\{ B(\rho^0 \rho^0 ) + {2\over 9} B( \omega \rho^0) +
{1 \over 81} B( \omega \omega )
+{2 \over 9} \sqrt{ B( \rho^0 \rho^0 ) B( \omega \omega )} +
\right.
\nonumber \\
& \left. + {2 \sqrt{2} \over 3} cos \beta_{J=0} \sqrt{B(\rho^0 \omega )}
\left( \sqrt{B(\rho^0 \rho^0 )} + {1\over 9} \sqrt{B(\omega \omega )}
\right) \right\}
\label{brangg}
\end{eqnarray}
with the interference determined as
\begin{equation}
cos \beta_{J=0} = {Re \left\{ {\cal B}(0,0)^{\ast}{\cal B}(1,0) \right\}
\over \vert {\cal B}(0,0){\cal B}(1,0) \vert }~.
\label{interf0}
\end{equation}
Again, the sign and size of the interference $cos \beta_{J=0}$ are fixed by
the initial state interaction, here for protonium states with J=0.
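Once the complex distortion coefficients ${\cal B}(I,J)$ are known, the interference terms of Eqs. (\ref{interf1}) and (\ref{interf0}) follow directly. The sketch below evaluates the defining ratio for invented coefficients with nearly opposite phases (magnitudes and phases are purely illustrative), yielding the near-maximal destructive interference discussed in the text.

```python
import cmath

def interference_phase(B0, B1):
    """cos(beta) = Re(B0* B1) / |B0 B1| for complex distortion coefficients
    B0 = B(I=0,J) and B1 = B(I=1,J)."""
    return (B0.conjugate() * B1).real / (abs(B0) * abs(B1))

# Hypothetical coefficients mimicking isospin-dependent distortion with
# nearly opposite phases (illustrative values only):
B0 = cmath.rect(1.0, 0.1)          # isospin-0 amplitude
B1 = cmath.rect(0.7, 0.1 + 2.9)    # isospin-1 amplitude, rotated ~166 degrees
cos_beta = interference_phase(B0, B1)
```

Note that the magnitudes cancel in the ratio: only the relative phase of the two coefficients determines $cos \beta_J$.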
Eqs. (\ref{branpg}), (\ref{branrg}) and (\ref{branog}) are analogous to
those of Ref. \cite{Delcourt82};
this is also true for Eq. (\ref{brangg}) in the SU(3) flavor limit
with $B(\rho^0 \rho^0)= B(\omega \omega )$.
However, the essential and new feature of the present derivation is that
the interference term is completely
traced to the distortion in the initial protonium state.
The possibility to link the interference terms $cos \beta_J$ to the initial
state interaction in protonium is based on the separability of the
transition amplitude $T_{N \bar N (IJ) \to V M }$.
The sign and size of $cos \beta_J$ (J=0,1) will have a direct physical
interpretation, which will be discussed in the following chapter.
We briefly comment on alternative model descriptions for the strong transition
amplitudes $N\bar N \to VM$ and its consequences for the interference terms
in radiative $p\bar p$ decays.
Competing quark model approaches in the description of $N\bar N$ annihilation
into two mesons concern rearrangement diagrams as opposed to the planar diagram
of the A2 prescription of Fig. 2a.
In the rearrangement model a quark-antiquark pair of the initial $N\bar N$ state
is annihilated and the remaining quarks rearrange into two mesons.
The quantum numbers of the annihilated quark-antiquark pair are either that
of the vacuum ($^3P_0$-vertex, R2 model \cite{Green84}) or that of a gluon
($^3S_1$-vertex, S2 model \cite{Maruy85,Henley86}).
In the R2 model, two ground state mesons cannot be produced from an initial
$N\bar N$ state in a relative S-wave;
hence R2 is not applicable to the annihilation process considered here.
The S2 model generates transition matrix elements for $p\bar p \to VM$,
which are analogous to the ones of the A2 model of Eqs. (\ref{a2def})
and (\ref{reduced}), but with different absolute values for the spin-flavor
weights $<IJ \to VM>_{SF}$ \cite{Maruy85,Henley86}.
However, the relative signs of the matrix elements $<IJ \to \rho M>$ and
$<IJ \to \omega M>$ are identical to the ones of the A2 model, except in the
case $M=\eta$ where it is opposite.
Therefore, results for branching ratios $B(\gamma M)$ of radiative decays
expressed in terms of the branching ratios $B(VM)$ are identical both in the
A2 and the S2 approach, except for $B(\gamma \eta)$ where $cos \beta_{J=1}$
changes sign.
But, as will be shown later, the sign structure of $cos \beta_J$ deduced in
the framework of the A2 model is consistent with the one deduced from
experiment.
Possible deviations from the formalism presented here include contributions from
virtual $N\bar \Delta \pm \Delta \bar N$ and $\Delta \bar \Delta$ states
to the annihilation amplitudes as induced by initial state interaction.
The role of $\Delta$ state admixture and its effect on $p\bar p$ annihilation
cross sections in the context of quark models was studied in
Refs. \cite{Maruy87,GreenNis}.
Although contributions involving annihilation from $N\bar \Delta$ and $\Delta
\bar \Delta$ states can be sizable \cite{GreenNis}, the overall effect
on the annihilation cross section is strongly model dependent.
In the case of the A2 model \cite{Maruy87}, these contributions are found to
be strong for $N\bar N$ D-wave coupling to channels with a virtual $\Delta$ in
the S-wave, hence dominantly for the $^{13}SD_1$ partial wave, where for
isospin $I=0$ the tensor force induces strong mixing.
However, for the radiative decay processes at rest considered here, the
possible $N\bar \Delta \pm \Delta \bar N$ configurations only reside in the
$^{33}SD_1$ state (here $^{33}SD_1 \to \pi^0 \omega$ and
$^{33}SD_1 \to \eta \rho^0$).
Due to the weak D-wave coupling in the I=1 channel, $N\bar \Delta$
configurations play a minor role and are neglected.
Alternatively, the strong transition amplitudes $N\bar N \to VM$ can be
derived in baryon exchange models \cite{Moussa84,Hipp91,Mull95a}.
In this case, however, the analysis is strongly influenced by the presence
of both vector and tensor couplings of the vector mesons to the nucleon,
by contributions of both $N$ and $\Delta$ exchange (where the latter
contributes to the $\rho^0 \rho^0$ and $\pi^0\rho^0$ channels) and by the
addition of vertex form factors.
The interplay between these additional model dependencies complicates an
equivalent analysis.
For simplicity we restrict the present approach to a certain class of quark
models, although deviations from the analysis given below when applying
baryon exchange models cannot be excluded.
\section{PRESENTATION OF RESULTS}
\label{results}
In Sec. \ref{branch} we discuss the direct application of the quark model
approach
to the radiative $N\bar N$ annihilation process.
In Sec. \ref{isospint} we focus specifically on the isospin interference effects
occurring in radiative transitions.
We show that the interference term is solely determined
by the isospin dependent $N \bar N$ interaction,
and give theoretical predictions for the phase
$\cos \beta_J$ in various $N\bar N$ interaction models.
Sign and size of $\cos \beta_J$ can be interpreted in terms of the dominance of
either the $p\bar p$ or the $n\bar n$ component of the protonium
wave function in the annihilation region.
Furthermore we show that extraction of the interference term from
experimental data is greatly affected by the choice of the kinematical
form factor.
Finally we comment on the applicability of the vector dominance approach
to the $p\bar p \to \gamma \phi$ transition.
\subsection{Branching ratios of radiative protonium annihilation}
\label{branch}
In a first step we directly evaluate the expressions for $B(\pi^0\gamma)$
and $B( X \gamma)$, $X=\eta ,~\omega , ~\eta^{\prime} ,~\rho$ and $\gamma $,
as given by Eqs. (\ref{branpg}), (\ref{branrg}) - (\ref{brangg})
and Eq. (\ref{A14}).
To reduce the model dependencies we choose a simplified phenomenological
approach as advocated in studies for two-meson branching ratios in $N\bar N$
annihilation \cite{Dover91}.
The initial state interaction coefficients ${\cal B}(I,J)$
are related to the probability for a protonium state with spin J and isospin I,
with the normalization condition $\vert {\cal B}(0,J) \vert^2 +
\vert {\cal B}(1,J) \vert^2 =1$.
The total decay width of state J is given by $\Gamma_{tot} (J)$ with the
separation into isospin contributions as $\Gamma_{tot} (J) = \Gamma_0 (J)
+ \Gamma_1 (J)$.
We identify the ratio of isospin probabilities $\vert {\cal B}(0,J) \vert^2
/ \vert {\cal B}(1,J) \vert^2$ with that of
partial annihilation widths $\Gamma_0 (J) / \Gamma_1 (J)$.
For our calculations we adopt the isospin probabilities deduced from
protonium states obtained with
the Kohno-Weise $N\bar N$ potential \cite{Kohno86},
where $p\bar p -n\bar n$ isospin mixing and tensor coupling
in the $^3SD_1$ state are fully included \cite{Carbonell89}.
The resulting values for ${\cal B} (I,J)$ are
shown in Table \ref{tab1}.
The kinematical form factor $f( \gamma , X)$ is taken of the
form \cite{Vander88}
\begin{equation}
f(\gamma , X) = k \cdot \exp\left\{ -A \sqrt{ s - m_X^2} \right\}
\label{van}
\end{equation}
where $k$ is the final state c.m. momentum and $\sqrt{s}$ the total energy.
The constant $A=1.2~GeV^{-1}$ is obtained from a phenomenological fit
to the momentum dependence of various multipion final states in
$p\bar p$ annihilation \cite{Vander88}.
Results for the branching ratios in this simple model ansatz
are given in Table \ref{tab2}.
For the decay modes $\eta \gamma$ and $\eta^{\prime} \gamma $ we use
a pseudoscalar mixing angle of $\Theta_p = -17.3^{\circ}$ \cite{Ams92}.
The model contains a free strength parameter, corresponding to the strong
annihilation into
two mesons in the two-step process.
Since we compare the relative strengths of the branching ratios,
we choose to normalize the entry for $B(\gamma \pi^0)$
to the experimental number.
The A2 quark model prediction for the hierarchy of branching ratios
is consistent with experiment.
In particular, the relative strength of transitions from the spin-singlet
$(^1S_0)$ and triplet ($^3SD_1$) $N\bar N$ states is well
understood.
The results of Table \ref{tab2} give a first hint that the VDM approach is a
reliable tool in analysing the radiative decays of protonium.
Furthermore, all considered branching ratios are consistent with
minimal kinematical and dynamical assumptions.
We stress that the good quality of the theoretical fit to the experimental
data of Table \ref{tab2} should not be overemphasized given the simple
phenomenological approach where initial state interaction is introduced
in an averaged fashion.
Although the A2 model provides a reasonable account of $N\bar N$
annihilation data, discrepancies remain in certain two-meson channels
\cite{Dover92,Dover91}.
In particular, observed two-meson annihilation branching ratios can show strong
deviations from simple statistical or flavor symmetry estimates
(dynamical selection rules), which cannot be fully described by existing
models.
Furthermore, theoretical predictions for two-meson branching ratios can be
strongly influenced by initial
state $N\bar N$ interaction (see for example Ref. \cite{Maruy88}),
as in the case of radiative decays,
but also by the possible presence of final state meson-meson scattering
\cite{muhm,Mull95b}.
Given these limitations in the understanding of two-meson annihilation
phenomena, we will in the following focus mainly on the determination of the
interference term present in radiative $p\bar p$ decays.
Here $N\bar N$ annihilation model dependencies are avoided by resorting
to the experimentally measured two-meson branching ratios.
\subsection{Isospin interference and initial state interaction}
\label{isospint}
In a second step we focus on the determination and interpretation of
the isospin interference terms $\cos \beta_J $ (J=0,1) given by
Eqs. (\ref{interf1}) and (\ref{interf0}),
which in turn are related to the $N\bar N$ initial state
interaction via the coefficients ${\cal B}(I, J)$ in Eq. (\ref{A10}).
A full treatment of protonium states must include both the Coulomb and
the shorter ranged strong interaction, with the coupling of
$p\bar p$ and $n\bar n$ configurations taken into account.
The isospin content of the corresponding protonium wave function $\Psi $
depends on r; for large distances $\Psi $ approaches a pure $p\bar p$
configuration, i.e., $\Psi (I=0) = \Psi (I=1)$.
As r decreases below 2 fm, $\Psi$ starts to rotate towards an isospin
eigenstate, i.e. $\Psi $ takes the isospin of the most attractive potential
in the short distance region.
The $N\bar N$ annihilation process under consideration here is most
sensitive to the behaviour of $\Psi $ for $r \leq 1 $ fm,
where the strong spin- and isospin dependence of the $N\bar N$ interaction
may cause either the $I=0$ or the $I=1$ component to dominate.
The consequences of the spin-isospin structure for energy shifts and
widths of low lying protonium states have been discussed in Refs.
\cite{Carbonell89,Kauf79,Richard82}.
The sensitivity of $p \bar p - n\bar n$ mixing in protonium states
to changes in the meson-exchange contributions to the $N \bar N$ interaction
was explored in \cite{Dover91}.
Let us first discuss the physical interpretation of the interference terms
$\cos \beta_J $.
For a protonium state described by a pure $p\bar p$ wave function
in the annihilation region, the isospin dependent initial
state interaction coefficients are equal, ${\cal B}(I=0 , J) =
{\cal B}(I=1 , J)$.
Similarly, for a protonium state given by a pure $n\bar n$ wave function
in the annihilation region, that is $\Psi (I=0) = - \Psi (I=1)$,
${\cal B}(I=0 , J) = - {\cal B}(I=1 , J)$.
Together with Eqs. (\ref{interf1}) and (\ref{interf0}), we obtain
for the interference terms
\begin{equation}
\cos \beta _J
= \left\{ \begin{array}{*{2}{c}}
+1 & \, {\rm for~ pure~ } p\bar p \\
-1 & \, {\rm for~ pure~ } n\bar n
\end{array}\right. ~.
\end{equation}
Therefore, a dominant $p\bar p$ component in the protonium wave function leads
to constructive interference in radiative annihilation, with $\cos \beta_J =1$.
Destructive interference reflects the dominance of the $n\bar n$ component
in the annihilation region of the protonium state.
Given this direct physical interpretation of the interference terms,
radiative annihilation serves as an indicator for the isospin dependence
of the $N\bar N$ protonium wave functions.
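The limiting values can be made explicit in a short sketch. Assuming, consistent
with Eqs. (\ref{interf1}) and (\ref{interf0}), that the interference term is the
normalized real part of the product of the initial state coefficients, one has
\begin{equation}
\cos \beta_J = \frac{ {\rm Re}\, [ {\cal B}(0,J)\, {\cal B}^{*}(1,J) ] }
{ \vert {\cal B}(0,J) \vert \, \vert {\cal B}(1,J) \vert }~, \qquad
{\cal B}(0,J) = \pm \, {\cal B}(1,J) \;\Longrightarrow\; \cos \beta_J = \pm 1~,
\end{equation}
so the pure $p\bar p$ and pure $n\bar n$ limits saturate the physical bounds of
the interference term.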
For quantitative predictions of the interference terms and for comparison,
we resort to protonium wave functions calculated \cite{Carbonell89} with
three different potential models
of the $N\bar N$ interaction,
that by Kohno and Weise
\cite{Kohno86} (KW) and the two versions of the Dover-Richard
\cite{Dover80,Richard82} (DR1 and DR2) potentials.
The calculation of Ref. \cite{Carbonell89}
takes full account of the neutron-proton mass difference, tensor coupling
and isospin mixing induced by the Coulomb interaction.
Results for the interference terms $\cos \beta _J$ as deduced from
the three different potential models are given in Table \ref{tab3}.
The value of the range parameter $d_{A2}$ in the initial state form factor
entering in Eq. (\ref{A10}) is adjusted to the range of the annihilation
potential of the respective models.
With the choice of $d_{A2} = 0.12 ~fm^2$ (KW and DR2) and
$d_{A2} = 0.03 ~fm^2$ (DR1) the calculated ratios of isospin probabilities
$\vert {\cal B}(0,J) \vert^2 / \vert {\cal B}(1,J) \vert^2$ are close to
those of partial annihilation widths $\Gamma_0 (J) / \Gamma_1 (J)$
calculated in Ref. \cite{Carbonell89}.
All three potential models consistently predict constructive interference
for radiative annihilation from the atomic $^1S_0$ state, indicating
a dominant $p\bar p$ component.
For radiative annihilation from the spin triplet state $^3S_1$
the predictions range from nearly vanishing (DR1) to a sizable destructive
interference, where the latter effect can be traced to a dominant
short ranged $n \bar n$ component in the protonium state.
In Table \ref{tab3} we also indicate predictions for the interference
term $\cos \beta_1$, where the D-wave admixture in the $^3SD_1$ state
has been included.
The results are obtained for the specified values of $d_{A2}$ with an
additional choice of hadron size parameters (that is $R_N^2 /R_M^2 = 0.6$
or $<r^2 >^{1/2}_N / <r^2>^{1/2}_M =1.2$) entering in the expression of
Eq. (\ref{A10}).
The inclusion of D-wave admixture in the initial state interaction coefficients
${\cal B}(I,J=1)$ as outlined in Appendix \ref{appA} is a particular
feature of the A2 quark model.
Hence, predictions for $\cos \beta_1$ with the $^3D_1$ component of
the atomic $^3SD_1$ state included are strongly model dependent and
should not be overinterpreted.
Generally, inclusion of the D-wave component in the form dictated by the
quark model tends to increase the values of the interference terms.
We also investigated the sensitivity of the interference term $\cos\beta _J$
to the range of the initial state form factor, expressed by the coefficient
$d_{A2}$.
Although the absolute values of the initial state interaction coefficients
${\cal B}(I,J)$ depend sensitively on the specific value of $d_{A2}$,
variation of $d_{A2}$ by up to 50\% has little influence on both the sign
and the size of $\cos \beta_J$.
Thus, the predictions for the interference terms $\cos \beta_J$ in all three
potential models considered are fairly independent of the specific
annihilation range of the $N\bar N$ initial state.
The models used for describing the $N\bar N$ initial state interaction
in protonium are characterized by a state independent, complex
optical potential due to annihilation.
Potentials of this type reproduce the low-energy $p\bar p$ cross sections
and protonium observables, such as energy shifts and widths, fairly well.
A more advanced fit \cite{pignone}
to $N\bar N$ scattering data, in particular to the
analysing powers for elastic and charge-exchange scattering, requires
the introduction of an explicit state and energy dependence in the
phenomenological short range part of the $N\bar N$ interaction.
At present, the latter $N\bar N$ potential \cite{pignone}
has not been applied to the protonium
system; hence the model predictions of Table \ref{tab3} should be regarded
as a first estimate for the $p\bar p -n\bar n$ mixing mechanism in the
$N\bar N$ annihilation region.
\subsection{Isospin interference from data}
\label{isospind}
The VDM approach allows us to relate the branching ratios of radiative
annihilation modes to branching ratios of final states containing one or two
vector mesons.
Using these measured branching ratios in Eqs. (\ref{branpg}) and
(\ref{branrg}) - (\ref{brangg}) we can
extract the interference terms $\cos \beta_J$ directly
from experiment.
However, conclusions on the sign and size of the interference terms
strongly depend on the choice of the kinematical form factor
$f(X_1, X_2)$, $X_1$ and $X_2 = \gamma$ or meson, entering in the different
expressions.
A first analysis \cite{Ams93} for determining the interference terms from data
was performed by the Crystal Barrel Group, assuming a form factor of
\cite{Hippel72}
\begin{equation}
f(X_1 , X_2 ) = k \left( { (kR)^2 \over 1 + (kR)^2 } \right) ~,
\label{hippel}
\end{equation}
where k is the final state c.m. momentum and the interaction range is chosen
as $R = 1.0 ~fm$.
This form factor is appropriate for small momenta $k$,
taking into account the centrifugal barrier effects near threshold.
However, for radiative decays, with high relative momenta in the final state,
the choice of Eq. (\ref{van}) is more appropriate:
it contains an exponential which restricts
the importance of each decay channel to the energy region near threshold.
This can be regarded as a manifestation of multichannel unitarity,
that is the contribution of a given decay channel cannot grow linearly with
k (as in the form of Eq. (\ref{hippel})),
since other channels open up and compete for
the available flux, subject to the unitarity limit.
Moreover, the form factor of Eq. (\ref{van}) has a sound phenomenological
basis in $N\bar N$ annihilation analyses; for a more detailed discussion see,
for example, Ref. \cite{Amsler97}.
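As a rough numerical illustration of this sensitivity (a sketch, not part of
the analysis of this paper), one can compare the weight the two prescriptions
assign to a high-momentum radiative channel relative to a hadronic one. The
meson masses are indicative values, and the exponent argument
$\sqrt{2\sqrt{s}\,k}$, which reduces to $\sqrt{s-m_X^2}$ for a photon recoiling
against a meson $X$, is our simplifying assumption:

```python
import math

HBARC = 0.1973  # GeV*fm, converts momenta in GeV to inverse femtometres

def cm_momentum(sqrt_s, m1, m2):
    """Final-state c.m. momentum for total energy sqrt(s), all in GeV."""
    s = sqrt_s ** 2
    lam = (s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)
    return math.sqrt(lam) / (2.0 * sqrt_s)

def f_exponential(k, sqrt_s, A=1.2):
    """Eq. (van)-type factor: k * exp(-A * sqrt(2 sqrt(s) k)), A in GeV^-1.
    The exponent equals sqrt(s - m_X^2) when a photon recoils against X."""
    return k * math.exp(-A * math.sqrt(2.0 * sqrt_s * k))

def f_barrier(k, R_fm=1.0):
    """Eq. (hippel)-type factor: k * (kR)^2 / (1 + (kR)^2), with R = 1 fm."""
    kR = k * R_fm / HBARC
    return k * kR ** 2 / (1.0 + kR ** 2)

sqrt_s = 2.0 * 0.938                       # p pbar annihilation at rest
k_gamma = cm_momentum(sqrt_s, 0.0, 0.135)  # gamma pi0 channel, ~0.93 GeV
k_rho = cm_momentum(sqrt_s, 0.770, 0.135)  # rho0 pi0 channel,  ~0.77 GeV

# Relative weight of the radiative vs the hadronic channel:
ratio_exp = f_exponential(k_gamma, sqrt_s) / f_exponential(k_rho, sqrt_s)
ratio_bar = f_barrier(k_gamma) / f_barrier(k_rho)
print(f"exponential: {ratio_exp:.2f}, barrier: {ratio_bar:.2f}")
```

In this sketch the centrifugal-barrier form enhances the high-momentum
radiative channel relative to the exponential form (ratios of roughly 1.2
versus 1.0), which is one way to see why the extracted values of the
interference terms shift between the two prescriptions.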
Extracted values for the interference terms $\cos \beta _J$ for
different J and different prescriptions for the kinematical form factor
are given in Table \ref{tab4}.
We also include there a third choice for the
kinematical form factor (Eq. (\ref{A13})), as deduced from the A2 quark model
description of the $N\bar N$ annihilation process.
Although finite size effects of the hadrons are included here
through the harmonic oscillator ansatz for the hadron wave functions,
this form factor is again useful only for low relative momenta $k$.
For the results of Table \ref{tab4} we use the measured branching ratios of
$p\bar p \to \pi^0 \rho^0$ \cite{Chiba88}, $\pi^0 \omega, ~\eta \omega,
~\omega \omega , ~\eta^{\prime} \omega$ \cite{Amsler93},
$\eta \rho$ \cite{Adiels89} or \cite{Amslermyhrer91}, $\rho \omega $
\cite{Bizarri69} and $\eta^{\prime} \rho$ \cite{Amsler97}.
Values for $\cos \beta_J$ using the phase space factor of Eq. (\ref{hippel})
are directly taken from the original analysis of Ref. \cite{Ams93}.
Error estimates for the other entries in Table \ref{tab4} assume statistical
independence of the measured branching ratios.
For the radiative decay channel $\eta^{\prime} \gamma $ only an upper limit
for $cos \beta_1$ can be given.
For all three choices of the kinematical form factor, the extracted values of
$\cos \beta _J$ are consistent with the VDM assumption, as they lie within
the physical range.
However, as evident from Table \ref{tab4}, conclusions on sign and size of the
interference strongly depend on the form of the kinematical phase space factor.
For the preferred choice, i.e. Eq. (\ref{van}), we deduce destructive interference
for radiative annihilation from the $^3SD_1$ state, while for the $^1S_0$ state
the corresponding isospin amplitudes interfere constructively.
This is in contrast to the original analysis of Ref. \cite{Ams93}, where the
interference term is determined to be almost maximally destructive
for all channels considered.
Given the large uncertainties for $\cos \beta _J$ using the preferred
form factor,
the values deduced from data are at least qualitatively consistent
with the theoretical predictions of Table \ref{tab3}, indicating a
dominant $p\bar p$ component for the $^1S_0$ and a sizable $n\bar n$ component
for the $^3SD_1$ protonium wave function.
As discussed in Sec. \ref{isospint}, precise values for $\cos\beta_J $ are
rather sensitive to the isospin decomposition of the protonium wave function
in the annihilation region.
However, the current uncertainties in the experimental data must be
considerably reduced to allow a more quantitative
analysis of the isospin dependence of the $N\bar N$
interaction.
\subsection{Vector dominance model and the $p\bar p \to \gamma \phi$
transition}
\label{phi}
Measurements on nucleon-antinucleon annihilation reactions into
channels containing $\phi$ mesons indicate apparent violations
of the Okubo-Zweig-Iizuka (OZI) rule \cite{Amsler97}.
According to the OZI rule, $\phi$ can only be produced through its non-strange
quark-antiquark component, hence $\phi $ production should vanish for an
ideally mixed vector meson nonet.
Defining the deviation from the ideal mixing angle $\theta _0 =35.3^{\circ}$
by $\alpha = \theta - \theta_0$ and assuming the validity of the OZI rule,
one obtains the theoretically expected ratio of branching ratios
\cite{Dover92}:
\begin{equation}
R(X)=B( N \bar N \to \phi X )/B(N \bar N \to \omega X) = \tan^2 \alpha
\approx 0.001 - 0.003
\label{ratiox}
\end{equation}
where X represents a non-strange meson or a photon.
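For orientation, the quoted range corresponds to a deviation angle of only a
few degrees:
\begin{equation}
\alpha = 2^{\circ} \;\Rightarrow\; \tan^2 \alpha \approx 1.2 \cdot 10^{-3}~,
\qquad
\alpha = 3^{\circ} \;\Rightarrow\; \tan^2 \alpha \approx 2.7 \cdot 10^{-3}~.
\end{equation}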
Recent experiments \cite{Amsler97} have provided data on the
$\phi /\omega $ ratios
which are generally larger than the standard estimate of Eq. (\ref{ratiox}).
The most notable case is the $\phi \gamma $ channel
for $p\bar p$ annihilation in liquid hydrogen \cite{Ams95}, where data
show a dramatic violation of the OZI rule of up to two orders of
magnitude, that is $R(X = \gamma ) \approx 0.3$.
Substantial OZI rule violations in the reactions $p\bar p \to X \phi$
can possibly be linked to the presence of strange quark components
in the nucleon \cite{Ellis95,Gutsche97}.
However, apparent OZI rule violations can also be generated by conventional
second order processes, even if the first order term corresponds to a
disconnected quark graph \cite{Gortch96,Marku97}.
In Refs. \cite{Marku97,Locher94} the apparently large value for the
branching ratio $B(\gamma \phi)$
for $p\bar p$ annihilation in liquid hydrogen is explained within
the framework of the VDM.
Using the experimental rates of $B(\rho \phi) =(3.4 \pm 1.0)
\times 10^{-4}$ and $B(\omega \phi) =(5.3 \pm 2.2) \times 10^{-4}$
\cite{Reifen91} as inputs, the branching ratio $B(\gamma \phi )$
is given in the VDM by:
\begin{equation}
B(\gamma \phi ) = {f(\gamma , \phi ) \over f(\omega V)}
\left( 12.0 + \cos\beta_0 \cdot 8.5 \right) \cdot 10^{-7}
\label{phirel}
\end{equation}
Since the $\phi \omega $ and $\phi \rho $ channels also violate
the OZI rule estimate, with $R(X= \rho) \approx R(X=\omega ) \approx 10^{-2}$
\cite{Amsler97},
standard $\omega -\phi$ mixing cannot be the dominant
mechanism for the production of these channels.
Hence the formalism developed in Sec. \ref{form} cannot be used to determine
the phase structure of the interference term $\cos \beta_0$ for
$B(\gamma \phi)$.
Consequently, the interference term $cos \beta_0$ extracted in the
$\gamma \omega $ reaction is not necessarily consistent with that of
the $\gamma \phi$ decay channel.
For maximal constructive interference $(\cos\beta_0 = 1)$ one obtains
an upper limit for $B(\gamma \phi)$ in the VDM calculation of:
\begin{eqnarray}
B( \gamma \phi) &=& 2.7 \times 10^{-5} \quad {\rm for~}
f=k^3~\cite{Locher94} \nonumber \\
B( \gamma \phi) &=& 1.5 \times 10^{-6} \quad {\rm for~}
f~ {\rm given ~ by~ Eq.~} (\ref{van})
\label{upper}
\end{eqnarray}
This is to be compared with
the experimental result $B(\gamma \phi) = (2.0 \pm 0.4)\times 10^{-5}$
\cite{Ams95}.
The possibility of explaining the experimental value of $B(\gamma \phi)$ in
the VDM again depends strongly on the choice of the kinematical form factor.
In Ref. \cite{Locher94} the form $f=k^3$ is used, appropriate for relative
momenta $k$ near threshold, resulting in an upper limit which lies slightly
above the observed rate for $B(\gamma \phi)$.
With the choice of Eq. (\ref{van}) the upper value underestimates
the experimental number by an order of magnitude.
When we extract the interference terms $\cos \beta_J$ from the conventional
radiative decay modes with the choice $f= k^3$, we obtain:
$\cos \beta_1 = -1.32 $ for $\pi \gamma$, $\cos \beta_1 =
-0.94 $ for $\eta \gamma$ and $\cos \beta_0 =-0.90 $ for
$\omega \gamma $.
Hence, a near threshold prescription for the kinematical form factor
in the VDM
leads to maximal destructive interference for all channels
considered,
exceeding even the physical limit in the case of $\pi \gamma $.
This would indicate a nearly pure $n\bar n$ component in the annihilation
range of the protonium wave functions for both the J=0 and 1 states.
These results are in strong conflict with the theoretical expectations
for $\cos \beta_J$ reported in Sec. \ref{isospint},
where at least
qualitative consistency is achieved with the kinematical form factor
of Eq. (\ref{van}).
Recent experimental results \cite{evang} for the reaction cross section
$p\bar p \to \phi \phi$ exceed the simple OZI rule estimate by about two orders
of magnitude.
Therefore, in the context of VDM an additional sizable contribution to the
branching ratio $B(\gamma \phi)$ might arise, although off-shell,
from the $\phi\phi$ intermediate state.
With an estimated cross section for $p\bar p \to \omega \omega $ of about
0.5 mb in the energy range of the $\phi \phi$ production experiment, the ratio
of cross sections is given as $\sigma_{\phi \phi}/\sigma_{\omega \omega}
\approx 3.5~\mu b / 0.5 ~mb$ \cite{evang}.
Given the measured branching ratios of $\omega \omega $ \cite{Amsler93}
and $\omega \phi$ \cite{Reifen91} we can simply estimate the ratio of
strong transition matrix elements for annihilation into $\phi \phi$ and
$\omega \phi$ from protonium of $\sqrt{B(\phi \phi )/B( \omega \phi )}
\approx 0.43 $.
For this simple order of magnitude estimate we assume that $\sigma_{\phi \phi}/
\sigma_{\omega \omega}$ is partial wave independent and phase space corrections
are neglected.
With the VDM amplitude $A_{\phi \gamma} ={ \sqrt{2} \over 3} A_{\rho \gamma}$
we obtain an upper limit of $B(\gamma \phi) \approx 2.3 \times 10^{-6}$ with
f given by Eq. (\ref{van}), where the contribution of the $\phi \phi $
intermediate state is now included.
Barring an even more dramatic enhancement of the $\phi \phi$ channel
for $N\bar N$ S-wave annihilation, the inclusion of the $\phi\phi$ intermediate
state does not alter the conclusions drawn from the results of Eq.
(\ref{upper}).
Hence,
the large observed branching ratio for $\gamma \phi$ remains unexplained
in the framework of VDM.
\section{SUMMARY AND CONCLUSIONS}
\label{sum}
We have performed a detailed analysis of radiative $p\bar p$ annihilation
in the framework of a two-step process, that is $p\bar p$ annihilates
into two-meson channels containing a vector meson which is subsequently
converted into a photon via the VDM.
Both processes are consistently formulated in the quark model, which allows
us to uniquely identify the source of the isospin interference present in
radiative transitions.
Based on the separability of the transition amplitude $N\bar N \to VM$,
the sign and size of the interference terms can be linked to the dominance
of either the $p\bar p$ or the $n\bar n$ component of the 1s protonium wave
function in the annihilation region; this constitutes a direct test
of the isospin dependence of the $N \bar N$ interaction.
In a first step we directly applied the quark model in a simplified
phenomenological approach to the radiative $N\bar N $ annihilation process.
Model predictions are consistent with data and confirm the usefulness
of VDM in the analysis of radiative transitions.
In a second step we discussed the sign and size of the interference term as
expressed by $\cos \beta_J$ ($J=0,1$).
Direct predictions of $\cos \beta_J$, as calculated for different
potential models of the $N\bar N$ interaction, are qualitatively consistent,
in that a sizable constructive interference is deduced for radiative
annihilation from the atomic $^1S_0$ state, while for the $^3S_1$ state the
interference term is vanishing or destructive.
These predictions should be tested with more realistic parameterizations
of the $N\bar N$ interaction \cite{pignone}.
Extraction of the interference effect from data is greatly influenced by
the choice of the kinematical form factor associated with the transition.
Values of $\cos \beta_J$ determined for the preferred form of Eq. (\ref{van})
are qualitatively consistent with our theoretical study;
however, a more quantitative analysis is restricted by the present
uncertainties in the experimental data.
Within the consistent approach emerging from the analysis of non-strange
radiative decay modes of protonium, an explanation of the measured
branching ratio for the OZI suppressed reaction $p\bar p \to \gamma \phi$
cannot be achieved.
New mechanisms, linked to the strangeness content of the
nucleon, may possibly be responsible for the dramatic violation of the
OZI rule in the $\gamma \phi$ final state.
\begin{acknowledgments}
This work was supported by a grant of
the Deutsches Bundesministerium
f\"ur Bildung und Forschung (contract No. 06 T\"u 887) and by
the PROCOPE cooperation project (No. 96043).
We also acknowledge the generous help of Jaume Carbonell for providing
us with the protonium wave functions used in this paper.
\end{acknowledgments}
\newpage
\begin{appendix}
\section{Nucleon-antinucleon annihilation into two mesons in the
quark model}
\label{appA}
In describing the annihilation process of $N\bar N \to VM$ where $V=\rho,~
\omega$ and $M=\pi^0,~\eta,~\rho,~\omega$ and $\eta^{\prime}$
we use the A2-model of Fig. 2a.
Detailed definitions and derivation of this particular quark model
are found in Refs. \cite{Dover92,Maruy87}.
The initial state $N\bar N$ quantum numbers are defined by $i={ILSJM}$
(I is the isospin, L is the orbital angular momentum,
S is the spin and J
is the total angular momentum with projection M).
For the final two meson state $VM$ we specify the angular momentum quantum
numbers, with $j_{1,2}$ indicating the spin of mesons 1 and 2, $j$ the total
spin coupling and $l_f$ the relative orbital angular momentum.
For the transitions of interest the quantum numbers are restricted to
L=0 and 2, corresponding to $p\bar p$ annihilation at rest in liquid hydrogen,
$j_1 =1$, representing the
vector meson, and $l_f=1$, given by parity conservation.
Taking plane waves for the initial and final state wave functions with
relative momenta $\vec p$ and $\vec k$, respectively, the transition matrix
element is given in a partial wave basis as:
\begin{eqnarray}
T_{N\bar N(i) \to V M} & = &< V (j_1) M(j_2) l_f \vert
{\cal O}_{A2} \vert N\bar N (i)> \nonumber \\
&=& \sum_j < j_1 j_2 m_1 m_2 \vert j m> <j l_f m m_f \vert J M > \nonumber \\
&& \cdot \vert \vec k \vert Y_{l_f m_f}(\hat k) Y_{LS}^{JM~\dagger} (\hat p)
<VM (j_1, j_2 ,j, l_f)\vert \vert {\cal O}_{A2} \vert
\vert N\bar N (i)>.
\end{eqnarray}
The reduced matrix element of the two-meson transition is given in the
A2 model as:
\begin{equation}
<VM\vert \vert {\cal O}_{A2} \vert \vert N\bar N (i)> =
F_{L,l_f}\, p^L \exp\left(-d_{A2}\left({3\over 4} k^2 + {1\over 3} p^2\right)\right)
<i\to VM>_{SF} ~.
\label{A2}
\end{equation}
The factor $F_{L,l_f}$ is a positive geometrical constant depending on the size
parameters of the hadrons for given orbital angular momenta $L$ and $l_f$.
The exponentials arise from the overlap of harmonic oscillator wave functions
used for the hadrons with the coefficient $d_{A2}$ depending on the size
parameters $R_N$ and $R_M$ of the nucleon and meson:
\begin{equation}
d_{A2} = {R_N^2 R_M^2 \over 3R_N^2 + 2 R_M^2}~.
\end{equation}
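As an illustrative numerical check (the size parameters are our assumption,
not fitted values), the ratio $R_N^2/R_M^2 = 0.6$ quoted in
Sec. \ref{isospint} gives
\begin{equation}
d_{A2} = {0.6\, R_M^4 \over (1.8 + 2)\, R_M^2} \approx 0.16\, R_M^2~,
\end{equation}
so that $d_{A2} = 0.12 ~fm^2$ corresponds to $R_M \approx 0.87 ~fm$ and
$R_N \approx 0.68 ~fm$, i.e. hadron sizes of the expected order.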
The matrix elements $<i\to VM>_{SF}$ are the spin-flavor weights
of the different transitions listed in Table \ref{tab5}.
Note that with the flavor part of the vector mesons defined as
\begin{equation}
\rho^0 = {1\over \sqrt{2}} (u\bar u - d\bar d) ,~~
\omega = {1\over \sqrt{2}} (u\bar u + d\bar d)
\label{A4}
\end{equation}
the matrix elements $<i \to \rho M >$ and $<i \to \omega M >$ have the same
sign.
For the tensor force coupled channel $^3SD_1$ the spin-flavor matrix elements
are simply related by a proportionality factor, dependent on the isospin
channel, but independent of the $VM$ combination, that is:
\begin{eqnarray}
&F_{L=2, l_f=1} <^{2I+1, 3}D_1\to VM>_{SF}
= C(I) \cdot F_{L=0, l_f=1} <^{2I+1, 3}S_1\to VM>_{SF}~, \nonumber \\
&C(I)= \left\{ \begin{array}{ll}
-\frac{2\sqrt{2}}{5} & (I=0) \\
\;\;\;\frac{2\sqrt{2}}{13} & (I=1)
\end{array} \right\}
\cdot \left ( -\frac{1}{3}
\frac{(R_N^2+R_M^2)R_N^2}{\frac{3}{2}R_N^2+R_M^2} \right ) ~.
\end{eqnarray}
In coordinate space the protonium wave function, including tensor coupling
and isospin mixing, is written as:
\begin{equation}
\Psi_{p\bar p} (J,S) = \sum_{L,I} \psi_{ILSJ} (r) Y_{LS}^{JM} (\hat r )~.
\end{equation}
Inserting this wave function into the expression
for the transition matrix element results in:
\begin{eqnarray}
T_{N\bar N(IJ) \to V M} & = &
\sum_j < j_1 j_2 m_1 m_2 \vert j m> <j l_f m m_f \vert J M > \nonumber \\
&& \cdot \vert \vec k \vert Y_{l_f m_f}(\hat k)
F (k) <i\to VM>_{SF} {\cal B} (I,J)~, \nonumber \\
F( k )& \equiv & \exp\left(-{3\over 4}\, d_{A2} k^2\right)~.
\label{A8}
\end{eqnarray}
The distortion due to initial state interaction is contained in the
coefficient ${\cal B}(I,J)$, which is simply the overlap of the isospin
decomposed protonium wave function with the effective initial form factor
arising in the transition.
By taking the Fourier transform of the initial state form factor
contained in Eq. (\ref{A2}),
these coefficients for the 1s atomic states of protonium are defined as:
\begin{eqnarray}
{\cal B}(I,J=0)&=&F_{L=0, l_f =1}\left( 2d_{A2}/3 \right)^{-3/2} \int_0^{\infty}
dr r^2 \exp(-3 r^2/(4d_{A2})) \psi_{I 0 0 0} (r)~~{\rm for}~^1S_0~,
\nonumber \\
{\cal B}(I,J=1)&=&F_{L=0, l_f =1}\left\{ \left( 2d_{A2}/3 \right)^{-3/2}
\int_0^{\infty}
dr r^2 \exp(-3 r^2/(4d_{A2})) \psi_{I 0 1 1} (r) \right.
\nonumber \\
&& \left. - C(I)
\left( 2d_{A2}/3 \right)^{-7/2} \int_0^{\infty}
dr r^4 \exp(-3 r^2/(4d_{A2})) \psi_{I 2 1 1} (r) \right\}~~{\rm for}~
^3SD_1~.
\label{A10}
\end{eqnarray}
The partial decay width for the annihilation of a protonium state with
total angular momentum J into two mesons $VM$ is given by
\begin{equation}
\Gamma_{p\bar p\to VM} (I,J) = 2\pi { E_V E_M \over E} k
\int d\hat k \sum_{m_1 m_2 m_f} \vert T_{N\bar N(IJ) \to V M} \vert^2~,
\end{equation}
where $E$ is the total energy and $E_{V,M} =\sqrt{m_{V,M}^2 + \vec k^2}$ is the
energy of the respective outgoing meson, with $\vert \vec k \vert$
fixed by energy conservation.
With the explicit form of the transition amplitude of Eq. (\ref{A8}),
the partial decay width is written as:
\begin{equation}
\Gamma_{p\bar p\to VM} (I,J) = f(V,M) <i\to VM>_{SF}^2 \vert {\cal B} (I,J)
\vert^2
\end{equation}
with the kinematical phase space factor defined by:
\begin{equation}
f(V,M) = 2\pi {E_V E_M\over E} k^3 \exp\left( - \frac{3}{2} d_{A2} k^2 \right) ~.
\label{A13}
\end{equation}
Taking an admixture of initial states given by their statistical weight,
the branching ratio of S-wave $p\bar p$ annihilation into the two meson
final state $VM$ is given by:
\begin{equation}
B( V M) =
B(p\bar p \to V M) = \sum_{J=0,1} {(2J+1) \Gamma_{p\bar p \to VM}
(I,J) \over 4 \Gamma_{tot} (J)} ~.
\label{A14}
\end{equation}
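The statistical average above is simple arithmetic over the two $1s$ hyperfine levels, with weights $(2J+1)/4$. A minimal sketch in code (the widths fed in below are invented placeholders, not values from this analysis):

```python
def branching_ratio(gamma_partial, gamma_tot):
    """B(VM) = sum_{J=0,1} (2J+1) * Gamma(I,J) / (4 * Gamma_tot(J)).

    gamma_partial and gamma_tot map J to widths in a common unit;
    the isospin label I is fixed by the final state and left implicit.
    """
    return sum((2 * J + 1) * gamma_partial[J] / (4.0 * gamma_tot[J])
               for J in (0, 1))

# Placeholder widths: the 1S0 level enters with weight 1/4, the 3S1 level 3/4.
B = branching_ratio({0: 2.0, 1: 1.0}, {0: 100.0, 1: 100.0})
```

As a consistency check on the weights, if each level decayed entirely into $VM$ the branching ratio comes out to one.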
\newpage
\section{Vector meson - photon conversion in the quark mo\-del}
\label{appB}
The transition $V \to \gamma$ (Fig. 2b), where $V= \rho$ or $\omega $,
can be formulated
in the quark model, and related to the physical process
of $V \to e^+ e^-$.
An explicit derivation of the latter process can be found in Ref.
\cite{Yaouanc88}.
We just quote the main results necessary for the discussion of the
radiative decays of protonium.
The $Q\bar Q \gamma$ interaction is defined by the Hamiltonian
\begin{equation}
H_I = e \int d^3x j^{\mu}_{em} (\vec x ) A_{\mu} ( \vec x )
\end{equation}
with the quark current
\begin{equation}
j^{\mu}_{em} (\vec x ) = \bar q( \vec x) Q \gamma^{\mu} q(\vec x)
\end{equation}
where $q(\vec x) $ is the quark field and $A_{\mu}(\vec x)$ the
electromagnetic field given in a free field expansion.
For emission of a photon with momentum $\vec k$, energy $k^0$
and polarization
$\epsilon_{\mu}$ from a vector meson with momentum $\vec p_V$ we obtain:
\begin{equation}
<\gamma ( \vec k , \epsilon_{\mu}) \vert H_I \vert V (\vec p_V )>
= \delta (\vec k -\vec p_V ) T_{V \to \gamma }
\end{equation}
with
\begin{equation}
T_{V \to \gamma } = {e (2\pi)^{3/2} \over (2k^0)^{1/2} }~
\epsilon^{\ast}_{\mu}
< 0\vert j_{em}^{\mu} (\vec x = \vec 0 ) \vert V >~.
\end{equation}
For the conversion of a vector meson V into a real photon only the spatial
part of the current matrix element contributes.
Using standard techniques for the evaluation of the current matrix element
we obtain
\begin{equation}
T_{V \to \gamma } = {e \sqrt{6} \over (2k^0)^{1/2} } ~ {\vec \epsilon \cdot
\vec S}~
Tr(Q\varphi_V )~ \psi (\vec r = 0 )
\end{equation}
with the quark charge matrix Q and the polarization $\vec S$ of the vector
meson.
The $Q\bar Q$ flavor wave function $\varphi_V$ is consistently defined
as in Eq. (\ref{A4}) of Appendix \ref{appA} and contributes to the transition amplitude:
\begin{equation}
Tr(Q\varphi_V )
= \left\{ \begin{array}{*{2}{c}}
{1\over \sqrt{2}} & \, {\rm for }~\rho^0 \\
{1\over 3 \sqrt{2}} & \, {\rm for }~\omega
\end{array}\right. ~.
\end{equation}
The spatial part of the $Q\bar Q$ wave function
at the origin $\psi (\vec r = 0 )$
is given within the harmonic oscillator description as
$\vert \psi (0) \vert^2 = ( \pi R_M^2 )^{-3/2}$,
where the oscillator parameter $R_M$ is related
to the rms-radius as $< r^2 >^{1/2} =\sqrt{3/8} R_M$.
Extending the outlined formalism to the physical decay process $V \to
e^+ e^-$, the decay width is given as \cite{Yaouanc88}
\begin{equation}
\Gamma_{V\to e^+ e^-} = {16 \pi \alpha^2 \over m_V^2} \left\{
Tr ( Q \varphi_V )\right\}^2 \vert \psi ( 0 ) \vert^2
\end{equation}
with $\alpha = e^2/(4\pi)$ and mass $m_V$ of the vector meson.
The latter result can be compared to the one obtained in the vector--dominance
approach, which yields for example \cite{Sakurai69}
\begin{equation}
\Gamma_{\rho^0 \to e^+ e^-} = {4 \pi \over 3} {\alpha^2 m_{\rho}\over
f_{\rho}^2 }
\end{equation}
with the decay constant $f_{\rho }$.
Hence we can identify
\begin{equation}
\vert \psi ( 0 ) \vert^2 = { m_{\rho}^3 \over 6 f_{\rho}^2 }
\end{equation}
which, with the experimental result $\Gamma_{\rho^0 \to e^+ e^- } = 6.77$ keV,
yields $f_{\rho} =5.04$ or, equivalently, $R_M = 3.9~{\rm GeV}^{-1}$, very close to the preferred value obtained
in the analysis of strong decays of mesons.
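The numerical chain leading from the leptonic width to $f_\rho$ and $R_M$ can be retraced directly. The sketch below assumes the width is quoted in keV and takes $m_\rho = 770$ MeV and $\hbar c = 0.19733$ GeV fm as inputs (illustrative choices, not stated in the text):

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
M_RHO = 0.770           # rho mass in GeV (assumed input)
GAMMA_EE = 6.77e-6      # Gamma(rho0 -> e+ e-) in GeV, i.e. 6.77 keV
HBARC = 0.19733         # GeV fm, to convert GeV^-1 to fm

# Vector dominance: Gamma = (4 pi / 3) alpha^2 m_rho / f_rho^2
f_rho = math.sqrt(4.0 * math.pi / 3.0 * ALPHA ** 2 * M_RHO / GAMMA_EE)

# Combine |psi(0)|^2 = m_rho^3 / (6 f_rho^2) with the harmonic-oscillator
# value |psi(0)|^2 = (pi R_M^2)^(-3/2) to extract the oscillator parameter.
psi0_sq = M_RHO ** 3 / (6.0 * f_rho ** 2)
R_M = math.sqrt(psi0_sq ** (-2.0 / 3.0) / math.pi)   # in GeV^-1

# rms radius from <r^2>^(1/2) = sqrt(3/8) R_M
rms_fm = math.sqrt(3.0 / 8.0) * R_M * HBARC
```

The last line converts $R_M$ into the meson rms radius via $\langle r^2\rangle^{1/2} = \sqrt{3/8}\, R_M$, giving roughly half a fermi.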
Hence, the matrix element for the conversion of a vector meson into a real
photon is alternatively written as:
\begin{equation}
T_{V \to \gamma } = {\vec \epsilon \cdot \vec S}~
Tr(Q\varphi_V ) ~ {e~ m_{\rho}^{3/2} \over (2k^0)^{1/2} f_{\rho} } ~.
\end{equation}
\newpage
\section{Matrix elements and decay width in radiative annihilation}
\label{appC}
In the following we present details for the evaluation of the matrix element
of Eq. (6),
which is explicitly written as:
\begin{eqnarray}
T_{N\bar N(I J) \to V M \to \gamma M} &
= & \sum_{m_1} < j_1 j_2 m_1 m_2 \vert j m> <j l_f m m_f \vert J M >
\vert \vec k \vert Y_{l_f m_f}(\hat k)
\nonumber \\
&& \cdot
< V M \vert \vert {\cal O}_{A2} \vert \vert N \bar N (I J ) >
\vec \epsilon \cdot \vec S (m_1) A_{V\gamma}
\label{C1}
\end{eqnarray}
where $l_f =1$ and $j=1$, for the processes considered.
The relative final state momentum $\vec k$ and the photon
polarization $\vec \epsilon $ are written in a spherical basis as:
\begin{equation}
\vert \vec k \vert Y_{1 m_f}(\hat k) =\sqrt{3 \over 4\pi}k_{m_f}
~~{\rm and}~~
\vec \epsilon \cdot \vec S (m_1) = \epsilon_{m_1}
\end{equation}
which together with Eq. (\ref{C1}) leads to the result:
\begin{eqnarray}
&T_{N\bar N(I J) \to V M \to \gamma M} =
\sqrt{3\over 4\pi } A_{V\gamma}
< V M \vert \vert {\cal O}_{A2} \vert \vert N \bar N (I J ) >
{i\over \sqrt{2} }
\nonumber \\
&\cdot \left\{ \begin{array}{*{2}{c}}
(\epsilon \times \vec k )_M & \, {\rm for}~j_2=0,~J=1~(M=\pi^0 ,~\eta ) \\
{ (-)^{m_2} \over \sqrt{3}} (\epsilon \times \vec k)_{-m_2}
& \, {\rm for}~j_2=1,~J=0~(M=\rho^0 ,~\omega ) \end{array}\right.
\end{eqnarray}
Consequently, for the process $N\bar N \to V_1 V_2
\to \gamma_1 \gamma_2 $
the transition matrix element is determined as:
\begin{eqnarray}
T( N \bar N (I J) \to V_1 V_2 \to \gamma_1 \gamma_2 )
=\sum_{m_2} \epsilon_{m_2} (2) A_{V_2\gamma } T(N\bar N\to
V_1 V_2 \to \gamma_1 V_2) \nonumber \\
= {1\over \sqrt{4 \pi}} A_{V_1 \gamma }A_{V_2 \gamma }
< V_1 V_2 \vert \vert {\cal O}_{A2} \vert \vert N \bar N (I J ) >
{i\over \sqrt{2}} \left( \vec \epsilon (1) \times \vec k \right)
\cdot \vec \epsilon (2)
\end{eqnarray}
where $\vec \epsilon (i)$ refers to the polarization of photon $i$.
The derivation of the decay widths for the radiative transitions is
exemplified here for the process $N \bar N \to \gamma \pi^0 $.
The corresponding matrix element is obtained by a coherent sum of intermediate
vector meson states $\rho $ and $\omega $ as:
\begin{eqnarray}
& T_{N\bar N(J) \to \gamma \pi^0} =
T_{^{13}SD_1 \to \rho^0 \pi^0 \to \gamma \pi^0} +
T_{^{33}SD_1 \to \omega \pi^0 \to \gamma \pi^0}
\nonumber \\
&=\sqrt{3\over 4\pi } {i\over \sqrt{2}} (\vec \epsilon \times
\vec k)_M
\left\{ A_{\rho^0 \gamma } < \rho^0 \pi^0 \vert \vert
{\cal O}_{A2} \vert \vert ^{13}SD_1 > +
A_{\omega \gamma } < \omega \pi^0 \vert \vert
{\cal O}_{A2} \vert \vert ^{33}SD_1 > \right\}~.
\end{eqnarray}
The decay width for $N\bar N \to \gamma \pi^0 $ is then:
\begin{equation}
\Gamma_{N\bar N \to \gamma \pi^0} =
2 \pi \rho_f \sum_{\epsilon_T , M} {1\over 2J+1} \vert
T(N\bar N(J) \to \gamma \pi^0) \vert^2
\end{equation}
with the final state density
\begin{equation}
\rho_f = {E_{\pi^0} k^2 \over E_{N\bar N} } \int d\hat k ~,
\end{equation}
$\vert \vec k \vert =k $, and the sum is over the two transverse photon
polarizations
$\epsilon_T$ and the total projection M of the $N\bar N$ protonium with total
angular momentum J.
Using
\begin{equation}
\sum_{\epsilon_T , M} \int d\hat k \vert (\vec \epsilon \times \vec k )
\vert ^2 = 8 \pi k^2
\end{equation}
together with the expression for the reduced matrix
element in Eq. (2), we finally obtain:
\begin{eqnarray}
&&\Gamma_{N\bar N \to \gamma \pi^0} = \nonumber \\
&& = f(\gamma , \pi^0 ) A_{\rho \gamma }^2
\vert < ^{13}SD_1 \to \rho^0 \pi^0 >_{SF} {\cal B} (0,1) +
{1\over 3} < ^{33}SD_1 \to \omega \pi^0 >_{SF} {\cal B} (1,1) \vert^2
\end{eqnarray}
with the kinematical phase space factor defined in analogy to
Eq. (\ref{A13}) as:
\begin{equation}
f(\gamma , M) = 2\pi {E_M k^4 \over E}
\exp\left( - \frac{3}{2} d_{A2} k^2 \right)~.
\label{C10}
\end{equation}
\end{appendix}
\newpage
\section{Introduction}
In the past few years many authors, working in different fields, have
shown great interest in the so--called {\it quantum chaos} or
{\it quantum chaology}, i.e. the signature in the quantal systems
of the chaotic properties of the corresponding ($\hbar \to 0$)
semiclassical Hamiltonian [1--4]. Incidentally, as stressed by Berry [5],
"the semiclassical limit $\hbar \to 0$ and the long time limit
$t\to\infty$ are not interchangeable --
the origin of the $(\hbar , t^{-1})$ plane is mightily singular".
\par
The subject is very wide but, for reasons of space, we
focus our attention only on the connection between the
mean--field approximation and the onset of chaos.
For a quantum system with discrete spectrum, dynamical chaos is possible
only as a transient with lifetime $t_H$, the so--called Heisenberg time,
which grows rapidly with the number of degrees of freedom.
Because $t_H$ can be very long for a many--body system, we suggest
that the {\it transient chaotic dynamics} of quantum states and
the related observables can be experimentally measured.
Moreover, when the mean--field
theory is a good approximation of the exact many--body problem,
one can use the nonlinear mean--field equations to estimate
the transient chaotic behaviour of the many--body system.
As a specific example, we consider the dynamics of a trapped
weakly--interacting Bose--Einstein condensate.
\section{Variational principle and mean--field approximation}
Let us consider a $N$--body quantum system with
Hamiltonian ${\hat H}$. The exact time--dependent Schr\"odinger equation
can be obtained by imposing the quantum least action principle on
the Dirac action
\begin{equation}
S= \int dt <\psi (t) | i\hbar
{\partial \over \partial t} - {\hat H} |\psi (t) > \; ,
\end{equation}
where $\psi$ is the many--body wavefunction of the system. Looking
for stationary points of $S$ with respect to variation of the conjugate
wavefunction $\psi^*$ gives
\begin{equation}
i\hbar {\partial \over \partial t}\psi = {\hat H}\psi \; .
\end{equation}
As is well known, it is usually impossible to obtain the exact solution
of the many--body Schr\"odinger equation and some approximation must be used.
\par
In the mean--field approximation the total wavefunction
is assumed to be composed of independent particles, i.e. it can be
written as a product of single--particle wavefunctions $\phi_j$.
In the case of identical fermions, $\psi$ must be antisymmetrized [6].
By looking for stationary action with respect to variation of a
particular single--particle conjugate wavefunction $\phi_j^*$ one finds
a time--dependent Hartree--Fock equation for each $\phi_j$:
\begin{equation}
i\hbar {\partial \over \partial t}\phi_j = {\delta \over \delta \phi_j^*}
<\psi | {\hat H}| \psi > = {\hat h} \phi_j \; ,
\end{equation}
where ${\hat h}$ is a one--body operator.
The main point is that, in general,
the one--body operator ${\hat h}$ is nonlinear. Thus
the Hartree--Fock equations are non--linear (integro--)differential
equations. These equations can give rise, in some cases,
to chaotic behaviour (dynamical chaos) of the mean--field wavefunction.
\section{Mean--Field Approximation and Chaos}
In the mean--field approximation the
mathematical origin of {\it dynamical chaos} resides in the nonlinearity
of the Hartree--Fock equations. These equations provide an approximate
description, the best independent--particle description, which
describes, for a certain time interval, the very complicated
evolution of the true many--body system.
Two questions then arise: \\
1) Does this chaotic behavior persist in time? \\
2) What is the best physical situation to observe this kind of nonlinearity?
\par
To answer the first question,
it should be stressed that quantum systems evolve according
to a linear equation, and this is an important feature which makes them
different from classical systems. Since the Schr\"odinger equation
is linear, so is any of its projections. The quantum time
evolution follows the classical one, including chaotic
behaviour, only up to $t_H$.
After that, in contrast to the classical dynamics,
we get localization (dynamical localization). The
Liouville equation, on the other hand, is linear in
classical and quantum mechanics. However, for bound
systems, the quantum evolution operator has a purely
discrete spectrum (therefore no long--term chaotic behaviour).
By contrast, the classical evolution operator
(Liouville operator) has a continuous spectrum
(implying and allowing chaos).
This means that persistent chaotic behaviour in the evolution
of the states and observables is not possible.
Loosely speaking, chaotic behaviour is possible in quantum mechanics only
as a transient with lifetime $t_H$ [7,8].
\par
The Heisenberg time, or break time, can be estimated from the Heisenberg
indetermination principle and reads
\begin{equation}
t_H \simeq {\hbar \over \Delta E} \; ,
\end{equation}
where $\Delta E$ is the mean energy level spacing and,
according to the Thomas--Fermi rule, $\Delta E \propto \hbar^N$,
where $N$ is the number of degrees of freedom, i.e. the dimension of the
configuration space. So, as $\hbar \rightarrow 0$, the Heisenberg time
diverges as
\begin{equation}
t_H \sim \hbar^{1-N} \; ,
\end{equation}
and it does so faster, the higher $N$ is [9].
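The $\hbar^N$ scaling of the mean level spacing follows from semiclassical (Thomas--Fermi) state counting; the intermediate step, standard but not spelled out above, reads as follows, with ${\cal N}(E)$ the number of states below energy $E$ and $\Omega(E)$ the enclosed classical phase--space volume:

```latex
{\cal N}(E) \simeq \frac{\Omega(E)}{(2\pi\hbar)^N}~, \qquad
\Delta E \simeq \left( \frac{d{\cal N}}{dE} \right)^{-1} \propto \hbar^N~,
\qquad
t_H \simeq \frac{\hbar}{\Delta E} \sim \hbar^{1-N}~.
```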
We observe that the limitation to persistent chaotic dynamics
in quantum systems does not apply if the spectrum of the Hamiltonian
operator ${\hat H}$ is continuous.
\par
Concerning the second question, it is useful
to remember that, in the thermodynamic limit,
i.e. when the number $N$ of particles tends to
infinity at constant density, the spectrum is, in general, continuous
and true chaotic phenomena are not excluded [10].
\par
We have seen that the Heisenberg time $t_H$ is very large for
systems with many particles. This fact suggests
that the {\it transient chaotic dynamics} of quantum states
and observables can be experimentally observed in many--body quantum systems.
Moreover, when the mean--field
theory is a good approximation of the exact many--body problem,
one can use the nonlinear mean--field equations to estimate
the properties of the transient chaotic dynamics.
\section{Nonlinear dynamics of a Bose condensate}
In this section we discuss the mean--field approximation
and the nonlinear dynamics for a system of trapped weakly--interacting
bosons in the same quantum state, i.e. a Bose--Einstein condensate [11].
In this case the Hartree--Fock equations reduce to only one equation,
the Gross-Pitaevskii equation, which describes the dynamics
of the condensate [12]. Nowadays, this equation is intensively studied
because of the recent experimental achievement of Bose--Einstein
condensation for atomic gases in magnetic traps at very low temperatures
(about $10^{-7}$ Kelvin) [13].
\par
The Hamiltonian operator of a system of $N$ identical
bosons of mass $m$ is given by
\begin{equation}
{\hat H}=\sum_{i=1}^N \Big( -{\hbar^2\over 2 m} \nabla_i^2
+ V_0({\bf r}_i) \Big) +
{1\over 2} \sum_{ij=1}^N V({\bf r}_i,{\bf r}_j) \; ,
\end{equation}
where $V_0({\bf r})$ is an external potential and $V({\bf r},{\bf r}')$
is the interaction potential.
In the mean-field approximation the totally symmetric
many--particle wavefunction of the Bose--Einstein condensate reads
\begin{equation}
\psi({\bf r}_1,...,{\bf r}_N,t) = \phi({\bf r}_1,t) ...
\phi({\bf r}_N,t) \; ,
\end{equation}
where $\phi ({\bf r},t)$ is the single particle wavefunction.
By using the quantum variational principle for the Dirac action
we get the equation
\begin{equation}
i\hbar {\partial \over \partial t}\phi ({\bf r},t)=
\Big[ -{\hbar^2\over 2m} \nabla^2
+ V_0({\bf r}) + (N-1)
\int d^3{\bf r}' V({\bf r},{\bf r}') |\phi ({\bf r}',t)|^2
\Big] \phi ({\bf r},t) \; ,
\end{equation}
which is an integro--differential nonlinear Schr\"odinger equation.
If the bosons are weakly interacting, it is possible to substitute
the true interaction with a pseudo--potential
$V({\bf r},{\bf r}') = g \delta^3 ({\bf r}-{\bf r}')$,
where $g={4\pi \hbar^2 a_s/m}$ is the coupling constant and $a_s$
the s--wave scattering length. In this way we obtain
the so--called Gross--Pitaevskii (GP) equation
\begin{equation}
i\hbar {\partial \over \partial t}\phi ({\bf r},t)=
\Big[ -{\hbar^2\over 2m} \nabla^2
+ V_0({\bf r}) + g(N-1) |\phi ({\bf r},t)|^2 \Big] \phi ({\bf r},t) \; .
\end{equation}
We now consider a triaxially asymmetric harmonic trapping potential of
the form
\begin{equation}
V_0\left(\vec{r}\right)=
{1\over 2}m\omega_0^2\left(\lambda_1^2 x^2+
\lambda_2^2 y^2+\lambda_3^2 z^2\right)\;,
\end{equation}
where $\lambda_i$ ($i=1,2,3$) are dimensionless constants proportional
to the spring constants of the potential along the three axes.
\par
It has been shown, using a hydrodynamical approach [14],
that in the strong coupling limit the GP equation has exact
solutions which satisfy a set of
ordinary differential equations given by
\begin{equation}
\frac{d^2}{d\tau^2}\sigma_i+\lambda_i^2\sigma_i=
\frac{\tilde{g}}{\sigma_i\sigma_1\sigma_2\sigma_3}\;, \;\;\;\;\;\;
i=1,2,3 \; .
\end{equation}
These nonlinearly coupled ordinary differential equations describe
the time evolution of the widths $\sigma_i$
of the condensate wavefunction $\phi$ along each direction
\footnote{The same equations can be obtained by minimizing
the Dirac action $S$ with a trial mean--field wavefunction
$\psi({\bf r}_1,...,{\bf r}_N,t) = \phi({\bf r}_1,t)...\phi({\bf r}_N,t)$,
where
$$
\phi\left({\bf r},t\right)=
\Big( {1\over \pi^3 a_0^6 {\sigma}_1^2(t) {\sigma}_2^2(t)
{\sigma}_3^2(t)} \Big)^{1/4}
\prod_{i=1,2,3}
\exp\left\{-\frac{x_i^2}{2a_0^2 {\sigma}_i^2(t)}
+i \beta_i(t) x_i^2 \right\}\;,
$$
with $(x_1,x_2,x_3)\equiv(x,y,z)$. $\sigma_i$ and $\beta_i$ are
the time-dependent variational parameters [15].}.
Here, we have defined the coupling constant
$\tilde{g}=(2/\pi)^{1/2}(N-1)a_s/a_0$, proportional to the
condensate number $N$ and the scattering length $a_s$.
Note that $\tau=\omega_0 t$ and $a_0=(\hbar/m\omega_0)^{1/2}$
is the harmonic oscillator length.
\par
The three differential equations correspond to the classical
equations of motion for a particle with coordinates $\sigma_i$ and
Hamiltonian
\begin{equation}
H = \frac{1}{2}\left(\dot{\sigma}_1^2+\dot{\sigma}_2^2+
\dot{\sigma}_3^2\right) +
\frac{1}{2}\left(\lambda_1^2\sigma_1^2+\lambda_2^2\sigma_2^2
+\lambda_3^2\sigma_3^2\right)
+\tilde{g}\frac{1}{\sigma_1\sigma_2\sigma_3} \; .
\end{equation}
For $\tilde{g}\neq 0$ this Hamiltonian is nonintegrable
and thus generic. As is well known, integrable
systems are rather exceptional in the sense that they are typically
isolated points in the functional space of the Hamiltonians and their
measure is zero in this space. If we choose at random a system
in nature, the probability that the system is nonintegrable is one [16].
\par
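A minimal sketch of how the transient dynamics generated by this Hamiltonian can be explored numerically; the trap anisotropies, coupling $\tilde g$, initial widths, and step size below are illustrative choices, not parameters from the text. Energy conservation provides the standard sanity check on the integrator:

```python
import math

def accel(s, lams, g):
    """sigma_i'' = -lam_i^2 sigma_i + g / (sigma_i * sigma_1 sigma_2 sigma_3)."""
    prod = s[0] * s[1] * s[2]
    return [-lams[i] ** 2 * s[i] + g / (s[i] * prod) for i in range(3)]

def energy(s, v, lams, g):
    """Conserved Hamiltonian: kinetic + harmonic + interaction terms."""
    kin = 0.5 * sum(vi * vi for vi in v)
    pot = 0.5 * sum((lams[i] * s[i]) ** 2 for i in range(3))
    return kin + pot + g / (s[0] * s[1] * s[2])

def rk4_step(s, v, lams, g, h):
    """One classical fourth-order Runge-Kutta step for (sigma, sigma-dot)."""
    a1 = accel(s, lams, g)
    s2 = [s[i] + 0.5 * h * v[i] for i in range(3)]
    v2 = [v[i] + 0.5 * h * a1[i] for i in range(3)]
    a2 = accel(s2, lams, g)
    s3 = [s[i] + 0.5 * h * v2[i] for i in range(3)]
    v3 = [v[i] + 0.5 * h * a2[i] for i in range(3)]
    a3 = accel(s3, lams, g)
    s4 = [s[i] + h * v3[i] for i in range(3)]
    v4 = [v[i] + h * a3[i] for i in range(3)]
    a4 = accel(s4, lams, g)
    s_new = [s[i] + h / 6.0 * (v[i] + 2 * v2[i] + 2 * v3[i] + v4[i])
             for i in range(3)]
    v_new = [v[i] + h / 6.0 * (a1[i] + 2 * a2[i] + 2 * a3[i] + a4[i])
             for i in range(3)]
    return s_new, v_new

lams, g = (1.0, 1.0, 2.0), 1.0           # illustrative anisotropy and coupling
s, v = [1.0, 1.0, 0.8], [0.0, 0.0, 0.0]  # at rest, away from equilibrium
e0 = energy(s, v, lams, g)
for _ in range(1000):                    # integrate up to tau = 1
    s, v = rk4_step(s, v, lams, g, 1.0e-3)
drift = abs(energy(s, v, lams, g) - e0) / abs(e0)
```

Chaos diagnostics, such as Lyapunov exponents extracted from the divergence of neighbouring trajectories, can be built on top of the same integrator.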
The small oscillations and the nonlinear coupling of these
modes have been studied by Dalfovo {\it et al.} for $\lambda_1=\lambda_2=1$ and
$\lambda_3=\sqrt{8}$ (axially symmetric trap) [14].
One of us (L.S.) has recently calculated the mode frequencies of the low
energy excitations of the condensate in the case of
the triaxially asymmetric potential [17]. These excitations correspond to
the small oscillations of the variables $\sigma_i$ around the equilibrium
point, corresponding to the minimum of the effective potential
energy of $H$. The eigenfrequencies $\omega$
for the collective motion, in units of $\omega_0$, are found as the
solutions of the equation
\begin{equation}
\omega^6
-3\left(\lambda_1^2+\lambda_2^2+\lambda_3^2\right)\omega^4
+8\left(\lambda_1^2\lambda_2^2+\lambda_1^2\lambda_3^2
+\lambda_2^2\lambda_3^2\right)\omega^2
-20\lambda_1^2\lambda_2^2\lambda_3^2
=0\;.
\end{equation}
\par
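The sextic factorizes into a cubic in $x=\omega^2$, which can be solved in closed form; the sketch below uses Cardano's trigonometric formula (all three roots are real and positive in the physical parameter range). For the isotropic trap $\lambda_1=\lambda_2=\lambda_3=1$ this reproduces the known hydrodynamic spectrum, $\omega=\sqrt 2$ (quadrupole, appearing twice among the three scaling modes) and $\omega=\sqrt 5$ (monopole):

```python
import math

def mode_frequencies(l1, l2, l3):
    """Positive roots omega of the sextic, via the cubic in x = omega^2."""
    a = -3.0 * (l1 * l1 + l2 * l2 + l3 * l3)
    b = 8.0 * (l1 * l1 * l2 * l2 + l1 * l1 * l3 * l3 + l2 * l2 * l3 * l3)
    c = -20.0 * (l1 * l2 * l3) ** 2
    # Depressed cubic t^3 + p t + q with x = t - a/3; trigonometric form
    # of Cardano's solution, valid when all three roots are real.
    p = b - a * a / 3.0
    q = 2.0 * a ** 3 / 27.0 - a * b / 3.0 + c
    m = 2.0 * math.sqrt(-p / 3.0)
    arg = max(-1.0, min(1.0, 3.0 * q / (p * m)))
    phi = math.acos(arg)
    xs = [m * math.cos((phi - 2.0 * math.pi * k) / 3.0) - a / 3.0
          for k in range(3)]
    return sorted(math.sqrt(x) for x in xs)

# Isotropic trap: expect omega = sqrt(2), sqrt(2), sqrt(5).
freqs = mode_frequencies(1.0, 1.0, 1.0)
```

For the axially symmetric trap $\lambda_3=\sqrt 8$ quoted above, the lowest of the three frequencies is again the quadrupole value $\sqrt 2$.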
Near the minima of the potential the trajectories in
phase space are quasi--periodic. By contrast,
far from the minima, the effect of the nonlinearity becomes important.
As the KAM theorem [18] predicts,
parts of phase space become filled with chaotic orbits,
while in other parts the toroidal
surfaces of the integrable system are deformed but not destroyed.
The study of this order--chaos transition for the Bose condensate
in the triaxially asymmetric potential is currently under
investigation by our group.
\section{Conclusions}
The main conclusion of this paper is that the use of the mean--field
approximation leads to nonlinear equations. As a consequence,
in some cases, the behaviour of the wavefunctions may be chaotic.
\par
As a specific example, the Bose--Einstein condensation
for weakly--interacting trapped bosons has been discussed
in great detail.
\section*{Acknowledgments}
This work has been partially supported by the Ministero
dell'Universit\`a e della Ricerca Scientifica e Tecnologica (MURST).
L.S. thanks INFM for support through a Research Advanced Project
on Bose-Einstein Condensation.
\section{Introduction}
Fixed-order QCD perturbation theory fails in
some asymptotic regimes where large logarithms multiply
the coupling constant. In those regimes resummation of the perturbation
series to all orders is necessary to describe many high-energy processes.
The Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation~\cite{bfkl} performs such a
resummation for virtual and real soft gluon emissions in dijet production at
large rapidity difference in
hadron-hadron collisions (see Figure~\ref{fig:feyn}(a))
and in forward jet
production in lepton-hadron collisions (Figure~\ref{fig:feyn}(b)).
In the latter case,
resummation leads to the characteristic
BFKL rise
in the forward jet cross section, $\hat\sigma \sim (x_j/x_{Bj})^\lambda$,
with $\lambda = 4C_A\ln 2\, \alpha_s/\pi \approx 0.5$. Similarly,
in dijet production at hadron colliders
BFKL resummation gives~\cite{muenav} a
subprocess cross section that increases with rapidity difference as
$\hat\sigma\sim\exp(\lambda \Delta)$,
where $\Delta$ is the rapidity difference of the two jets with comparable
transverse momenta $p_{T1}$ and $p_{T2}$.
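As a quick arithmetic check of the quoted exponent (the value of $\alpha_s$ below is a representative fixed-coupling choice, not one fixed by the text):

```python
import math

C_A = 3.0        # adjoint Casimir for SU(3)
alpha_s = 0.19   # representative fixed coupling; an assumed input

# lambda = 4 C_A ln2 alpha_s / pi, close to the quoted 0.5 for this alpha_s
lam = 4.0 * C_A * math.log(2.0) * alpha_s / math.pi
```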
Experimental studies of these processes have recently begun at the
Tevatron $p \bar p$ and HERA $ep$ colliders.
Tests so far have been inconclusive; the data tend to lie between
fixed-order QCD and analytic BFKL predictions. However the
applicability of these analytic BFKL solutions is limited by the
fact that they implicitly contain integrations over arbitrary numbers
of emitted gluons with arbitrarily large transverse momentum: there
are no kinematic constraints included. Furthermore,
the implicit sum
over emitted gluons leaves only leading-order kinematics, {\it i.e.}\/
only the momenta of the `external' particles are made explicit.
The absence of kinematic constraints and energy-momentum conservation cannot,
of course, be reproduced in experiments. While the effects of such constraints
are in principle sub-leading, it is desirable to include them in
predictions to be compared with experimental results. As we
will see, kinematic constraints can affect predictions substantially.
\begin{figure}
\psfig{figure=herapic.ps,height=3.5in,angle=270}
\vskip -.5cm
\caption{Schematic representation of (a) dijet production with large
rapidity separation $\Delta$ in hadron--hadron collisions, and (b) forward
jet production in deep inelastic scattering.}
\label{fig:feyn}
\end{figure}
\section{Monte Carlo Approach to BFKL Physics}
The solution to this problem of lack of kinematic constraints in analytic
BFKL predictions is to unfold the implicit sum over gluons to make the
gluon sum explicit, and to implement the result in a Monte Carlo
event generator~\cite{os,schmidt}. This is achieved as follows.
The BFKL equation contains separate integrals over real and virtual
emitted gluons. We can reorganize the equation by combining the
`unresolved' real emissions --- those with transverse momenta
below some minimum value (in practice chosen to be small
compared to the momentum threshold for measured
jets) --- with the virtual emissions. Schematically,
we have
\begin{equation}
\int_{virtual} + \int_{real} = \int_{virtual+real, unres.} +
\int_{real, res.}
\end{equation}
We perform
the integration over virtual and unresolved real
emissions analytically. The integral containing the
resolvable real emissions is left explicit.
We can then solve the
BFKL equation
by iteration, and we obtain a differential cross section
that contains an explicit sum over emitted gluons along with
the appropriate phase space factors. In addition, we obtain
an overall form factor due
to virtual and unresolved emissions. The subprocess cross section is
\begin{equation}
d\hat\sigma=d\hat\sigma_0\times\sum_{n\ge 0} f_{n}
\end{equation}
where $f_{n}$ is the iterated solution for $n$ real gluons emitted and
contains the overall form factor.
It is then straightforward to implement the result in a Monte Carlo
event generator. Emitted real (resolved) gluons appear explicitly,
so that conservation of momentum and energy, as well as
evaluation of parton distributions that multiply $d\hat\sigma$,
is based on exact kinematics
for each event. In addition, we include the running of the strong
coupling constant. See~\cite{os} for further details.
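The bookkeeping in Eqs. (1) and (2) can be illustrated with a deliberately oversimplified toy model (a fixed-coupling, Poisson-like cascade with invented rates, not the actual BFKL kernel): $n$ resolved emissions contribute a factor $(\bar\lambda_r\Delta)^n/n!$, virtual plus unresolved emissions supply an overall form factor, and the dependence on the resolution cut cancels in the sum over $n$:

```python
import math

def toy_sigma(lam_real, lam_virt, delta, n_max=60):
    """Toy sum sigma/sigma0 = sum_n f_n: n resolved emissions weight
    (lam_real*delta)^n / n!, and the virtual + unresolved form factor
    is exp(-lam_virt*delta). The rates are illustrative only."""
    form_factor = math.exp(-lam_virt * delta)
    return sum(form_factor * (lam_real * delta) ** n / math.factorial(n)
               for n in range(n_max))

# The sum exponentiates: sigma/sigma0 = exp((lam_real - lam_virt) * delta).
# Raising the resolution cut moves weight from the resolved rate into the
# form factor by equal amounts and leaves the total unchanged.
sigma_a = toy_sigma(0.7, 0.2, 3.0)
sigma_b = toy_sigma(0.9, 0.4, 3.0)
```

In the toy model, as in the full reorganization, only the difference of the two rates is physical; the split between resolved and unresolved pieces is an artefact of the chosen cutoff.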
\section{Dijet Production at Hadron Colliders}
At hadron colliders, the BFKL increase in the dijet subprocess cross section
with rapidity difference is unfortunately washed out by the falling
parton distribution functions (pdfs). As a result, the BFKL prediction for
the total cross section is simply a less steep falloff than obtained in
fixed-order QCD, and tests of this prediction are sensitive to pdf
uncertainties. A more robust prediction is obtained by noting that
the emitted gluons (cf. Figure~\ref{fig:feyn}(a)) give rise to
a decorrelation in azimuth between the two leading jets~\cite{many,os}. This
decorrelation becomes stronger as the rapidity difference $\Delta$ increases
and more gluons are emitted. In lowest order in QCD, in contrast, the jets
are back-to-back in azimuth and the (subprocess) cross section is
constant, independent
of $\Delta$.
This azimuthal decorrelation is illustrated in Figure~\ref{fig:decor}
for dijet production at the Tevatron $p\bar p$ collider~\cite{os}, with
center of mass energy 1.8 TeV and jet transverse momentum $p_T>20\ {\rm GeV}$.
The azimuthal angle difference $\Delta\phi$ is defined such
that $\cos\Delta\phi=1$ for back-to-back jets.
The solid line shows the analytic BFKL prediction. The BFKL Monte Carlo
prediction is shown as crosses. We see that the kinematic constraints
result in a less strong decorrelation due to suppression of
emitted gluons, and we obtain improved agreement with
preliminary measurements by the D\O\ collaboration~\cite{dzeropl},
shown as diamonds in the figure.
The azimuthal decorrelation can also be studied at the LHC $pp$
collider~\cite{oslhc}, which has higher rapidity reach than the Tevatron.
Figure~\ref{fig:decorlhc} compares the decorrelation at the Tevatron
for $p_T>20\ {\rm GeV}$
(dotted curve; same as crosses in Fig.~\ref{fig:decor}) to that at the
LHC for $p_T>20\ {\rm GeV}$ (solid curve) and $p_T>50\ {\rm GeV}$ (dashed curve).
We see that at the LHC for $p_T>20\ {\rm GeV}$ the decorrelation is stronger
and reaches to larger rapidities than at the Tevatron. The LHC's
higher center of mass energy ($\sqrt{s}=14\
{\rm TeV}$) relative to $p_T$ threshold allows for more emitted gluons,
and the characteristic BFKL effects are more pronounced. For the perhaps
more realistic LHC $p_T$ threshold of $50\ {\rm GeV}$, the
kinematic suppression is more pronounced, but we still see a
strong decorrelation.
In all three curves we see the suppression of the decorrelation by
the kinematic
constraints as $\Delta$ approaches the kinematic limit, where the suppression
of emitted gluons is so strong that the curve turns over and the correlation
begins to return.
In addition to studying the azimuthal decorrelation, one can
look for the BFKL rise in dijet cross section with
rapidity difference by considering ratios of cross sections
at different center of mass energies at fixed $\Delta$.
The idea is to cancel the pdf dependence, leaving the pure
BFKL effect. This turns out to be rather tricky~\cite{osratio},
because the desired cancellations occur only at lowest
order, and the kinematic constraints strongly affect the predicted
behavior, not only quantitatively but sometimes
qualitatively as well~\cite{osratio,oslhc}.
\begin{figure*}[t]
\psfig{figure=decor.ps,width=16.0cm}
\vskip -.75cm
\caption[]{The azimuthal angle decorrelation in dijet production at the
Tevatron
as a function of dijet rapidity difference $\Delta$, for
jet transverse momentum $p_T>20\ {\rm GeV}$.
The analytic BFKL solution is shown as a solid curve
and a preliminary D\O\ measurement~\cite{dzeropl} is shown
as diamonds. Error bars represent statistical and
uncorrelated systematic errors; correlated jet energy scale systematics
are shown as an error band. \label{fig:decor}}
\end{figure*}
\begin{figure*}[t]
\psfig{figure=decorlhc.ps,width=16.0cm}
\vskip -.5cm
\caption[]{
The azimuthal angle decorrelation in dijet production at the Tevatron
($\sqrt{s}=1.8\; {\rm TeV}$) and LHC ($\sqrt{s}=14\; {\rm TeV}$)
as a function of dijet rapidity difference $\Delta$.
Dotted curve: Tevatron, $p_T>20\; {\rm GeV}$;
solid curve: LHC, $p_T>20\; {\rm GeV}$; dashed curve: LHC, $p_T>50\; {\rm GeV}$.
\label{fig:decorlhc}}
\end{figure*}
\section{Forward Jet Production at Lepton--Hadron Colliders}
In deep inelastic scattering at lepton-hadron colliders, the production
of forward jets~\cite{FJ} is subject to the effects of
multiple soft gluon emission just as in dijet production at hadron colliders.
Now the large rapidity separation is between the current and forward
jets; see Fig.~\ref{fig:feyn}(b).
The BFKL equation resums such emissions, and it is
relatively straightforward
to adapt the dijet formalism
to calculate the cross section for the production of a forward jet with a given
$k_T$ and longitudinal momentum fraction $x_j \gg x_{\rm Bj}$. In fact there
is a direct correspondence between the variables: $p_{T2} \leftrightarrow
k_T$ and $\Delta \leftrightarrow \ln(x_j/x_{\rm Bj})$. In the DIS case the variable
$p_{T1}$ corresponds to the transverse momentum of the $q \bar q$ pair
in the upper `quark box' part of the diagram. In practice this variable
is integrated with the off--shell $\gamma^* g^* \to q \bar q$ amplitude
such that $p_{T1}^2 \sim Q^2$. As a result, it is appropriate to consider
values of $k_T^2$ of the same order, and to consider the (formal)
kinematic limit $x_j/x_{\rm Bj} \to \infty$, $Q^2$ fixed. In this limit we
obtain the `naive BFKL' prediction
$\hat \sigma_{\rm jet} \sim (x_j/x_{\rm Bj})^\lambda$,
the analog of $\hat\sigma_{jj} \sim \exp(\lambda\Delta)$.
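Indeed, with the identification $\Delta \leftrightarrow \ln(x_j/x_{\rm Bj})$ the two growth laws are literally the same expression:

```latex
\hat\sigma_{\rm jet} \sim e^{\lambda \Delta}
= e^{\lambda \ln (x_j/x_{\rm Bj})}
= \left( \frac{x_j}{x_{\rm Bj}} \right)^{\lambda} ~.
```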
\begin{figure*}
\psfig{figure=djherax.ps,width=16.0cm}
\vskip -1.25cm
\caption{Differential structure function for forward jet production in
$ep$ collisions at HERA. The curves are described in the text.
\label{fig:djhera}}
\end{figure*}
Figure~\ref{fig:djhera} shows the differential structure function
$\partial^2 F_2/\partial x_j\partial k_T^2$ as a function of $x_{\rm Bj}$ at HERA,
with
\begin{equation}
x_j = 0.1, \; \; \; Q^2 = 50\; {\rm GeV}^2, \; \; \; Q^2/2 < k_T^2 < 4 Q^2.
\end{equation}
The lower dashed curve is the QCD leading--order prediction from the
process $\gamma^* {\cal G} \to q \bar{q} {\cal G}$, with ${\cal G} = g,q$, with no
overall energy--momentum constraints. This is the analog of the
$\hat\sigma_{jj} \to\ $constant prediction for dijet production. Note that
here the parton distribution function at the lower end of the ladder
is evaluated at $x = x_j$, independent of $x_{\rm Bj}$. In practice, when
$x_{\rm Bj}$ is not small we have $x > x_j$ and the cross section is suppressed,
as indicated by the lower solid curve in Fig.~\ref{fig:djhera}. The upper
dashed curve is the asymptotic BFKL prediction with the characteristic
$(x_j/x_{\rm Bj})^\lambda$ behavior. Finally the upper solid line is the
prediction of the full BFKL Monte Carlo, including kinematic constraints
and pdf dependence. We see a significant suppression of the
cross section. We emphasise that Fig.~\ref{fig:djhera} corresponds to
`illustrative' cuts and should not be directly compared to the experimental
measurements. Nevertheless, the BFKL--MC predictions do appear to
follow the general trend of the H1 and ZEUS measurements \cite{HERA}.
A more complete study, including realistic experimental cuts and an
assessment of the uncertainty in the theoretical predictions, is under way
and will be reported elsewhere \cite{os3}.
\section{Conclusions}
In summary, we have developed a BFKL Monte Carlo event generator that
allows us to include the subleading effects such as kinematic constraints
and running of $\alpha_s$. We have applied this Monte Carlo to
dijet production at large rapidity separation at the Tevatron and LHC, and
to forward jet production at HERA; the latter work is currently
being completed. We found that kinematic constraints, though nominally
subleading, can be very important. In particular they lead to suppression
of gluon emission, which in turn suppresses some of the behavior that is
considered to be characteristic of BFKL physics. It is clear therefore
that reliable BFKL tests can only be performed using predictions
that incorporate kinematic constraints.
\section*{Acknowledgements}
Work supported in part by the U.S. Department of Energy,
under grant DE-FG02-91ER40685 and by the U.S. National Science Foundation,
under grants PHY-9600155 and PHY-9400059.
\section*{References}
\section{Introduction}
The radiative capture $n+p\rightarrow d+\gamma$ is a
classic nuclear physics process where meson exchange currents play a
role.
For protons at rest and incident neutrons, with speed
$|{\bf v}| = 2200\ {\rm m/s}$, the cross section for this process
has the experimental value,
$\sigma^{\rm expt} = 334.2\pm 0.5\ {\rm mb}$\cite{CWCa}.
Naively, one expects that an effective range calculation of this
cross section\cite{BLa,Noyes}
would be very close to this value.
However, such a calculation gives a value which is approximately $10\%$
smaller than $\sigma^{\rm expt}$.
As first suggested by Brown and Riska\cite{BrRia}, this
discrepancy can at least partly
be accounted for by the inclusion of meson exchange currents.
More recent work by Park, Min and Rho\cite{PMRa} using effective
field theory with Weinberg's power
counting\cite{Weinberg1}
for the nucleon-nucleon interaction, a resonance saturation
hypothesis for the coefficients of some operators
and a momentum cutoff finds the value
$\sigma=334\pm 2\ {\rm mb}$. This prediction is
relatively insensitive to the value of the cut-off and is compatible
with $\sigma^{\rm expt}$.
Recently, a consistent power counting for the nucleon-nucleon interaction has
been established\cite{KSW}. At leading order in Weinberg's scheme,
pion exchange is included in the $NN$ potential and it is iterated to
all orders to predict the $NN$ scattering amplitude \cite{Bira}.
However, the work of
\cite{KSW,LM,DR} shows that iterating the pions without including the
effects of operators with explicit factors of the quark masses or
derivatives does not give a systematic
improvement in the prediction for the $NN$ scattering amplitude.
Using the power counting of\cite{KSW},
the bubble chain formed by multiple insertions of the momentum independent
four-nucleon operator gives the leading scattering amplitude for
systems with large scattering lengths.
Higher derivative operators, operators involving insertions of the light quark
mass matrix and pion exchange are of subleading order and are treated in
perturbation theory.
The expansion parameter used in \cite{KSW} is $Q\sim |{\bf p}|, m_\pi$, and
one expands in $Q/\Lambda$ while keeping all orders in $aQ$
(see also \cite{Kolck}). Here $\Lambda$
is a nonperturbative hadronic scale and $a$ is the scattering length.
At next-to-leading order (NLO) in the $Q$ expansion simple analytic
expressions can be derived for physical quantities. Various observables
in the two-nucleon sector have been determined at NLO with this
power counting, such as the electromagnetic form factors and moments of the
deuteron\cite{KSW2}, the polarizabilities of the deuteron\cite{CGSSpol},
Compton scattering cross sections\cite{CGSScompt,Ccompt} and parity violating
observables\cite{SSpv,KSSWpv}.
In this work we compute the cross section for the radiative capture of
extremely low
momentum neutrons
$n+p\rightarrow d+\gamma$ at NLO in the effective field
theory $Q$ expansion.
Capture from the ${}^3\kern-.14em S_1$ channel is suppressed in the expansion
compared to capture from the ${}^1\kern-.14em S_0$ channel.
At zero-recoil, the amplitude for capture from the ${}^3\kern-.14em S_1$ channel vanishes
since it is simply the overlap of two orthogonal
eigenstates of the strong Hamiltonian.
Hence, at leading order, only the isovector $^1S_0$ capture occurs and
it arises from the isovector
magnetic moment interactions of the nucleons.
Since the amplitudes for capture from the ${}^1\kern-.14em S_0$ and ${}^3\kern-.14em S_1$ channels do not
interfere, at NLO the cross section comes only from the
amplitude for capture from the ${}^1\kern-.14em S_0$ channel.
At this order there are contributions from a single insertion of four-nucleon
operators with two derivatives,
from a single insertion of four-nucleon operators with an insertion of the
quark mass matrix, and from the exchange of a
potential pion. The potential
pion exchange occurs in graphs with a
nucleon magnetic moment interaction and also in graphs where the potential
pion is minimally
coupled to the electromagnetic field. The latter contributions
are historically called meson exchange currents.
In addition to these contributions there is also a contribution from a
four-nucleon-one-photon contact interaction. The coefficient of this operator
has not been previously determined and a major purpose of this paper is
to fix its value.
\section{Effective Field Theory for Nucleon-Nucleon Interactions}
The terms in the effective Lagrange density describing the interactions
between nucleons, pions, and photons can be classified by the number of
nucleon fields that appear. It is convenient to write
\begin{equation}
{\cal L} = {\cal L}_0 + {\cal L}_1 + {\cal L}_2 + \ldots,
\end{equation}
where ${\cal L}_n$ contains $n$-body nucleon operators.
${\cal L}_0$ is constructed from the photon field $A^\mu = (A^0, {\bf A})$
and the pion fields which are incorporated into an $SU(2)$ matrix,
\begin{equation}
\Sigma = \exp\left({\frac{\displaystyle
2i\Pi}{\displaystyle f}}\right)\ \ \ ,\qquad \Pi = \left(
\begin{array}{cc}
\pi^0/\sqrt{2} & \pi^+ \\
\pi^- & -\pi^0/\sqrt{2}
\end{array}
\right) \ \ \ \ ,
\end{equation}
where $f=131~{\rm MeV}$ is the pion decay constant. $\Sigma$ transforms under
the global $SU(2)_L \times SU(2)_R$ chiral and $U(1)_{em}$ gauge symmetries
as
\begin{equation}
\Sigma \rightarrow L\Sigma R^\dagger, \qquad \Sigma \rightarrow e^{i\alpha
Q_{em}} \Sigma e^{-i\alpha Q_{em}} \ \ \ ,
\end{equation}
where $L\in SU(2)_L$, $R\in SU(2)_R$ and $Q_{em}$ is the charge matrix,
\begin{equation}
Q_{em} = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0
\end{array}
\right) \ \ \ .
\end{equation}
The part of the Lagrange density without nucleon fields is
\begin{eqnarray} {\cal L}_0 &&= {1\over
2} ({\bf E}^2 - {\bf B}^2) \ +\ {f^2\over 8} {\rm Tr\,} D_\mu \Sigma D^\mu
\Sigma^\dagger \ +\ {f^2\over 4} \lambda {\rm Tr\,} m_q (\Sigma + \Sigma^\dagger)
\ +\ \ldots \ \ \ \ . \end{eqnarray} The ellipsis denotes operators with
more covariant derivatives $D_\mu$, insertions of the quark mass matrix $m_q
= {\rm diag} (m_u, m_d)$, or factors of the electric and magnetic fields.
The parameter $\lambda$ has dimensions of mass and $m_\pi^2 = \lambda (m_u +
m_d)=(137~ {\rm MeV})^2$. Acting on $\Sigma$, the covariant derivative is
\begin{equation}
D_\mu \Sigma = \partial_\mu \Sigma + ie [Q_{em},\Sigma] A_\mu \ \ \ .
\end{equation}
When describing pion-nucleon interactions, it is convenient to introduce the
field $\xi = \exp\left(i \Pi/f\right) = \sqrt{\Sigma}$. Under $SU(2)_L
\times SU(2)_R$ it transforms as
\begin{equation}
\xi \rightarrow L\xi U^\dagger = U\xi R^\dagger,
\end{equation}
where $U$ is a complicated nonlinear function of $L,R$, and the pion fields.
Since $U$ depends on the pion fields it has spacetime dependence. The
nucleon fields are introduced in a doublet of spin $1/2$ fields
\begin{equation}
N = \left({ {p \atop n} }\right)
\end{equation}
that transforms under the chiral $SU(2)_L \times SU(2)_R$ symmetry as $N
\rightarrow UN$ and under the $U(1)_{em}$ gauge transformation as $N
\rightarrow e^{i\alpha Q_{em}} N$. Acting on nucleon fields, the covariant
derivative is
\begin{equation}
D_\mu N = (\partial_\mu + V_\mu + ie Q_{em}A_\mu )N \, \, ,
\end{equation}
where
\begin{eqnarray}
V_\mu &&= {1\over 2} (\xi D_\mu \xi^\dagger + \xi^\dagger D_\mu \xi)\nonumber
\ =\
{1\over 2} (\xi \partial_\mu \xi^\dagger + \xi^\dagger \partial_\mu\xi
+ ie A_\mu (\xi^\dagger Q_{em} \xi - \xi Q_{em} \xi^\dagger))
\ \ \ .
\end{eqnarray}
The covariant derivative of $N$ transforms in the same way as $N$ under $%
SU(2)_L \times SU(2)_R$ transformations
({\it i.e.} $D_\mu N \rightarrow U D_\mu N$)
and under $U(1)$ gauge transformations
({\it i.e.} $D_\mu N \rightarrow e^{i\alpha Q_{em}} D_\mu N$).
The one-body terms in the Lagrange density are
\begin{eqnarray}
{\cal L}_1 & = & N^\dagger \left(i D_0 + {{\bf D}^2\over 2M}\right) N
+ {ig_A\over 2} N^\dagger {\bbox \sigma} \cdot
(\xi {\bf D} \xi^\dagger - \xi^{\dagger} {\bf D} \xi)
N\nonumber \\
& + & {e\over 2M} N^\dagger
\left( \kappa_0 + {\kappa_1\over 2} [\xi^\dagger \tau_3\xi
+ \xi \tau_3 \xi^\dagger]\right) {\bbox \sigma} \cdot {\bf B} N
+ \ldots,
\label{lagone}
\end{eqnarray}
where $M=939~{\rm MeV}$ is the nucleon mass and
$\kappa _0={\frac 12}(\kappa _p+\kappa _n)=0.4399\ $ and
$\kappa _1={\frac 12}(\kappa _p-\kappa _n)= 2.35294\ $
are the isoscalar and isovector nucleon magnetic moments
in nuclear magnetons. The nucleon matrix element of the axial
current is $g_A=1.25$.
The two-body Lagrange density needed for NLO calculations is
\begin{eqnarray}
{\cal L}_2 &=&
-\left(C_0^{({}^3\kern-.14em S_1)}+ D_2^{({}^3\kern-.14em S_1)} \lambda{\rm Tr\,} m_q\right)
(N^T P_i N)^\dagger(N^T P_i N)
\nonumber\\
& + & {C_2^{({}^3\kern-.14em S_1)}\over 8}
\left[(N^T P_i N)^\dagger
\left(N^T \left[ P_i \overrightarrow {\bf D}^2 +\overleftarrow {\bf D}^2 P_i
- 2 \overleftarrow {\bf D} P_i \overrightarrow {\bf D} \right] N\right)
+ h.c.\right]
\nonumber\\
& & -\left(C_0^{({}^1\kern-.14em S_0)}+ D_2^{({}^1\kern-.14em S_0)} \lambda{\rm Tr\,} m_q\right)
(N^T \overline{P}_i N)^\dagger(N^T \overline{P}_i N)
\nonumber\\
& + & {C_2^{({}^1\kern-.14em S_0)}\over 8}
\left[(N^T \overline{P}_i N)^\dagger
\left(N^T \left[ \overline{P}_i \overrightarrow {\bf D}^2
+\overleftarrow {\bf D}^2 \overline{P}_i
- 2 \overleftarrow {\bf D} \overline{P}_i
\overrightarrow {\bf D} \right] N\right)
+ h.c.\right]
\nonumber\\
&& + \left[e L_1 \ (N^T P_i N)^\dagger (N^T \overline{P}_3 N) B_i
\ -\
e L_2\ i\epsilon_{ijk} (N^T P_i N)^\dagger (N^T P_j N) B_k
+h.c. \right] ,
\label{lagtwo}
\end{eqnarray}
where $P_i$ and $\overline{P}_i$ are spin-isospin projectors for
the spin-triplet channel and the spin-singlet channel respectively,
\begin{eqnarray}
& & P_i \equiv {1\over \sqrt{8}} \sigma_2\sigma_i\tau_2
\ \ \ ,
\qquad {\rm Tr\,} P_i^\dagger P_j ={1\over 2} \delta_{ij}
\nonumber\\
& & \overline{P}_i \equiv {1\over \sqrt{8}} \sigma_2\tau_2\tau_i
\ \ \ ,
\qquad {\rm Tr\,} \overline{P}_i^\dagger \overline{P}_j ={1\over 2} \delta_{ij}
\ \ \ .
\end{eqnarray}
The $\sigma $ matrices act on the nucleon spin indices, while the $\tau $
matrices act on isospin indices. The local operators responsible for $S-D$
mixing do not contribute at NLO.
Terms in ${\cal L}_2$ involving the pion field have been neglected in
eq.~(\ref{lagtwo}).
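The projector normalization above can be checked directly with explicit Pauli matrices. The following sketch (a numerical check using numpy, with the spin and isospin spaces combined as a Kronecker product) verifies ${\rm Tr\,}P_i^\dagger P_j=\delta_{ij}/2$ for both the spin-triplet and spin-singlet sets.

```python
import numpy as np

# Pauli matrices, used both for spin (sigma_i) and isospin (tau_i)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
s2 = t2 = s[1]

# Spin-triplet projectors P_i = (1/sqrt(8)) sigma2 sigma_i (x) tau2
P = [np.kron(s2 @ s[i], t2) / np.sqrt(8) for i in range(3)]
# Spin-singlet projectors Pbar_i = (1/sqrt(8)) sigma2 (x) tau2 tau_i
Pbar = [np.kron(s2, t2 @ s[i]) / np.sqrt(8) for i in range(3)]

# Check Tr[P_i^dagger P_j] = delta_ij / 2 for both sets
for i in range(3):
    for j in range(3):
        assert np.isclose(np.trace(P[i].conj().T @ P[j]), 0.5 * (i == j))
        assert np.isclose(np.trace(Pbar[i].conj().T @ Pbar[j]), 0.5 * (i == j))
```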
The values of the coefficients of the four-nucleon operators in ${\cal L}_2$
depend on the regularization and subtraction scheme that is adopted.
The power counting of \cite{KSW} is manifest in the power divergence subtraction
scheme, PDS, and we shall use it in this paper. (Momentum subtraction schemes
can also have this power counting \cite{MehStew,Geg}).
In PDS one works in $D$ dimensions and the poles at both $D=4$ and $D=3$
in the loop integrations associated with Feynman diagrams are subtracted.
The $D=4$ poles are from logarithmic ultraviolet divergences and the $D=3$
poles are from linear ultraviolet divergences.
There is some freedom in
how the Lagrangian is continued to $D$-dimensions. We choose to keep the
Pauli spin matrices three dimensional and continue the derivatives to
$D$-dimensions. This is similar to the scheme proposed by 't~Hooft and Veltman
\cite{TV} for chiral gauge theories and ends up being convenient since the
$n+p \rightarrow d+\gamma$ amplitude is proportional to the
antisymmetric Levi-Civita tensor, $\epsilon_{ijk}$.
At NLO two basic divergent integrals are encountered. The first is
\begin{eqnarray}
\openup3\jot
I_0&\equiv&\left({\mu \over 2}\right)^{4-D}
\int {{{\rm d}}^{(D-1)}{\bf q}\over (2\pi)^{(D-1)}}\,
\({1\over {\bf q}^2 + a^2}\)
\nonumber\\
&=& (\sqrt {a^2}~)^{D-3} \Gamma\({3-D\over 2}\)
{(\mu/2)^{4-D}\over (4\pi)^{(D-1)/2}}\ .
\label{int1}
\end{eqnarray}
$I_0$ has no pole at $D=4$ but does have a pole at $D=3$. Its value in the PDS
scheme is,
\begin{equation}
I_0^{PDS}=\left({1\over 4\pi}\right) (\mu - \sqrt{a^2}).
\end{equation}
The second is the two loop integral,
\begin{equation}
I_1 \equiv \left({\mu\over 2}\right)^{2 (4-D)}
\int\ {d^{D-1}{\bf q}\over (2\pi)^{D-1}}{d^{D-1}{\bf l}\over (2\pi)^{D-1}}
\ {1\over {\bf q}^2+a^2}\ {1\over {\bf l}^2+b^2}\ {1\over ({\bf q}-{\bf l})^2+c^2}.
\end{equation}
$I_1$ has no pole at $D=3$ but does have the pole, $-1/32\pi^2(D-4)$, at
$D=4$. Therefore $I_1$ has the same value in minimal subtraction (MS) as
in PDS \cite{Int},
\begin{equation}
I_1^{PDS}=I_1^{MS}=-{1\over 16\pi^2}
\left(\log\left({\sqrt{a^2}+\sqrt{b^2}+\sqrt{c^2}\over\mu}\right)
\ +\ \delta\right),
\label{int2}
\end{equation}
where
\begin{equation}
\delta={1\over 2}\left(\gamma_E-1-{\rm log}\left({\pi \over 4}\right)\right)
\label{scheme}
\end{equation}
and $\gamma_E$ is Euler's constant. Note that because of its logarithmic
divergence, $ [3I_1/(D-1)]^{PDS}=I_1^{PDS}+1/96\pi^2$.
There is considerable freedom in
the precise way the subtractions are handled. For example, if the
poles in $D=4$ are subtracted with ${\overline {MS}}$ then $\delta =-1/2$.
Finally, we stress that one cannot blindly evaluate the integrals in
$D$-dimensions and subtract the poles to get the required PDS value. For
example, if $a$ and $b$ are set to zero then $I_1$ has a double pole at
$D=3$. However, this is associated with a logarithmic
infrared divergence in three
dimensions, not an ultraviolet divergence, and so it is not subtracted.
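The structure of $I_0$ can be verified symbolically. The sketch below (using sympy, with $\sqrt{a^2}\to a$ for $a>0$) checks that eq.~(\ref{int1}) is finite at $D=4$, where it equals $-a/4\pi$, and that its pole at $D=3$ has residue $-\mu/4\pi$ in $(D-3)$; subtracting that pole and continuing to $D=4$ adds $+\mu/4\pi$, reproducing $I_0^{PDS}$.

```python
import sympy as sp

a, mu, D = sp.symbols('a mu D', positive=True)
eps = sp.symbols('epsilon')

# I_0 from eq. (int1), with sqrt(a^2) written as a for a > 0
I0 = a**(D - 3) * sp.gamma((3 - D) / 2) \
     * (mu / 2)**(4 - D) / (4 * sp.pi)**((D - 1) / 2)

# Finite at D = 4: I0 -> -a/(4 pi)
assert sp.simplify(I0.subs(D, 4) + a / (4 * sp.pi)) == 0

# Simple pole at D = 3 with residue -mu/(4 pi); the PDS subtraction,
# continued to D = 4, adds +mu/(4 pi), giving I0^PDS = (mu - a)/(4 pi)
residue = sp.limit(I0.subs(D, 3 + eps) * eps, eps, 0)
assert sp.simplify(residue + mu / (4 * sp.pi)) == 0
```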
Most of the coefficients in ${\cal L}_2$ have been determined.
At NLO the deuteron magnetic moment \cite{KSW2} is
\begin{eqnarray}
\mu_d & = & {e\over 2 M} \left(\kappa_p\ +\ \kappa_n\ +\ L_2\ { 2M \gamma
(\mu-\gamma)^2\over\pi}\right)
\ \ \ ,
\end{eqnarray}
where $\gamma=\sqrt{MB}$ with $B=2.225\ {\rm MeV}$ the binding energy of the
deuteron and $\mu$ is the subtraction point. The coefficient $L_2$ depends on
the subtraction point in such a way that the physical quantity $\mu_d$ is $\mu$
independent.
The experimental value of the magnetic moment of the deuteron is
$\mu_d=0.85741$ nuclear magnetons and comparing this with the
prediction above implies that the coefficient $L_2$
(renormalized at $\mu=m_\pi$) is,
\begin{eqnarray}
L_2 (m_\pi) & = & -0.149\ {\rm fm^4}
\ \ \ \ .
\end{eqnarray}
Note that,
\begin{equation}
N^TP_i\sigma_j N= i \epsilon_{ijk} N^TP_kN,
\end{equation}
and so the operator with coefficient $L_2$ in eq.(\ref{lagtwo})
is the same as in \cite{KSW2}.
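As an arithmetic cross-check, inverting the expression for $\mu_d$ above recovers the quoted $L_2(m_\pi)$. The sketch below assumes $M=939~{\rm MeV}$, $B=2.225~{\rm MeV}$ and $\kappa_0=0.4399$ from the text, and $\hbar c = 197.327~{\rm MeV\,fm}$ for the unit conversion.

```python
import math

hbarc = 197.327        # MeV fm, for unit conversion
M = 939.0              # nucleon mass, MeV
B = 2.225              # deuteron binding energy, MeV
mpi = 137.0            # subtraction point mu = m_pi, MeV
kappa0 = 0.4399        # isoscalar nucleon magnetic moment
mu_d = 0.85741         # deuteron magnetic moment, nuclear magnetons

gamma = math.sqrt(M * B)

# Invert mu_d = kappa_p + kappa_n + L2 * 2*M*gamma*(mu - gamma)^2 / pi
# (in units of e/2M), with kappa_p + kappa_n = 2*kappa0
L2 = (mu_d - 2 * kappa0) * math.pi / (2 * M * gamma * (mpi - gamma)**2)  # MeV^-4
L2_fm4 = L2 * hbarc**4
print(round(L2_fm4, 3))   # -0.149
```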
The coefficients of the four-nucleon operators in eq. (\ref{lagtwo})
that don't involve the electromagnetic field have been fixed from
comparison with experimental
data on $NN$ scattering. We will review this in the following section.
The only unknown coefficient that contributes at NLO is $L_1$ and it
will be determined in this work.
\section{S-wave $NN$ Scattering}
The $^1S_0$ $NN$ scattering amplitude, at center of mass momentum $p$,
has the expansion
${\cal A}^{(^1S_0)}(p)=\sum_{n=-1}^\infty {\cal A}^{(^1S_0)}_n(p)$, where
${\cal A}^{(^1S_0)}_n(p)$ is of order $Q^{n}$ . At leading order only the
four nucleon operator with no derivatives need be included and
\begin{equation}
{\cal A}^{(^1S_0)}_{-1}(p)
= { -C_0^{(^1S_0)}\over 1 + C_0^{(^1S_0)} M
(\mu + ip)/4 \pi}
\ \ \ .
\label{firsto}
\end{equation}
It is convenient to break the next order contribution into several pieces,
${\cal A}^{(^1S_0)}_0={\cal A}_0^{(I)}+
{\cal A}_0^{(II)}+{\cal A}_0^{(III)}+{\cal A}_0^{(IV)}+{\cal A}_0^{(V)}$, and,
using PDS, ref. \cite{KSW} found,
\begin{eqnarray}
{\cal A}_0^{(I)} &=&
-C_2^{({}^1\kern-.14em S_0)} p^2
\left[ {{\cal A}^{(^1S_0)}_{-1}\over C_0^{({}^1\kern-.14em S_0)} } \right]^2
\ \ \ ,
\nonumber\\
{\cal A}_0^{(II)} &=& \left({g_A^2\over 2f^2}\right) \left(-1 + {m_\pi^2\over
4p^2} \ln \left( 1 + {4p^2\over m_\pi^2}\right)\right)
\ ,
\nonumber\\
{\cal A}_{0}^{(III)} &=& {g_A^2\over f^2}
\left( {m_\pi M{\cal A}^{(^1S_0)}_{-1}\over 4\pi}
\right) \Bigg( - {(\mu + ip)\over m_\pi}
+ {m_\pi\over 2p} \left[\tan^{-1} \left({2p\over m_\pi}\right) + {i\over 2} \ln
\left(1+ {4p^2\over m_\pi^2} \right)\right]\Bigg)
\ ,
\nonumber\\
{\cal A}_0^{(IV)} &=& {g_A^2\over 2f^2} \left({m_\pi M{\cal A}^{(^1S_0)}_{-1}\over
4\pi}\right)^2 \Bigg(-\left({\mu + ip\over m_\pi}\right)^2
+ i\tan^{-1} \left({2p\over m_\pi}\right) - {1\over 2} \ln
\left({m_\pi^2 + 4p^2\over\mu^2}\right) +{1 \over 6}- \delta\Bigg)
\ ,
\nonumber\\
{\cal A}_0^{(V)} &=& - D^{({}^1\kern-.14em S_0)}_2 m_\pi^2
\left[ {{\cal A}^{(^1S_0)}_{-1}\over C_0^{({}^1\kern-.14em S_0)} }\right]^2
\ .
\label{secondo}
\end{eqnarray}
Actually, the above expression for ${\cal A}_0^{(IV)}$ is
slightly different from that in \cite{KSW} because there
some terms that appear above were
absorbed into a redefinition of $D^{({}^1\kern-.14em S_0)}_2$.
The scattering length gets contributions from each order in the $Q$ expansion.
To use these results for the scattering amplitude over a region that
includes very low $p$ it is necessary that the leading
order amplitude give almost the correct scattering length.
This can be achieved at NLO by reordering the expansion in the
following way \cite{MehStew}. Write
\begin{equation}
C_0^{({}^1\kern-.14em S_0)}=\bar C_0^{({}^1\kern-.14em S_0)}+\Delta C_0^{({}^1\kern-.14em S_0)},
\end{equation}
treat $\bar C_0^{({}^1\kern-.14em S_0)}$ nonperturbatively and $\Delta C_0^{({}^1\kern-.14em S_0)}$
as a perturbation. Then in eqs. (\ref{firsto}) and (\ref{secondo})
the following changes occur,
\begin{equation}
C_0^{({}^1\kern-.14em S_0)} \rightarrow \bar C_0^{({}^1\kern-.14em S_0)}~~,~~D^{({}^1\kern-.14em S_0)}_2 m_\pi^2 \rightarrow
D^{({}^1\kern-.14em S_0)}_2 m_\pi^2+\Delta C_0^{({}^1\kern-.14em S_0)}.
\end{equation}
The coefficient $\bar C_0^{({}^1\kern-.14em S_0)}$ is no longer independent
of the light quark masses and is chosen so that the physical value of the
scattering length is close to the first term in its $Q$ expansion,
\begin{eqnarray}
{1 \over a^{({}^1\kern-.14em S_0)}}=&&\left(\mu+{4\pi \over M\bar C_0^{({}^1\kern-.14em S_0)}}\right)-
{4\pi(D_2^{({}^1\kern-.14em S_0)}m_{\pi}^2+\Delta C_0^{({}^1\kern-.14em S_0)}) \over M (\bar C_0^{({}^1\kern-.14em S_0)})^2}
\nonumber \\
&&-{g_A^2M m_{\pi}^2 \over 4 \pi f^2}
\left(\left(\mu+{4\pi \over M\bar C_0^{({}^1\kern-.14em S_0)}}\right)
{m_{\pi}-\mu \over m_{\pi}^2}+{1 \over 2} {\rm ln}\left({m_{\pi} \over \mu}\right)
+{\delta \over 2}-{1 \over 12}+{\mu^2 \over 2m_{\pi}^2}\right).
\end{eqnarray}
The subtraction point is arbitrary and the coefficients
$ \bar C_0^{({}^1\kern-.14em S_0)}$, $C_2^{({}^1\kern-.14em S_0)}$
and $D_2^{({}^1\kern-.14em S_0)}m_{\pi}^2+\Delta C_0^{({}^1\kern-.14em S_0)}$ depend on $\mu$ in
such a way that ${\cal A}_{-1}^{({}^1\kern-.14em S_0)}$ and ${\cal A}_0^{({}^1\kern-.14em S_0)}$
are independent of $\mu$. These coefficients
are determined from the measured $NN$ phase shift. We
will only need the first two of them and for these a
fit over the region $7~{\rm MeV}<p<100~{\rm MeV}$ finds, at $\mu=m_{\pi}$
\cite{MehStew},
\begin{eqnarray}
\bar C_0^{({}^1\kern-.14em S_0)}(m_\pi) =-3.529{\rm\ fm}^2\ ,\
C_2^{({}^1\kern-.14em S_0)}(m_\pi) = 3.04{\rm\ fm}^4
\ .
\label{eq:numfitc}
\end{eqnarray}
Discussions of the results of different fitting procedures can
be found in \cite{KSW,KSW2,MehStew,SteFurn,CohHan}.
In the $^3S_1$ channel, identical formulae hold
for the scattering amplitude and scattering length once the
replacement $^1S_0$ $\rightarrow$$^3S_1$ is made for the superscripts.
However, the fit to
the data is done a little differently. For processes involving the
deuteron it is convenient to constrain $\bar C_0^{({}^3\kern-.14em S_1)}$ so that
${\cal A}^{({}^3\kern-.14em S_1)}_{-1}$ gives the correct deuteron binding energy,
$B=2.2255~{\rm MeV}$. This implies that
\begin{equation}
\bar C_0^{({}^3\kern-.14em S_1)}(\mu) = -{4 \pi \over M}\left({1 \over \mu-\gamma}\right),
\end{equation}
where $\gamma=\sqrt{MB}$. In this channel a constrained
fit to the $NN$ phase shift yields
\begin{eqnarray}
\bar C_0^{({}^3\kern-.14em S_1)}(m_\pi) =-5.708{\rm\ fm}^2\ ,\
C_2^{({}^3\kern-.14em S_1)}(m_\pi) =10.8{\rm\ fm}^4\ .
\end{eqnarray}
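The quoted value of $\bar C_0^{({}^3\kern-.14em S_1)}(m_\pi)$ follows directly from the binding-energy constraint above; a quick numerical check (assuming $\hbar c = 197.327~{\rm MeV\,fm}$ for the unit conversion):

```python
import math

hbarc = 197.327              # MeV fm, for unit conversion
M, B, mpi = 939.0, 2.2255, 137.0
gamma = math.sqrt(M * B)     # ~45.7 MeV

# C0bar(mu) = -(4 pi / M)/(mu - gamma), evaluated at mu = m_pi
C0bar_fm2 = -(4 * math.pi / M) / (mpi - gamma) * hbarc**2
print(round(C0bar_fm2, 3))   # -5.708
```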
\section{Cross section for Radiative Capture}
The amplitude for
the radiative capture of extremely low momentum neutrons $n+p\rightarrow d+\gamma$
has contributions from both the ${}^1\kern-.14em S_0$ and ${}^3\kern-.14em S_1$ $NN$
channels. It can be written as
\begin{eqnarray}
\label{eq:matrix}
i{\cal A}(np\rightarrow d\gamma) & = &
e\ X\ N^T\tau_2\ \sigma_2 \ \left[ {\bbox \sigma}\cdot {\bf k}\
\ {\bbox\epsilon} (d)^* \cdot {\bbox \epsilon} (\gamma)^*
\ -\ {\bbox \sigma} \cdot {\bbox \epsilon} (\gamma)^*\
\ {\bf k}\cdot {\bbox \epsilon} (d)^*
\right] N
\\ \nonumber
& + &
i e\ Y\ \epsilon^{ijk}\ \epsilon (d)^{i*}\
k^j\ {\bbox\epsilon} (\gamma)^{k*}
\ (N^T\tau_2 \tau_3 \sigma_2 N)
\ \ \ \ ,
\end{eqnarray}
where $e=|e|$ is the magnitude of the electron charge,
$N$ is
the doublet of nucleon spinors, ${\bbox \epsilon}(\gamma)$ is
the polarization vector for the photon, ${\bbox \epsilon}(d)$ is the polarization
vector for the deuteron and ${\bf k}$ is the outgoing photon momentum.
The term with coefficient $X$ corresponds to capture from the ${}^3\kern-.14em S_1$ channel
while the term with coefficient $Y$ corresponds to capture from the ${}^1\kern-.14em S_0$
channel.
For convenience, we define dimensionless variables $\tilde X$ and $\tilde Y$,
by
\begin{eqnarray}
X & = & i {2\over M} \sqrt{\pi\over\gamma^3}\ \tilde X
\ \ ,\ \
Y = i {2\over M} \sqrt{\pi\over\gamma^3}\ \tilde Y
\ \ \ \ .
\end{eqnarray}
Both $\tilde X$ and $\tilde Y$ have the $Q$ expansions,
$\tilde X = \tilde X_0+ \tilde X_1+...$, and
$\tilde Y=\tilde Y_0+ \tilde Y_1+...$, where a subscript $n$ denotes a
contribution of order $Q^n$.
The capture cross section for neutrons with speed $|{\bf v}|$
arising from eq.~(\ref{eq:matrix}) is
\begin{eqnarray}\label{eq:sig}
\sigma & = & {8\pi \alpha \gamma^3 \over M^5 |{\bf v}|}
\left[ 2 |\tilde X|^2\ +\ |\tilde Y|^2\right]
\ \ \ ,
\end{eqnarray}
where $\alpha$ is the fine-structure constant.
\begin{figure}[t]
\centerline{{\epsfxsize=3.0in \epsfbox{npstrongV2.eps}} }
\noindent
\caption{\it Graphs contributing to the amplitude
for $n+p\rightarrow d+\gamma$ at leading order in the
effective field theory expansion.
The solid lines denote nucleons
and the wavy lines denote photons.
The light solid circles correspond to the nucleon magnetic
moment coupling to
the electromagnetic field.
The crossed circle represents an insertion of the deuteron
interpolating
field which is taken to have ${}^3\kern-.14em S_1$ quantum numbers.
}
\label{fig:strong}
\vskip .2in
\end{figure}
At leading order,
$\tilde X$ and $\tilde Y$ are calculated from the sum of Feynman diagrams
in Fig.~(\ref{fig:strong})
and from wavefunction renormalization associated with the deuteron
interpolating
field\cite{KSW2}, giving
\begin{equation}
\tilde X_0\ =\
\kappa_0 \ \left( 1\ +\ {\gamma M\over 4\pi}{\cal A}_{-1}^{({}^3\kern-.14em S_1)}(0) \right)
\ \ \ \ ,\ \ \ \
\tilde Y_0\ =\
\kappa_1 \ \left( 1\ +\ {\gamma M\over 4\pi}{\cal A}_{-1}^{({}^1\kern-.14em S_0)}(0) \right)
\ \ \ ,
\label{xy}
\end{equation}
where ${\cal A}_{-1}^{({}^1\kern-.14em S_0)}(p)$ is the leading, order $Q^{-1}$,
contribution to the nucleon-nucleon scattering amplitude in the ${}^1\kern-.14em S_0$ channel
at center of
mass momentum $p$. The scattering length is related to the nucleon-nucleon
scattering amplitude at zero momentum
\begin{eqnarray}
{\cal A}^{({}^1\kern-.14em S_0)}(0) = -{4\pi\over M} a^{({}^1\kern-.14em S_0)}
\ \ \ ,
\label{sl}
\end{eqnarray}
and the experimental value for the $^1S_0$ scattering length is,
$a^{({}^1\kern-.14em S_0)}=-23.714 \pm 0.013~ {\rm fm}$.
An analogous expression holds in the ${}^3\kern-.14em S_1$ channel.
At leading order,
${\cal A}_{-1}^{({}^1\kern-.14em S_0)}(0) = -4 \pi a^{({}^1\kern-.14em S_0)}/M$ and
${\cal A}_{-1}^{({}^3\kern-.14em S_1)}(0) =-4\pi/ M \gamma$.
Using this in eq. (\ref{xy}) gives,
\begin{equation}
\tilde X_0\ =\ 0
\ \ \ \ ,\ \ \ \
\tilde Y_0\ =\
\kappa_1 \ \left( 1\ -\ \gamma a^{({}^1\kern-.14em S_0)} \right)
\ \ \ ,
\label{leading}
\end{equation}
which implies the radiative capture cross section,
\begin{equation}
\label{eq:leading}
\sigma^{LO}={8\pi\alpha\gamma^5\kappa_1^2 (a^{({}^1\kern-.14em S_0)})^2 \over |{\bf v}| M^5}
\left(1-{1 \over \gamma a^{({}^1\kern-.14em S_0)}} \right)^2
\ =\ 297.2\ {\rm mb}
\ \ \ .
\end{equation}
This agrees with the results of Bethe and Longmire~\cite{BLa,Noyes}
when terms in their expression involving the effective range are neglected.
Eq.~(\ref{eq:leading}) is about $10\%$ less than the experimental value,
$\sigma^{\rm expt} = 334.2\pm 0.5\ {\rm mb}$\cite{CWCa}.
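The number in eq.~(\ref{eq:leading}) can be reproduced by evaluating the cross section formula in natural units and converting to millibarns. A sketch, assuming $\hbar c=197.327~{\rm MeV\,fm}$, $\alpha=1/137.036$ and the neutron speed $|{\bf v}|=2200~{\rm m/s}$:

```python
import math

hbarc = 197.327          # MeV fm, for unit conversion
alpha = 1 / 137.036      # fine-structure constant
M, B = 939.0, 2.225      # nucleon mass, deuteron binding energy (MeV)
kappa1 = 2.35294         # isovector nucleon magnetic moment
a1S0 = -23.714 / hbarc   # 1S0 scattering length, MeV^-1
v = 2200.0 / 2.9979e8    # neutron speed in units of c

gamma = math.sqrt(M * B)
# Eq. (eq:leading): sigma^LO in MeV^-2
sigma = (8 * math.pi * alpha * gamma**5 * kappa1**2 * a1S0**2
         / (v * M**5)) * (1 - 1 / (gamma * a1S0))**2
sigma_mb = sigma * hbarc**2 * 10        # 1 fm^2 = 10 mb
print(sigma_mb)   # ~297 mb
```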
Because $ \tilde X_0$ vanishes we only need compute $\tilde Y_1$
to obtain the cross section at NLO.
\begin{figure}[t]
\centerline{{\epsfxsize=3.0in \epsfbox{npstrong_subc2.eps}} }
\noindent
\caption{\it Graphs contributing to the amplitude
for $n+p\rightarrow d+\gamma$ at subleading order
due to the insertion of the $C_2$ and $D_2$ operators.
The solid lines denote nucleons
and the wavy lines denote photons.
The light solid circles correspond to the nucleon magnetic
moment coupling of the photon.
The solid square denotes either a $C_2$ operator or a $D_2$
operator. The crossed circle represents an insertion of the deuteron
interpolating
field. The last graph denotes the contribution from wavefunction
renormalization.
}
\label{fig:strongsubc2}
\vskip .2in
\end{figure}
\begin{figure}[t]
\centerline{{\epsfxsize=3.5in \epsfbox{npstrong_subpiB.eps}} }
\noindent
\caption{\it Graphs contributing to the amplitude
for $n+p\rightarrow d+\gamma$ at subleading order
due to the exchange of a potential pion with the photon
coupling to the magnetic moment of the nucleons.
The solid lines denote nucleons
and the wavy lines denote photons. The dashed line denotes a pion.
The light solid circles correspond to the nucleon magnetic
moment coupling of the photon.
The crossed circle represents an insertion of the deuteron interpolating field.
The last graph denotes the contribution from wavefunction
renormalization. }
\label{fig:strongsubpiB}
\vskip .2in
\end{figure}
\begin{figure}[t]
\centerline{{\epsfxsize=3.0in \epsfbox{npstrong_subpiE.eps}} }
\noindent
\caption{\it Graphs contributing to the amplitude
for $n+p\rightarrow d+\gamma$ at subleading order
due to meson exchange currents.
The solid lines denote nucleons
and the wavy lines denote photons. The dashed line denotes a pion.
The dark solid circles correspond to minimal coupling of the photon.
The crossed circle represents an insertion of the deuteron interpolating
field. }
\label{fig:strongsubpiE}
\vskip .2in
\end{figure}
\begin{figure}[t]
\centerline{{\epsfxsize=2.0in \epsfbox{npstrong_subL1.eps}} }
\noindent
\caption{\it Local counterterm contribution to the amplitude
for $n+p\rightarrow d+\gamma$ at NLO.
The solid lines denote nucleons
and the wavy lines denote photons.
The solid circle corresponds to an insertion of the $L_1$ operator.
The crossed circle represents an insertion of the deuteron
interpolating
field. }
\label{fig:strongsubpiL1}
\vskip .2in
\end{figure}
At NLO there are contributions from insertions of the $D_2$, $C_2$ operators
and from the exchange of potential pions, with the photon coupling both
minimally to the pions and to the nucleons via their magnetic moment.
In addition there is a contribution from the $L_1$ four-nucleon-one-photon
operator.
These can be divided into two categories, those that build up
the ${}^1\kern-.14em S_0$ scattering amplitude at NLO, ${\cal A}_0^{({}^1\kern-.14em S_0)}$, and those that do
not. Writing, $\tilde Y_1= \tilde Y^{(\rm rescatt)}+\tilde Y^{(\rm C_2)}+
\tilde Y^{(\rm \pi, B)}+\tilde Y^{(\rm \pi, E)}+\tilde Y^{(\rm L_1)}$,
we find the graphs in
Figs.~(\ref{fig:strongsubc2}), (\ref{fig:strongsubpiB}),
(\ref{fig:strongsubpiE}) and (\ref{fig:strongsubpiL1})
give contributions
\begin{eqnarray}
\tilde Y^{(\rm rescatt)} & = & \kappa_1 {\gamma M\over 4\pi} {\cal
A}_0^{({}^1\kern-.14em S_0)}(0)
\nonumber\\
\tilde Y^{(\rm C_2)} & = &
-\kappa_1 {\gamma^2\over 2} \left[ {C_2^{({}^1\kern-.14em S_0)}+C_2^{({}^3\kern-.14em S_1)}\over
\bar C_0^{({}^1\kern-.14em S_0)}
\bar C_0^{({}^3\kern-.14em S_1)}}{\cal A}_{-1}^{({}^1\kern-.14em S_0)}(0)
\ +\
{2\over \gamma} {C_2^{({}^3\kern-.14em S_1)} (\mu-\gamma)\over \bar C_0^{({}^3\kern-.14em S_1)}}
\left( 1\ +\ {\gamma M\over 4\pi}{\cal A}_{-1}^{({}^1\kern-.14em S_0)}(0) \right)
\right]
\nonumber\\
\tilde Y^{(\rm \pi, B)} & = & \kappa_1 {g_A^2 M \gamma \over 8\pi f^2}
\left({m_\pi-2\gamma\over m_\pi+2\gamma}
\ +\
{M m_\pi\over 4\pi}{\cal A}_{-1}^{({}^1\kern-.14em S_0)}(0)
\left( {m_\pi\over\gamma}{\rm ln}\left(1+2{\gamma\over m_\pi}\right)
- 2 {m_\pi+\gamma\over m_\pi+2\gamma}\right)\right)
\nonumber\\
\tilde Y^{(\rm \pi, E)} & = & {g_A^2 M \gamma ^2 \over 12\pi f^2}
\left( {m_\pi-\gamma\over (m_\pi+\gamma)^2}
\ +\
{ M\over 4\pi}{\cal A}_{-1}^{({}^1\kern-.14em S_0)}(0)
\left( { 3m_\pi-\gamma\over 2(m_\pi+\gamma)} +
{\rm ln}\left({m_\pi+\gamma\over\mu}\right)
- {1\over 6} + \delta \right)\right)
\nonumber\\
\tilde Y^{(\rm L_1)} & = & L_1\ \gamma^2\ { {\cal
A}_{-1}^{({}^1\kern-.14em S_0)}(0)\over \bar C_0^{({}^1\kern-.14em S_0)}
\bar C_0^{({}^3\kern-.14em S_1)}}
\ \ \ .
\label{ys}
\end{eqnarray}
The NLO ${}^1\kern-.14em S_0$ scattering amplitude ${\cal A}_0^{({}^1\kern-.14em S_0)}$ was given in the
previous section.
Notice that the meson exchange current contribution
$ \tilde Y^{(\rm \pi, E)}$ depends upon the renormalization scale $\mu$.
The graphs contributing to this term have a pole at $D=4$ and
require a subtraction. This is the reason for the
logarithmic $\mu$-dependence. It is
canceled by the $\mu$-dependence of the constant $L_1$ so that $\tilde Y_1$
is independent of $\mu$.
It is interesting to note that the contribution from the $C_2$ operators,
$\tilde Y^{(\rm C_2)}$, is not $\mu$ independent either.
This explicit $\mu$-dependence is also canceled by the $\mu$-dependence of $L_1$.
The renormalization
group equation for the subtraction point dependence of $L_1$ is
\begin{equation}
\mu {d\over d\mu}
\left[ { L_1 - {1\over 2}\kappa_1 \left( C_2^{({}^1\kern-.14em S_0)} +
C_2^{({}^3\kern-.14em S_1)}\right)\over
\bar C_0^{({}^1\kern-.14em S_0)} \bar C_0^{({}^3\kern-.14em S_1)}}\right]
={g_A^2 M^2\over 48\pi^2 f^2}.
\end{equation}
Note that this is quite different from the renormalization group equation,
\begin{equation}
\mu {d\over d\mu}
\left[ { L_2 \over (\bar C_0^{({}^3\kern-.14em S_1)})^2} \right]
=0,
\end{equation}
that $L_2$ satisfies.
There is a NLO contribution that we have not explicitly included,
a one-loop correction to the magnetic moments of the nucleons that is of
order $m_{\pi}/(4 \pi f^2)$. However, by using the value $\kappa_1=2.35294$,
which follows from the measured values of the neutron and proton magnetic
moments, in $\tilde Y_0$, this effect has been taken into account. Similarly,
using the measured value for $a^{({}^1\kern-.14em S_0)}$ in
eq. (\ref{leading}) includes the effects of $\tilde Y^{(\rm rescatt)}$.
Demanding that the NLO expression for $\tilde Y$ give the measured
cross section implies that,
\begin{equation}
\tilde Y^{(\rm C_2)}+\tilde Y^{(\rm \pi, B)}+\tilde Y^{(\rm \pi, E)}+
\tilde Y^{(\rm L_1)}~=~0.92.
\label{solve}
\end{equation}
Using $\mu=m_{\pi}$ in eq.~(\ref{ys}), with $\delta$
given by eq.~(\ref{scheme}) and
${\cal A}_{-1}^{({}^1\kern-.14em S_0)}=-4 \pi a^{({}^1\kern-.14em S_0)}/M$, yields
$\tilde Y^{(\rm C_2)}=0.38$, $\tilde Y^{(\rm \pi, B)}=-0.33$, and
$\tilde Y^{(\rm \pi, E)}=0.60$. Note that for
$\tilde Y^{(\rm C_2)}$ there is
a significant cancellation between the two terms in the square brackets
of eq.~(\ref{ys}).
About two thirds of the discrepancy
between the measured cross section and the leading order expression
is made up from the meson exchange current contribution. Most of the rest
comes from $\tilde Y^{(\rm L_1)}$. Eq.~(\ref{solve}) implies that
\begin{eqnarray}
L_1 (m_\pi) = 1.63\ {\rm fm^4}
\ \ \ .
\end{eqnarray}
The value of $L_1$ is quite sensitive
to the precise way that the poles at $D=4$ are
handled. For example if $\overline{MS}$ is used ({\it i.e.}
$\delta=-1/2$) then $\tilde Y^{(\rm \pi, E)}=0.37$ which gives
$L_1 (m_\pi) = 3.03\ {\rm fm^4}$.
With $L_1$ determined, all the counterterms in the strong and
electromagnetic sector that occur at next-to-leading order in the
effective field theory $Q$ expansion are known.
It is interesting to see that in this framework there is nothing special about
meson exchange currents. They are simply one of several contributions at
NLO, occurring along with the strong interaction
corrections to diagrams where the photon couples to the
nucleon magnetic moments.
\section{Concluding Remarks}
We have computed the cross section for the radiative capture
process $n+p\rightarrow d+\gamma$.
At leading order we recover the effective range theory result (when the terms
involving $r_0$ are neglected)
which is about $10\%$ smaller than the measured
cross section.
At NLO there are contributions from perturbative insertions of the $C_2$
operators, the $D_2$ operators, potential pion exchanges and from a previously
unconstrained four-nucleon-one-photon counterterm with coefficient
$L_1$.
In order to reproduce the measured cross section,
$\sigma^{\rm expt}$, we find that $L_1 (m_\pi) = 1.65\ {\rm fm^4}$.
In more traditional approaches,
meson exchange currents are required to explain the value of
$\sigma^{\rm expt}$. In effective field theory, the meson exchange current
graphs are divergent and require regularization. As a result, their
contribution to the cross section is not unique and depends upon the choice of
regularization scheme. In addition, a local counterterm is required to absorb
these divergences and its value is {\it a priori} unknown and scheme dependent.
Having determined the value of $L_1$ from the radiative capture cross
section, other processes
arising from electromagnetic interactions such as deuteron breakup
$e+ d\rightarrow e^\prime + n + p$ can be computed at NLO. Work on this is
in progress.
\vskip 0.5in
We would like to thank Jiunn-Wei Chen and David Kaplan for several discussions.
This work is supported in part by the U.S. Dept. of Energy under
Grants No. DE-FG03-97ER4014, DE-FG02-96ER40945 and DE-FG03-92-ER40701.
KS acknowledges support from the NSF under a Graduate Research Fellowship.
\section{Introduction}
Bacterial colony formation has been the subject of recent
studies performed by both
microbiologists\cite{Sha88,AH91,BB91,Sha95,BE95}
and physicists\cite{mat89,mat90,bj92,mat93,bj94,nature,BCSALT95,PRL2}
with the aim to
acquire a deeper understanding of the collective behavior of
unicellular organisms.
Since bacterial colonies can consist of up to $10^{10}$
microorganisms,
one can distinguish several length and time scales
when describing their development.
At the largest, macroscopic scale (above ca.\ $10^{-3}$~m) one is
concerned with the morphology of the whole colony, which can often be
described in terms of fractal geometry \cite{mat89,mat93,nature}.
At this length scale, the {\it physical} laws of diffusion
govern colony formation, and only certain details of the
underlying microscopic dynamics (such as
anisotropy \cite{anis}) can be amplified up to this scale by various
instabilities. Thus these
large-scale features have been successfully understood using
relatively simple models which neglect most of the microscopic
details. In contrast, when looking at the microscopic length
scale of individual bacteria (ca.\ $10^{-6}$ m), such global physical
constraints play a much smaller role and it is the {\it biology} which
provides us with the knowledge of how the microorganisms move,
multiply and communicate. In this work we focus on the
intermediate
length and time scales which range from $10^{-5}$~m to $10^{-3}$~m and
from $1$~s to $10^2$~s. In this regime the proliferation of the
bacteria can be neglected,
and the most striking observable
phenomena are connected to the motion of the organisms.
Bacterial {\it swimming} is well understood in liquid cultures
\cite{Adler,Berg,CHT}, where
the interaction between cells is usually negligible and the
trajectory of each organism can be described as a biased random walk
towards the higher concentration of the chemoattractants, e.g.,
certain nutrients. In fact, the bacterial motion consists of
straight sections separated by short periods of random rotation
called tumbling, when the flagella operate in a quite
uncorrelated manner. If an increase of the local
concentration of the chemoattractant is detected, bacteria delay
tumbling and continue to swim in the same direction. This behavior
yields
the above mentioned bias towards the chemoattractant. Recent
experimental observations \cite{BB91,nature,BE95} revealed that under
stress conditions even chemotactic communication occurs in
colonies: chemoattractants or chemorepellents are emitted by the
bacteria
themself, thus controlling the development of various self-organized
structures\cite{BCSALT95}.
However, numerous bacterial strains like {\it Proteus
mirabilis, Serratia marcescens} \cite{AH91}, {\it Bacillus
circulans, Archangium violaceum, Chondromyces apiculatus,
Clostridium tetani} \cite{film} or {\it Bacillus
subtilis} \cite{bj94} are also able to migrate on surfaces to
ensure their survival under hostile environmental conditions.
In all of these examples, irrespective of the cell membrane
(Gram $+$ or $-$) and the type of locomotion ({\it swarming} or
{\it gliding}), a significantly {\it coordinated} motion can be
observed: the randomness, which is a characteristic feature of
the {\it swimming} of an individual organism, is rather repressed.
In particular the {\it swarmer}
cells of the strain {\it Proteus} are very elongated
(up to 50$\mu$m) and move parallel to each other \cite{AH91}.
The strain {\it Bacillus circulans} obtained its name from the
very complex flow trajectories it exhibits, which include rotating
droplets or rings made up of thousands of bacteria \cite{film,czPRE}.
To some extent similar behavior can be observed in each of the
above listed examples, which naturally raises the long-standing
question as to whether this form of migration requires some external
(or
self-generated) chemotactic signal or whether simple local cell-cell
interactions are sufficient to explain such correlated behavior.
One of the purposes of this paper is to show that the answer to such
questions requires taking into account that the bacteria are
far-from-equilibrium, self-propelled systems.
Recent studies have
revealed that such systems show interesting and unexpected features:
in various traffic models
spontaneous jams appear \cite{Nag93,Ker93} and there is a strong
dependence on
quenched disorder \cite{Csa94}.
Many biological systems (such as swarming bacteria, schools of fish,
birds, ants, etc.) can be described as special self-propelled systems
with the local velocity-velocity interaction introduced by Vicsek {\it et
al.}
\cite{Novel} and also studied in a lattice gas model \cite{Csz95}.
Both of these models revealed a transition from a disordered to an ordered
phase.
Although most of the models above incorporate some level of
discreteness, a continuum approach has also been applied to study the
traffic
flow problem \cite{Ker93} or to describe the geotactic motion of
bacteria
\cite{Chi81}.
Toner and Tu have shown using renormalization group theory
that in a closely related continuum model \cite{Tu95}
long-range order can develop even in two dimensions; this is
not possible in the equivalent equilibrium models.
In this paper we construct a
continuum description for the particularly
interesting collective motion
in bacterial colonies.
First we introduce our model, then we give a theoretical analysis of
the problem. In Section\ \ref{sec_numres} we present the numerical
results
and finally in Section\ \ref{sec_concl} we draw our conclusions.
\section {The Model}
When migrating on surfaces, bacteria usually form a very thin film
consisting only of a single or a few layers of cells, thus their
motion can be regarded as quasi two-dimensional. Due to the
small length scale, the very small Reynolds number yields an
overdamped
dynamics in which the thermal fluctuations (Brownian noise)
and the various surface effects (e.g., surface tension, wetting)
dominate the motion.
The constraints of the cell-cell interaction must also
be taken into account since the elongated migrating bacteria usually
align
parallel to each other, which results in an
{\it effective viscosity} that tends to homogenize the velocities as
we
show later.
Bacteria also tend to move close to the substrate, which
manifests itself as an {\it effective gravity}, flattening the colony.
The particles of the ``bacterial fluid'' (i.e., the colony)
are bacteria surrounded by a droplet of moisture
(extracellular fluid, see Fig.\ \ref{bacifluid}) which is essential
for them
both to move and to
extract food from the substrate.
The volume ($V^*$) and the mass ($m^*$) of such a particle
are considered to be constant, which gives a constant (3D)
density $\varrho ^* = m^*/V^*$.
A typical bacterial colony grown on a surface can be considered as
flat,
so our fluid will be characterized by
the 2D {\it number} density $\varrho ({\bf r},t)$
and velocity ${\bf v} ({\bf r},t)$ fields, where ${\bf r}\in {\rm
R}^2$
and $t$ is the time.
The number density is simply related to the local height of the
colony $h$ by
$ h({\bf r},t) = \varrho ({\bf r},t) V^*$.
\begin{figure}
\centerline{\psfig{figure=n2.eps,width=9cm}}
\caption{Schematic cross-section of a bacterial colony.}
\label{bacifluid}
\end{figure}
In contrast to fluid mechanics, here the fluid element consists of
only a few ``atoms''. Thus random fluctuations do not cancel
out and they have to be taken into account by an additional noise
term in the equation of motion for the velocity field.
The two basic equations governing the dynamics are
the continuity equation
\begin{equation}\label{cont0}
\partial_t \varrho + \nabla (\varrho {\bf v})= 0
\end{equation}
and the equation of motion, which is a suitable form of the
Navier-Stokes
equation
\begin{eqnarray}\label{eom0}
\partial_t {\bf v} + ({\bf v}\cdot \nabla) {\bf v} &=&
-{1\over \varrho^*} \nabla p + \nu \nabla^2 {\bf v} +
{1\over\varrho^*}{\bf F}({\bf v}) - {1\over\tau} {\bf v} \nonumber\\
&&+{\bf \eta}({\bf x},t),
\end{eqnarray}
where $p$ is the pressure, $\nu$ is the kinematic viscosity,
${\bf F}({\bf v})$ is the {\it intrinsic} driving force of biological
origin,
$\tau$ is the time scale associated with substrate friction
and ${\bf \eta}({\bf x},t)$ is an uncorrelated fluctuating force
with zero mean and variance $\sigma$ representing the random
fluctuations.
Let us now go through the terms of Eq.\ \ref{eom0}.
The pressure is composed of the effective hydrostatic pressure,
the capillary pressure, and the externally applied pressure
\begin{equation}\label{press}
p = \varrho^* g h + \gamma \nabla^2 h + p_{\rm ext}.
\end{equation}
If the radius of the surface curvature is larger
than the capillary length
\begin{equation}\label{lcap}
l_{\rm cap} = \sqrt{\gamma\over {\varrho^* g}}
\end{equation}
then the capillary pressure can be neglected.
Since this is normally the case ($l_{\rm cap}\approx 3$ mm for
water),
we consider
only the first term of Eq.\ \ref{press} in the rest of the paper.
The surface tension becomes relevant only at the boundary of
the colony where the local curvature is not negligible.
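As a quick numerical illustration of eq.~(\ref{lcap}), the quoted $l_{\rm cap}\approx 3$~mm for water indeed follows from standard values of the surface tension and density (these numbers are our assumption, not given in the text):

```python
import math

# Standard water values (assumed for illustration, not from the text)
gamma = 0.072    # N/m, surface tension of water at room temperature
rho   = 1000.0   # kg/m^3, density of water
g     = 9.81     # m/s^2, gravitational acceleration

l_cap = math.sqrt(gamma / (rho * g))   # capillary length, Eq. (lcap)
print(l_cap)                           # roughly 2.7e-3 m, i.e. about 3 mm
```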
The viscous term in Eq.\ \ref{eom0} represents not only the viscosity
of
the extracellular fluid, but also incorporates the local ordering of
the cells. As we show in Appendix A for a simple hard rod model of the
cell-cell interaction, the deterministic part of the dynamics of
the orientational ordering is described in a similar manner
to \cite{Novel}:
\begin{equation}
{{\rm d}\theta_i\over{\rm d}
t}=\mu\Bigl(\langle\theta\rangle_\epsilon-\theta_i\Bigr),
\label{orient}
\end{equation}
where the orientation of the $i$th rod is denoted by $0<\theta_i<\pi$
and $\langle\cdot\rangle_\epsilon$ denotes spatial averaging over a
ball
of radius $\epsilon$.
The angle $\theta$ can be readily replaced by ${\bf e}_\theta$,
a unit vector at angle $\theta$ to the $x$ axis.
If the changes in the magnitude of the velocity are small
then ${\bf e}_\theta$ can be replaced by ${{\bf v}}$ and
Eq.\ \ref{orient} yields an interaction term proportional to
$\langle {{\bf v}} \rangle_{\epsilon} - {{\bf v}}$.
Taking Taylor series expansions for the velocity
and the density fields yields
\begin{eqnarray}
\langle {{\bf v}} \rangle_{\epsilon} - {\bf v} &=&
{
\int_{\vert {\bf \xi} \vert < \epsilon} \hbox{d} {\bf \xi} \Bigl(
{{\bf v}}\varrho+
({\bf \xi}\nabla) {{\bf v}} \varrho +{1\over 2}({\bf \xi}\nabla)^2
{{\bf v}}\varrho
+ \dots \Bigr)
\over
\int_{\vert {\bf \xi} \vert < \epsilon} \hbox{d} {\bf \xi} \Bigl(
\varrho + ({\bf \xi}\nabla)\varrho +{1\over 2}({\bf
\xi}\nabla)^2\varrho
+ \dots \Bigr) } - {\bf v} \nonumber\cr
& =& {\epsilon^2\over 6}\Bigl(
{\nabla^2 ({{\bf v}}\varrho)\over \varrho} -
{{\bf v}}{\nabla^2 \varrho \over \varrho} \Bigr) + \dots \nonumber\cr
&=&
{\epsilon^2\over 6}\Bigl( \nabla^2{{\bf v}} +
2(\nabla {{\bf v}}) {\nabla\varrho\over\varrho} \Bigr) + \dots
\label{DIFF1}
\end{eqnarray}
If the density changes are small, we recover the viscous term of Eq.\
\ref{eom0},
so it, in fact, includes the self-alignment rule of previous models.
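The algebraic step between the last two lines of eq.~(\ref{DIFF1}) — expanding $\nabla^2({{\bf v}}\varrho)$ and cancelling the $\varrho$ factors — can be verified symbolically. The sketch below checks the identity in one dimension for scalar fields, which captures the term-by-term content of the expansion:

```python
import sympy as sp

x = sp.symbols('x')
rho = sp.Function('rho')(x)   # density field
v = sp.Function('v')(x)       # velocity field (scalar, 1D check)

# middle line of Eq. (DIFF1): nabla^2(v*rho)/rho - v*nabla^2(rho)/rho
lhs = sp.diff(v * rho, x, 2) / rho - v * sp.diff(rho, x, 2) / rho
# last line of Eq. (DIFF1): nabla^2 v + 2*(nabla v)*(nabla rho)/rho
rhs = sp.diff(v, x, 2) + 2 * sp.diff(v, x) * sp.diff(rho, x) / rho

print(sp.simplify(lhs - rhs))   # the two expressions agree identically
```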
Bacteria tend to maintain their motion continuously
by propelling with their flagellas. This rather complex behavior can
be taken
into account as a constant magnitude force acting
in the direction of their velocity
\begin{equation}
{\bf F} = \varrho^* {c\over\tau}{{{\bf v}}\over{\vert {{\bf v}}
\vert}},
\label{sdrive}
\end{equation}
where $c$ is the speed determined by the balance of the propulsion
and friction forces. That is, $c$ would be the speed of a homogeneous
fluid.
Combining Eqs.\ \ref{cont0}, \ref{eom0}, \ref{press} and
\ref{sdrive} we obtain the following final form for the equations
of the bacterial flow:
\begin{equation}
\label{cont}
\partial_t h + \nabla (h {\bf v})= 0,
\end{equation}
and
\begin{eqnarray}
\label{eom}
\partial_t {\bf v} + ({\bf v}\cdot \nabla) {\bf v} &=& -{g} \nabla h -
{1\over\varrho^*}\nabla p_{\rm ext} +
\nu \nabla^2 {\bf v} \nonumber\\
&&+{c\over\tau}{{{\bf v}}\over{\vert {{\bf v}} \vert}} - {1\over\tau}
{\bf v} + {\bf \eta}.
\end{eqnarray}
These equations are similar to those studied in \cite{Tu95} but
here we derived them from plausible assumptions based on the
underlying
microscopic dynamics instead of phenomenological concepts.
Nevertheless, the analysis of Toner and Tu also holds for our
equations
which permits us to use our model as a test of their prediction of
the existence of a phase transition.
\section{Analytical results}
For certain simple geometries of the boundary condition
it is possible to obtain analytical solutions for the noiseless
($\sigma=0$) stationary state
of our model if we suppose incompressibility ($h=const.$).
Taking $\partial_t \equiv 0$ in Eqs.\ \ref{cont} and \ref{eom},
the following dimensionless equations are obtained:
\begin{equation}
\nabla' {{\bf v}}' = 0
\label{cont2}
\end{equation}
and
\begin{equation}
{{c \tau}\over{\lambda}}
({\bf v}' \nabla') {\bf v}' =
-{1\over{\varrho^* \lambda}}\nabla p_{\rm ext} + {\nabla}'^2 {\bf v}'
+
{{\bf v}'\over{\vert {{\bf v}}' \vert}} - {{{\bf v}}}',
\label{eom2}
\end{equation}
where ${{\bf v}}'={{\bf v}}/c$, $\lambda=\sqrt{\nu\tau}$
and the $\nabla'$ operator differentiates
with respect to ${\bf r}'={\bf r}/\lambda$.
For the sake of simplicity we drop the prime in the rest of this
section.
First let us consider the simplest geometry when the system is defined
on an infinite plane. In this case the stationary state is
trivially
\begin{equation}
\vert {{\bf v}}({\bf r}) \vert = 1,
\end{equation}
where the orientation of ${{\bf v}}$ is arbitrary, but independent of
the spatial position.
In this state there is a sustained non-zero
net flux of the fluid, so it is regarded as {\it ordered}.
In the next section we examine numerically how
increasing the noise level yields a disordered state where there is
no net flux present.
Another simple, but practically more relevant boundary condition is
realized when the system is confined to a circular area of radius $R$.
In contrast to the previous example, this is a finite geometry
which in turn means that in the stationary
state no net flux is possible. Thus the flow field must include
vortices and we
show that a single vortex is indeed
a possible stationary configuration of the system.
Let us assume that the velocity field,
expressed in polar coordinates $(r,\phi)$,
is a function only of $r$:
\begin{equation}
{{\bf v}} = v(r)~{\bf e}_\phi,
\label{vpdef}
\end{equation}
where ${\bf e}_\phi$ is the tangential unit vector.
Taking the expression of the nabla operator in polar coordinates,
one can easily see that Eq.\ \ref{cont2} is satisfied for any
velocity profile $v(r)$.
Substituting Eq.\ \ref{vpdef} into Eq.\ \ref{eom2} yields two
ordinary differential
equations:
\begin{equation}
0= r^2 {{{\rm d}^2 v}\over {\rm d} r^2} + r {{{\rm d} v}\over {\rm d}
r}
- v (1+r^2) + r^2,
\end{equation}
and
\begin{equation}
-{1\over r^2}v^2 =
-{1\over{\varrho^* \lambda}}{{\partial p_{\rm ext}}\over \partial r}.
\end{equation}
The first equation gives the velocity profile while the second
determines the pressure to be applied for maintaining the constant
height
($h=$const.) condition.
The boundary conditions for the velocity profile are either
$v(0)=0$ and $v(R')=0$ (closed boundary at $r=R'=R/\lambda$)
or $v(0)=0$ and ${{{\rm d} v}\over{{\rm d} r}}(R')=0$ (free boundary
at $r=R'$).
The homogeneous solution of the equation of motion with
the above boundary conditions is
given by $I_1(r)$, the modified Bessel function of order one.
The particular solution is
$-{\pi\over 2}L_1(r)$, where $L_1$ is the modified Struve function of
order one \cite{Stegun}.
Thus the velocity profile of the single vortex
stationary state in a noiseless system with circular boundary is
\begin{equation}
v(r) = \alpha I_1(r) -{\pi\over 2}L_1(r).
\label{vp}
\end{equation}
The parameter $\alpha$ should be chosen to satisfy
the boundary condition at $r=R'$.
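The profile of eq.~(\ref{vp}) is straightforward to evaluate with standard special-function libraries. The sketch below fixes $\alpha$ for the closed boundary condition at an illustrative $R'=4$ (our choice of value) and checks that both boundary conditions are satisfied:

```python
import numpy as np
from scipy.special import iv, modstruve

# Velocity profile of Eq. (vp): v(r) = alpha*I_1(r) - (pi/2)*L_1(r),
# with alpha fixed by the closed boundary condition v(R') = 0.
Rp = 4.0   # illustrative choice of R' = R/lambda
alpha = (np.pi / 2) * modstruve(1, Rp) / iv(1, Rp)

def v(r):
    return alpha * iv(1, r) - (np.pi / 2) * modstruve(1, r)

r = np.linspace(0.0, Rp, 201)
profile = v(r)   # single-vortex profile, vanishing at r = 0 and r = R'
```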
In Fig.\ \ref{velprofs} we show velocity profiles
for different values of $\lambda$ having $R=1$ fixed and
imposing closed boundary condition.
The maximal velocity decreases for decreasing $R/\lambda$ ratio
(Fig.\ \ref{vmax}); thus the system behaves more like the usual
(not self-propelled) systems where $v(r)\equiv 0$.
In this respect $R/\lambda$ is a measure of
the self-propulsion. From Fig.\ \ref{velprofs} it is also clear
that the minimal size
of a vortex is $\lambda$ so if $R/\lambda\gg 1$ then
many vortices are likely to be present in the system.
This phenomenon is analogous to the appearance of turbulent
motion in fluid dynamics and thus $R/\lambda$ can be regarded as a
quantity
analogous to the Reynolds number:
\begin{equation}
{\rm Re}' = {R\over\lambda}.
\end{equation}
Another dimensionless quantity characterizing the flow
is $c\tau/\lambda$, which gives the relative strength of
the inertial forces compared to the viscous ones. This quantity
can be regarded as an internal Reynolds number
\begin{equation}
{\rm Re}'' = {c\tau\over\lambda}.
\end{equation}
In our
simulations we always had ${\rm Re}'' \ll {\rm Re}'$.
\begin{figure}
\centerline{\psfig{figure=f2.eps,width=9cm}}
\vspace{0.5cm}
\caption{Velocity profiles in a vortex
for various values of ${\rm Re}'$ (bottom curve: ${\rm Re}'=1$,
top curve: ${\rm Re}'=16$).}
\label{velprofs}
\end{figure}
\begin{figure}
\centerline{\psfig{figure=f2max.eps,width=8cm}}
\vspace{0.5cm}
\caption{Position (solid line) and value (dashed line)
of the maximal velocity for the profiles on
\protect Fig.\ \ref{velprofs}
as a function of ${\rm Re}'$. At the maximal velocity, open boundary
condition
(${\rm d} v /{\rm d} r =0$) is satisfied.}
\label{vmax}
\end{figure}
\section{Numerical results}
\label{sec_numres}
For further investigations we used numerical solutions of Eq.\
\ref{cont}
and Eq.\ \ref{eom}.
To study the closed circular geometry we implemented our model
on a hexagonal region of a triangular lattice with closed boundary
conditions. We used a simple explicit integration scheme to solve
numerically
the equations.
We started our simulations
from a uniform density and a random velocity distribution.
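A minimal version of such an explicit scheme can be sketched as follows. For simplicity the sketch uses a square periodic grid with an Euler--Maruyama step, whereas the simulations in the text use a triangular lattice with closed boundaries, so this illustrates the update rule only; all parameter values are arbitrary choices of ours. It also evaluates the order parameter $\Phi=\langle\vert{\bf v}\vert\rangle$ defined in the planar-geometry study below.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dx, dt = 32, 1.0, 0.01                      # grid size and steps (arbitrary)
g, nu, c, tau, sigma = 5.0, 1.0, 1.0, 1.0, 0.05  # illustrative parameters

h = np.ones((N, N))                            # uniform initial density
v = 0.1 * rng.standard_normal((2, N, N))       # random initial velocities

def ddx(f, ax):                                # central difference, periodic
    return (np.roll(f, -1, axis=ax) - np.roll(f, 1, axis=ax)) / (2 * dx)

def lap(f):                                    # 5-point Laplacian, periodic
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

mass0 = h.sum()
for _ in range(200):
    speed = np.sqrt(v[0]**2 + v[1]**2) + 1e-12
    adv = np.stack([v[0]*ddx(v[a], 0) + v[1]*ddx(v[a], 1) for a in (0, 1)])
    grad_h = np.stack([ddx(h, 0), ddx(h, 1)])
    # deterministic part of Eq. (eom), with p_ext = 0
    dv = (-adv - g*grad_h + nu*np.stack([lap(v[0]), lap(v[1])])
          + (c/tau)*v/speed - v/tau)
    dh = -(ddx(h*v[0], 0) + ddx(h*v[1], 1))    # continuity equation, Eq. (cont)
    v += dt*dv + sigma*np.sqrt(dt)*rng.standard_normal(v.shape)
    h += dt*dh

Phi = np.mean(np.sqrt(v[0]**2 + v[1]**2))      # order parameter <|v|>
```

By construction the periodic flux-form continuity update conserves the total "mass" $\sum h$ exactly, which provides a convenient consistency check on the scheme.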
Figure\ \ref{numvort} shows the stationary state for the high
viscosity
($\lambda\simeq 3.16$)
and high compressibility ($g=750$) case, corresponding to
${\rm Re}'\simeq 4.4$.
The length and direction of the arrows show the velocity, while
the thickness is proportional to the local density of the fluid.
In Fig.\ \ref{numfit} we present the radial velocity distribution for
the vortex
shown in Fig.\ \ref{numvort} and the velocity profile given by our
calculations
(Eq.\ \ref{vp}). Rather good agreement is seen; the differences are
due to the
fact that our numerical system is not perfectly circular.
We have also performed velocity profile analysis for two vortices from
\cite{czPRE}.
The data and the fitted curves are displayed in Fig.\ \ref{bacifit}.
Again, reasonable agreement is seen which means that Eqs.\ \ref{cont}
and \ref{eom} give a correct continuous description for this
particular example of self-propelled systems.
\begin{figure}
\centerline{\psfig{figure=a04.ps,width=8cm}}
\caption{A numerically generated vortex for
${\rm Re}'\simeq 4.4$ and $g= 750$. The length of the arrows is
proportional
to the local velocity, while their thickness is proportional to the
density.}
\label{numvort}
\end{figure}
\begin{figure}
\centerline{\psfig{figure=numfit.eps,width=8cm}}
\vspace{0.5cm}
\caption{The measured (circles) and
the theoretical (solid line)
velocity profile for the vortex in \protect Fig.\ \ref{numvort}.}
\label{numfit}
\end{figure}
\begin{figure}
\centerline{\psfig{figure=bacifit.eps,width=8cm}}
\vspace{0.5cm}
\caption{Velocity profile data for two vortices taken from
\protect \cite{czPRE} (circles and squares) and
fitted profiles using \protect Eq.\ \ref{vp}.}
\label{bacifit}
\end{figure}
In the case of real bacteria one can hardly observe
a perfect vortex, so we tuned the parameters to see
the behavior of the model far from stationarity.
Lowering the value of the compressibility $g$, we observed temporally
periodic structures instead of a constant velocity profile vortex.
Figure\ \ref{wash} shows such a configuration.
The density at the lower
left part of the system is higher than at the top right part.
As the system evolves in time the whole flow pattern
rotates anti-clockwise, thus
this state has a non-zero net flux which oscillates in time.
Similar behavior has also been reported for certain
bacterial colonies \cite{bj94}.
\begin{figure}
\centerline{\psfig{figure=a05.ps,width=8cm}}
\caption{Non-stationary configuration at ${\rm Re}'= 4.4 , g= 300$.}
\label{wash}
\end{figure}
Another interesting case occurs when the compressibility is high and
the viscosity is low, which corresponds to high values of ${\rm Re}'$.
In this limit we observed a long lifetime multi-vortex state
(Fig.\ \ref{xxvortex}). For the parameter values we used, these
vortices
eventually disappeared, leaving behind a single vortex. We conjecture
that there exists a ${\rm Re}'_c$ above which the multi-vortex
state does not die out; this would correspond to the turbulent
state of normal hydrodynamics.
\begin{figure}
\centerline{\psfig{figure=a06.ps,width=8cm}}
\caption{Multiple vortex state at ${\rm Re}'= 25.3 , g= 128$.}
\label{xxvortex}
\end{figure}
We also performed simulations in the open planar geometry.
In this case our numerical
system was confined to a square with periodic boundary conditions.
Our goal was to find out whether a phase transition exists
as a function of the noise strength $\sigma$.
To do this we first
defined an order parameter
\begin{equation}
\Phi = \langle \vert {\bf v} \vert \rangle,
\end{equation}
where $\langle\cdot\rangle$ denotes spatial average over the entire
system.
If $\Phi=0$ there is no net current and the system
is in a disordered state. If $\Phi>0$ some level
of order is present.
We have performed long ($>10^7$ Monte Carlo steps)
runs for two different system sizes
($L=24$ and $48$) to check
for the presence of the transition. Figure\ \ref{phtrans} shows the
results obtained,
which strongly suggest that there is a transition around
$\sigma\approx 6.5$. This value is far from universal, as it depends
on
the actual values of the parameters used in the simulations.
However, the extraction of the critical exponents would require much
larger
computational efforts (i.e., larger system sizes).
\begin{figure}
\centerline{\psfig{figure=ordp.eps,width=8cm}}
\caption{The order parameter $\Phi$ as a function of
the noise strength $\sigma$.}
\label{phtrans}
\end{figure}
\section{Conclusion}
\label{sec_concl}
We have investigated a continuous model for cooperative
motion of bacteria. For simple geometries
and specific parameters we were able to obtain analytical
results for the velocity profile of vortices often observable
on intermediate length scales in various bacterial colonies.
Our results are in quantitative agreement with biological
observations \cite{film} and with previous simulations
\cite{czPRE}. We also showed that there exists a transition
\cite{Tu95}
from an ordered to a disordered phase as a function of noise
strength.
In its present form our model does not describe the
pattern formation of bacterial colonies. However, it is rather
straightforward to include surface tension and wetting effects in their
full detail \cite{Gen85} to handle the dynamics of the colony
border. Including proliferation, the presented model can also be
extended towards larger time and length scales to study the fingering
instabilities. It is also possible to introduce chemotactic response
which would allow long range interactions to build up.
Further work needs to be done to characterize the turbulent regime
(which is mostly observed in swarming colonies)
and to relate the effective parameters used in our model
to the real world quantities such as food or agar concentration.
\section{Acknowledgments}
The authors thank E. Ben-Jacob, H. Herrmann and T. Vicsek for useful
discussions and comments.
This work was supported by contracts T019299 and T4439
of the Hungarian Science Foundation
(OTKA) and by the ``A Magyar Tudom\'any\'ert'' foundation of
the Hungarian Credit Bank.
\newpage
\section{Introduction: KW duality as a 3d topological claim}
Kramers--Wannier (KW) duality in 2d statistical models can be formulated as a simple topological claim about
pictures like this one:
$$\epsfxsize 6cm \epsfbox{woodoo.eps}$$
The picture represents a 3d body (a ritual mask) made of yellow material, with the surface partially painted in red and black. For definiteness imagine that the invisible side is unpainted (i.e. completely yellow). In general we have a compact oriented yellow 3-fold $\Omega$ with the
boundary coloured in this way (in a locally nice way: the borders of
the colour stains are piecewise linear (say) and at most three of them meet at
a single point).
We choose a finite abelian group $G$ and its dual $\tilde G$. Let $y$ be the yellow
part of the boundary; it is an oriented surface with the boundary
coloured in black and red. The relative cohomology groups $H^1(y,r;G)$ and
$H^1(y,b;\tilde G)$ are mutually dual via Poincar\'e duality (in expressions like
$H^k(X,r;G)$, $r$ denotes the red part of $X$, and $b$ the black part). Let
$\rho:H^1(\Omega,r;G)\rightarrow H^1(y,r;G)$ and $\tilde\rho:H^1(\Omega,b;\tilde G)\rightarrow
H^1(y,b;\tilde G)$ be the restriction maps. According to KW duality, their images are
each other's annihilators. It is an immediate consequence of Poincar\'e duality
and of the exactness of
$$H^1(\Omega,r;G)\rightarrow H^1(y\cup r,r;G)\rightarrow H^2(\Omega,y\cup r;G).$$
In statistical models it is used in the following form: we take a function
$f$ on $H^1(y,r;G)$ (the Boltzmann weight) and compute the partition sum
\begin{equation} Z(f)=\sum_{x\in H^1(\Omega,r;G)}f(\rho(x)).\end{equation}
Let $\hat f$ denote the Fourier
transform of $f$. We can compute \begin{equation}\tilde Z(\hat f)=\sum_{x\in H^1(\Omega,b;\tilde
G)}\hat f(\tilde\rho(x)).\end{equation} KW duality says (via Poisson summation formula) that
up to an inessential factor we have $Z(f)=\tilde Z(\hat f)$.
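The abelian mechanism at work here can be illustrated in the smallest possible setting. The sketch below is a toy example of our own (with $G={\mathbb Z}_6$ and the subgroup $H=2{\mathbb Z}_6$ playing the role of the image of $\rho$): it checks the finite-group Poisson summation formula $\sum_{h\in H}f(h)=\frac{|H|}{|G|}\sum_{\chi\in H^\perp}\hat f(\chi)$ underlying $Z(f)=\tilde Z(\hat f)$.

```python
import numpy as np

n = 6                                    # G = Z_6 (toy choice)
rng = np.random.default_rng(1)
f = rng.random(n)                        # a "Boltzmann weight" on G

H = [0, 2, 4]                            # subgroup 2*Z_6
# annihilator H_perp: characters k with exp(2*pi*i*k*h/n) = 1 for all h in H
H_perp = [k for k in range(n)
          if all((k * h) % n == 0 for h in H)]

# discrete Fourier transform of f over G
fhat = np.array([sum(f[g] * np.exp(2j * np.pi * k * g / n) for g in range(n))
                 for k in range(n)])

Z = sum(f[h] for h in H)                            # "Z(f)"
Ztilde = (len(H) / n) * sum(fhat[k] for k in H_perp)  # "Z~(f^)" up to a factor
print(Z, Ztilde)                         # the two sums coincide
```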
Let us stop to make a connection with more usual formulations. Notice that an element
of $H^1(X,Y;G)$ is the same as (the isomorphism class of) a principal $G$-bundle
over $X$ with a given section over $Y\subset X$. If $\Omega$ is a 3d ball (with
coloured surface), an element of $H^1(\Omega,r;G)$ is therefore specified by
choosing an element of $G$ for each red stain. We may imagine that there is a
$G$-valued spin sitting at each such stain and, to compute (1), we take the sum
over all their values (we overcount $|G|$ times, but it is inessential). According to KW
duality, the same result can be obtained by summing over $\tilde
G$-spins at the black stains. The spins at red (or black) stains interact through the yellow stains. If all the yellow stains are as those visible on the picture (disks with two red and two black neighbours), we have the usual two-point interactions; for disks with more neighbours we would have more-point interactions.
Finally, let us look at the picture again. It does not represent a ball and the
back yellow stain is not a disk. The Boltzmann weight for the back stain can be
understood as the specification of the boundary and periodicity conditions on
the visible surface (the $G$-bundle type together with sections over the red parts of the boundary); again there
are spins at the red stains but the neighbours of the back yellow stain are not summed
over -- they form the boundary condition.
These examples are more or less all that we need; the general case goes
beyond the usual applications, but it will come in handy when we consider non-abelian generalizations.
The KW duality described up to now is only the $(1,1)$-version. For the
$(k,l)$-version (for statistical models in $k+l$ dimensions) we consider yellow $(k+l+1)$-dimensional $\Omega$'s with $\partial \Omega$ in the
three colours as before (up to now only the combination $k+l$ enters). Instead
of $H^1(\Omega,r;G)$ and $H^1(\Omega,b;\tilde G)$ we take $H^k(\Omega,r;G)$ and
$H^l(\Omega,b;\tilde G)$. The claim and the proof of $(k,l)$-duality are as in the
$(1,1)$-case.
What are we going to do? First of all, expression (1) has the form of a very
simple topological field theory (with boundary coloured in red and black), described in the next section. We shall then
look at the non-abelian version. In the $(1,1)$-case classical models suggest
that the pair $G$, $\tilde G$ should be replaced by a pair of mutually dual quantum
groups. So we are faced with the difficult and somewhat arbitrary task of defining
and understanding quantum analogues of cohomology groups and of the Poisson summation
formula. But miraculously, none of these has to be done. Pictures alone (in the form of TFTs) solve
the problem and quantum groups appear. This suggests, of course, that this point
of view might be interesting in higher dimensions, the $(2,2)$ case -- the
electric--magnetic duality -- being of particular interest.
\section{KW TFTs and the squeezing property}
As we mentioned, expression (1) (and its generalization to $(k,l)$) has the
form of a TFT with boundary coloured in red and black. We understand TFT as defined by Atiyah \cite{Ati} and for definiteness we choose its hermitian version; nothing like central extensions is
taken into account. To each oriented yellow $(k+l)$-dim $\Sigma$ with black-and-red
boundary, we associate a non-zero finite-dimensional Hilbert space ${\cal H}(\Sigma)=L^2(H^k(\Sigma,r;G))$. And for
each $\Omega$ we have a linear form on the Hilbert space corresponding to $y$ --
the one given by (1). However, the normalization has to be changed slightly for
the gluing property to hold (this is only a technical problem): we set
\begin{equation} Z_\Omega(f)={1\over\mu(\Omega)}\sum_{x\in H^k(\Omega,r;G)}f(\rho(x))\end{equation}
and for the inner product
\begin{equation} \langle f,g\rangle=\mu(\Sigma)\sum_{x\in H^k(\Sigma,r;G)}\overline{f(x)}g(x).\end{equation}
Here
\begin{equation} \mu(\Omega)={|H^{k-1}(\Omega,r;G)||H^{k-3}(\Omega,r;G)|\dots\over
|H^{k-2}(\Omega,r;G)||H^{k-4}(\Omega,r;G)|\dots}\end{equation}
(and the same for $\Sigma$). Perhaps this $\mu$ is not a number you would like
to meet in a dark forest, but this should not hide the simplicity of the thing.
The gluing property follows from the exact sequence for the triple
$r_{glued}\subset\Omega\cup r_{glued}\subset\Omega_{glued}$ ($r_{glued}$ is the red part of
$\Omega_{glued}$; $\Omega\subset\Omega_{glued}$ is achieved by separating slightly the glued
yellow surfaces). Of course, the expression for $\mu$ was actually derived from
this sequence.
This TFT reformulation of KW duality will be our starting point for non-abelian generalizations. Let us first have a look to see if we can recover the numbers $k$ and $l$ and the group $G$ from the TFT. It is enough to take yellow $(k+l)$-dim balls
as $\Sigma$'s. The ball should be painted as follows: let us choose integers $k',l'$ such that $k'+l'=k+l$; we take a
$S^{k'-1}\subset\partial\Sigma$ and paint its tubular neighbourhood in $\partial\Sigma$ in
red; the rest (a tubular neighbourhood of a $S^{l'-1}$) is in black. Let us
denote this $\Sigma$ as $\Sigma_{k',l'}$. The corresponding Hilbert space is trivial
(equal to $\bf C$) if $k'\ne k$; if $k'=k$, it is the space of functions on $G$.
The reader may try to define the Hopf algebra structure on this space using
pictures (the $(1,1)$-case is drawn in the next section).
Our TFTs are of a rather special nature, because of the excision property of relative cohomology. It gives rise to the {\it squeezing property} of our TFTs. It is
best explained by using an example. Imagine this full cylinder (the upper half of its mantle is red and the lower half is black; the invisible base is yellow):
$$\epsfxsize 5.5cm \epsfbox{cyl.eps} $$
We shall squeeze it in the middle, putting one finger on the red top and the
other on the black bottom. The result is no longer a manifold---it has a
rectangle in the middle (red from the top and black from the bottom), but it is
surely homotopically equivalent (as a pair $(\Omega,r)$, or as a pair $(\Omega,b)$) to the original cylinder. Since we use relative cohomologies, the
rectangle may be removed (it does not matter whether the cohomologies are
relative to $r$ or to $b$ (the dual picture), as the rectangle is
both red and black). The result is again a manifold of the type we admit:
$$\epsfxsize 5.5cm \epsfbox{sf2.eps}$$
Or, as another example: if our fingers are not big enough, we do not separate the cylinder into two
parts, but instead we produce a hole in the middle (the top view of the result
would be a red stain with a hole in the middle).
A bit informally the squeezing
property can be formulated as follows: if a (hyper)surface appears as a result of squeezing $\Omega$, red from one side and black from the other side, it
may be removed.
Those TFTs that satisfy the squeezing property may be considered as generalizations of relative cohomology and of KW duality. As we shall see in the next section, in the $(1,1)$-case they yield the expected result. Here is an example of such a TFT that does not come from an abelian group. We take
a finite group $G$ and two subgroups $R,B\subset G$ such that
$RB=G$, $R\cap B=1$. We shall consider principal $G$-bundles
with reduction to $R$ over $r$ and to $B$ over $b$. If $P$ is such a thing, let
$\mu(P)$ be the number of automorphisms of $P$. If $M$ is a space with some red
and some black parts, let $P(M)$ be the set of isomorphism classes of these
things. We
set ${\cal H}(\Sigma)$ (the Hilbert space) to be the space of functions on $P(\Sigma)$ with
the inner product \begin{equation}\langle f,g\rangle=\sum_{P\in P(\Sigma)}\mu(P)\overline{f(P)}g(P)\end{equation}
and finally, if $f\in{\cal H}(y)$, we set
\begin{equation} Z_\Omega(f)=\sum_{P\in P(\Omega)}{1\over\mu(P)}f(P|_y).\end{equation}
This is surely a TFT. The squeezing property holds, because if we have a
reduction for both $R$ and $B$ (as we have on the surfaces that appear by
squeezing), these two reductions intersect in a section of the $G$-bundle. If
$R=1$ and $B=G$, this TFT describes interacting $G$-spins (as in the
introduction); the general case is more interesting, and we will meet its version in \S4.
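The condition $RB=G$, $R\cap B=1$ is an exact factorization of $G$: every element decomposes uniquely as a product $rb$, which is what makes the two reductions intersect in a single section under squeezing. A quick computational check, with an illustrative example of our own choosing ($G=S_3$, $R$ generated by a transposition, $B=A_3$):

```python
from itertools import permutations

def compose(p, q):
    # permutation product: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def sign(p):
    # parity of the number of inversions
    inv = sum(1 for i in range(len(p))
              for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inv % 2 == 0 else -1

G = set(permutations(range(3)))            # S_3
e = (0, 1, 2)
R = {e, (1, 0, 2)}                         # generated by the transposition (01)
B = {p for p in G if sign(p) == 1}         # A_3

products = [compose(r, b) for r in R for b in B]
assert set(products) == G                  # RB = G
assert len(products) == len(G)             # ... with unique decomposition
assert R & B == {e}                        # R and B intersect trivially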
\section{Non-abelian $(1,1)$-duality}
There are classical models (those appearing in Poisson--Lie T-duality \cite{PL})
that suggest a non-abelian generalization of $(1,1)$ KW duality. The PL T-duality
generalizes the usual $R\leftrightarrow1/R$ T-duality, replacing the two circles
(or tori) by a pair of mutually dual PL groups. Clearly, we have to replace the
pair $G$, $\tilde G$ by a pair of mutually dual quantum groups. This is not an easy
(or well-defined) task. We have to define and to {\it understand}
cohomologies with quantum coefficients.
Here is how pictures solve this problem in a very simple way: just take a TFT in three dimensions,
satisfying the squeezing property. A finite quantum group (finite-dimensional Hopf $C^*$-algebra) will appear
independently of the classical motivation. If you exchange red and black (which
gives a new TFT), the quantum group will be replaced by its dual. This is the
non-abelian (or quantum) $(1,1)$ KW duality.
Now we will draw the pictures. I learned this 3d way of representing quantum
groups at a lecture by Kontsevich \cite{Kon}; it was one of the sources of this
work. The finite quantum group itself is ${\cal H}(\Sigma_{1,1})$. The product
${\cal H}(\Sigma_{1,1})\otimes{\cal H}(\Sigma_{1,1})\rightarrow{\cal H}(\Sigma_{1,1})$ is on this picture:
$$\epsfxsize 5.5cm \epsfbox{mult.eps}$$
And here are all the operations. Coloured 3d objects are hard to draw (but not
hard to visualize!); imagine that the pictures represent balls and that their
invisible sides are completely yellow. The antipode $S$ is simply the half-turn, the
involution $*$ is the reflection with respect to the horizontal diameter, and
the rest is on the figure:
$$\epsfxsize 5cm \epsfbox{oper.eps}$$
Why is it a quantum group? Just imagine the pictures representing the axioms and
use the squeezing property in a very simple manner.
Let us make a conjecture that there is a 1--1 correspondence between finite
quantum groups and 3d TFTs satisfying the squeezing property, with trivial
(i.e. one-dimensional) ${\cal H}(\Sigma_{0,2})$ and ${\cal H}(\Sigma_{2,0})$. In support of the conjecture, finite quantum groups are in 1--1 correspondence with modular functors of a certain kind (cf. \cite{KW}), which are clearly connected with our TFTs.
\section{Chern--Simons with coloured boundary}
Let us recall a basic analogy between symplectic manifolds and vector spaces (the aim of quantization is to go beyond a mere analogy):
\begin{center}
\begin{tabular}{|c|c|} \hline
{\em Vector}&{\em Symplectic}\\ \hline\hline
Vector space&Symplectic manifold\\ \hline
Vector&Lagrangian submanifold\\ \hline
$V_1\otimes V_2$&$M_1\times M_2$\\ \hline &\\[-2.5ex]
$V^*$&$\overline{M}$\\ \hline
Composition of linear maps&Composition of Lagrangian relations\\ \hline
\end{tabular}
\end{center}
\def{\frak g}{{\bf g}}
\def{\frak b}{{\bf b}}
\def{\frak r}{{\bf r}}
One can easily describe the symplectic analogue of the Chern--Simons TFT (see e.g. \cite{Fr}). Let ${\frak g}$ be a Lie algebra with invariant inner product. If $\Sigma$ is a closed oriented surface then the moduli space of flat ${\frak g}$-connections is a symplectic manifold (with singularities). The symplectic form is given as follows. The vector space
of all ${\frak g}$-valued 1-forms on $\Sigma$ is symplectic, with the symplectic form
\begin{equation}\omega(\alpha_1,\alpha_2)=\int_\Sigma\langle\alpha_1,\alpha_2\rangle.\end{equation}
When we restrict ourselves to flat connections, the space is no longer symplectic, but the null directions of the 2-form give just the orbits of the gauge group, so the quotient (the moduli space) is symplectic. Let us denote it by $M_\Sigma$.
We have associated a symplectic space to every oriented closed surface. Now, if $\Omega$ is an oriented compact 3-fold with boundary $\Sigma$, we should find a Lagrangian subspace $\Lambda_\Omega\subset M_\Sigma$. Indeed, $\Lambda_\Omega$ consists just of those flat connections on $\Sigma$, which can be extended to $\Omega$.
Let us make a minute extension of this construction, allowing a boundary coloured in red and black. Let ${\frak b},{\frak r}\subset{\frak g}$ be a Manin triple. We shall consider flat ${\frak g}$ connections as before, with the obvious boundary conditions---on the red part of the boundary the connection should take values in ${\frak r}$ and on the black part in ${\frak b}$. Similarly, the gauge group consists of the maps to $G$ with the same boundary conditions. This really defines a symplectic TFT for our pictures. From this symplectic TFT we obtain a symplectic analogue of quantum group (using the pictures of the previous section). One readily checks that it is the double symplectic groupoid of Lu and Weinstein \cite{LW}---the symplectic analogue of the quantum group coming from the Manin triple ${\frak b},{\frak r}\subset{\frak g}$.
For this reason, it is reasonable to conjecture that perturbative quantization of our Chern--Simons TFT with boundary will give the corresponding quantum group.
In the next section we return to the vector side of our table, to general 3d TFTs that satisfy the squeezing property. We shall see this connection with CS TFTs again, in a different guise.
\section{Pictures of the Drinfeld double}
There are lots of algebras, modules, etc., in our pictures. We shall describe
only the Drinfeld double, since it is important in PL T-duality, and also
to make a connection with Reshetikhin--Turaev invariants. Here are the unit and the
counit:
$$\epsfxsize 6cm \epsfbox{duc.eps}$$
The invisible side of the full torus on the first picture is yellow; this closed
yellow strip is the double. On the second picture it is represented as the
mantle of the cylinder (the invisible base of the cylinder is painted as the visible one).
Here is the product (the picture is yellow from the invisible side):
$$\epsfxsize 4cm \epsfbox{dpro.eps}$$
and finally the coproduct:
$$\epsfxsize 4cm \epsfbox{dcop.eps}$$
This picture requires an explanation. It represents a thick Y from which a thin
Y was removed (you can see it as the black holes in the yellow disks). The
fronts of these Y's are red and their backs are black (the invisible bottom of
the picture is yellow---it is the third double).
For completeness, the antipode is a half-turn and the involution a reflection,
both exchanging the boundary circles of the double.
\font\cyr=wncyr10
Now we know the double as a Hopf algebra, but its real treasure is the
$R$-matrix:
$$\epsfxsize 4cm \epsfbox{rmat.eps}$$
It is quite similar to the Y-picture, but this time we do not remove a thin X,
but rather two tubes connecting the top holes with the bottom ones. However, if
one tube connected the left holes and the other one the right holes, the picture
would not be very interesting. We could squeeze the X in the middle, dividing it
into two vertical cylinders. We would simply have an identity. However, in the X of the $R$-matrix, the tubes are diagonal. There are two ways for them to avoid each
other; one gives the $R$-matrix and the other its inverse.
This X has two incoming and two outgoing doubles; you can also imagine $n$
doubles at the bottom, tubes forming a braid inside and leaving the body at the
top, in the middle of $n$ other doubles (the Cyrillic letter {\cyr ZH} is good here). We directly see a representation of the braid group.
With this picture in mind, we can find the Reshetikhin--Turaev (RT) invariants coming from the double. Namely, the boundary-free part of our TFT is the Chern--Simons theory coming from the double. Here is a sketch of the proof: suppose $\Omega$ is a closed oriented 3-fold with a ribbon link. We colour each of the ribbons in red on one side and in black on the other side, blow it a little, so that the ribbon becomes a full torus removed from $\Omega$, and paint on the torus a little yellow belt. Our TFT gives us an element of ${\rm double}^{\otimes n}$ (one double for each yellow belt), where $n$ is the number of components of the link. Actually, this element is from $(\mbox{centre of double})^{\otimes n}$ (we can move a yellow belt along the torus and come back from the other side). It is equal to the RT invariant. This claim follows immediately from the definition of RT invariants: If $\Omega=S^3$, we are back in our picture of braid group, and generally, surgery along tori in $S^3$ can be replaced by gluing tori along the yellow belts.
Finally, we can get rid of red and black and instead consider $\Omega$'s with boundary consisting of yellow tori: one easily sees that ${\cal H}(\mbox{yellow torus})= \mbox{centre of double}$.
\section{Conclusion: Higher dimensions?}
There are several open problems remaining. Apart from the mentioned conjectures there is a problem with the square of the antipode: for the naive definition of TFT used in this paper, it has to be 1. One should find a less naive definition and prove in some form the claim that our pictures are equivalent to Hopf algebras.
However, in spite of these open problems, the presented picture is very simple and quite appealing. It is really tempting
(and almost surely incorrect) to suggest \begin{equation}\mbox{\it duality = TFT with the
squeezing property.}\end{equation} It would be nice to understand the basic building blocks
of these TFTs that replace quantum groups in higher dimensions. It is a purely
topological problem. It would also be nice to have a non-trivial example with
non-trivial ${\cal H}(\Sigma_{2,2})$, to see an instance of S-duality ($(2,2)$-duality) in this way.
The field of duality is vast and connections with this work may be of diverse
nature. But let us finish with a rather internal question: Why yellow, red
and black?
\section{Introduction}
Supersymmetry (SUSY) is an attractive solution to one of the most
serious fine-tunings in nature, i.e., it ensures the stability of the
electroweak scale against radiative corrections. However, the SUSY
standard model (SSM) may introduce other (less severe) fine-tunings,
since some of the parameters in the SSM and/or their phases must be
very small to avoid unwanted FCNC and CP violating processes. (These
are called the SUSY FCNC problem and the SUSY CP problem.)
In the gauge mediated SUSY breaking model~\cite{LEGM}, the SUSY FCNC
problem can be beautifully solved. In this scheme, the mechanism to
mediate SUSY breaking to the SSM sector does not distinguish between
flavors, and the universality of the scalar mass matrices is
automatically guaranteed.
However, the SUSY CP problem still remains. In particular, in the gauge
mediated model, the electric dipole moments (EDMs) of the electron and
neutron are likely to be larger than the current experimental
constraint if the possible phases in the Lagrangian are all
$O(1)$. Therefore, it is desirable to have some mechanism to suppress
the CP violating phase in the gauge mediated model.
In the first half of this letter, we discuss the electron and neutron
EDMs in the framework of the gauge mediated model, and derive a
constraint on the CP violating phase. As a result, we will see that
the EDMs are likely to be larger than the current experimental
constraint if the CP violating phase is $O(1)$. Then, in the second
half, we consider a mechanism to suppress this CP violating phase so
that the EDMs are within the experimental constraints.
\section{SUSY CP Problem in Gauge Mediated Model}
First, we discuss constraints on the phases in the gauge mediated
model. In the gauge mediated model, all the off-diagonal elements in
the sfermion mass matrices vanish. Furthermore, so-called
$A$-parameters are not generated at the one loop level. Therefore, CP
violating phases in these parameters are suppressed enough to be
consistent with experimental constraints.
However, (some combinations of) the phases in the gaugino masses,
$\mu$-parameter, and $B_\mu$-parameter are physical, and in general,
they can be large enough to conflict with experimental constraints. In
particular, since the mechanism that generates the $\mu$- and
$B_\mu$-parameters is unknown, there is no guarantee of cancellation
between their phases.
Let us discuss this issue in more detail. The relevant part of the
Lagrangian of the SSM can be written as
\begin{eqnarray}
{\cal L} = - \int d^2\theta \mu H_1 H_2
- B_\mu H_1 H_2
- \frac{1}{2} \left( m_{G1} \tilde{B} \tilde{B}
+ m_{G2} \tilde{W} \tilde{W} + m_{G3} \tilde{G} \tilde{G} \right)
+ {\rm h.c.}
\end{eqnarray}
Here, $H_1$ and $H_2$ are the Higgs fields coupled to the down-type
and up-type quarks, and $\tilde{B}$, $\tilde{W}$, and $\tilde{G}$ are
the gauginos for U(1)$_{\rm Y}$, SU(2)$_{\rm L}$, and SU(3)$_{\rm C}$
gauge groups, respectively. In the above Lagrangian, all the
parameters $\mu$, $B_\mu$, and $m_{Gi}$ can be complex. However, by
using phase rotations of Higgs bosons, Higgsinos, and gauginos, we can
make some of them real. To be more specific, denoting\footnote
{In gauge mediated model, phases of the gaugino masses are
universal, and we denote them as $\theta_G$.}
\begin{eqnarray}
\mu = e^{i\theta_\mu} |\mu|, ~~~
B_\mu = e^{i\theta_B} |B_\mu|, ~~~
m_{Gi} = e^{i\theta_G} |m_{Gi}|,
\end{eqnarray}
physical quantities depend only on the combination
\begin{eqnarray}
\theta_{\rm phys} \equiv
{\rm Arg}(\mu B^*_\mu m_{G})
= \theta_\mu - \theta_B + \theta_G.
\end{eqnarray}
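It is straightforward to check numerically that this combination is rephasing invariant. A toy check with arbitrary illustrative magnitudes and phases (our own numbers, not parameters of the model): rotating the Higgs bilinear $H_1H_2$ by a common phase shifts $\theta_\mu$ and $\theta_B$ together and leaves ${\rm Arg}(\mu B_\mu^* m_G)$ fixed.

```python
import cmath

def theta_phys(mu, Bmu, mG):
    return cmath.phase(mu * Bmu.conjugate() * mG)

mu = 200.0 * cmath.exp(0.7j)     # |mu|  e^{i theta_mu}
Bmu = 1.0e4 * cmath.exp(1.9j)    # |B_mu| e^{i theta_B}
mG = 300.0 * cmath.exp(0.3j)     # |m_G| e^{i theta_G}

base = theta_phys(mu, Bmu, mG)
# matches theta_mu - theta_B + theta_G = 0.7 - 1.9 + 0.3
assert abs(base - (0.7 - 1.9 + 0.3)) < 1e-12

# H_1 H_2 -> e^{i alpha} H_1 H_2 shifts mu and B_mu by the same phase
alpha = 2.2
rotated = theta_phys(mu * cmath.exp(-1j * alpha),
                     Bmu * cmath.exp(-1j * alpha), mG)
assert abs(rotated - base) < 1e-12
```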
In the gauge mediated model, flavor symmetries are well preserved in
the squark and slepton mass matrices, and SUSY contributions to the CP
violation in FCNC processes are negligible. However, as discussed in
many works~\cite{edm_susy}, the EDMs of the electron and neutron are
important checkpoints. Indeed, a non-vanishing $\theta_{\rm phys}$ may
induce sizable EDMs.
\begin{figure}[t]
\centerline{\epsfxsize=0.55\textwidth\epsfbox{elec_edm.eps}}
\caption{Contours of the constant electron EDM in gauge mediated
model on $\tan\beta$ vs.~$m_{G2}$ plane. Contours are
$|d_e|/e=10^{-26}$, $10^{-25}$, and $10^{-24}$~cm, from above. Here,
we take $\sin\theta_{\rm phys}=1$, $N_5=1$, and $M_{\rm
mess}=10^{5}$~GeV (solid lines), $10^{10}$~GeV (dotted lines), and
$10^{15}$~GeV (dashed lines).}
\label{fig:de}
\end{figure}
In order to discuss the constraint on $\theta_{\rm phys}$, we
calculate the electron EDM $d_e$ in the framework of the gauge
mediated model for several values of the messenger scale $M_{\rm
mess}$. In the calculation, we take $\sin\theta_{\rm phys}=1$ and
$N_5=1$, where $N_5$ is the number of the vector-like messenger
multiplet in units of ${\bf \bar{5}}+{\bf 5}$ representation of
SU(5)$_{\rm G}$. The result is shown in Fig.~\ref{fig:de}. One should
note that $d_e$ is proportional to $\sin\theta_{\rm phys}$, and hence
we can estimate $d_e$ for other values of $\sin\theta_{\rm phys}$ by
rescaling the result given in Fig.~\ref{fig:de}. Furthermore, the EDM
of the electron is enhanced for larger values of $\tan\beta\equiv
\langle H_2\rangle/\langle H_1\rangle$. The mechanism of this
enhancement is the same as those for other leptonic penguin diagrams
such as for the muon magnetic dipole moment~\cite{g-2,PRL79-4752} and
for lepton-flavor violating processes~\cite{LFV}. In particular, the
electron EDM comes from diagrams which are very similar to those for
the muon $g-2$, and those quantities are closely related in the gauge
mediated model:
\begin{eqnarray}
d_e \simeq \frac{m_e}{2m_\mu^2} \tan\theta_{\rm phys}
\times a^{\rm SSM}_\mu,
\end{eqnarray}
where $a^{\rm SSM}_\mu=\frac{1}{2}(g_\mu-2)^{\rm SSM}$ is the SSM
contribution to the muon magnetic dipole moment.
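To get a feeling for the size of the effect, one may evaluate this relation numerically, assuming for illustration $a_\mu^{\rm SSM}\sim 10^{-9}$ and $\tan\theta_{\rm phys}=1$ (these inputs are our own, not values read off the figure); the conversion from ${\rm MeV}^{-1}$ to cm uses $\hbar c$:

```python
m_e = 0.511          # electron mass in MeV
m_mu = 105.66        # muon mass in MeV
hbar_c = 1.9733e-11  # MeV * cm, converts 1/MeV to cm

a_mu_ssm = 1.0e-9    # assumed SSM contribution to (g_mu - 2)/2
tan_theta = 1.0

# d_e / e in cm, from d_e ~ (m_e / 2 m_mu^2) tan(theta_phys) a_mu^SSM
d_e = (m_e / (2.0 * m_mu**2)) * tan_theta * a_mu_ssm * hbar_c
assert 1.0e-25 < d_e < 1.0e-24   # roughly 4.5e-25 cm
```

The result, a few times $10^{-25}e$~cm, lies within the range of contours shown in Fig.~\ref{fig:de} and far above the experimental bound quoted below.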
The experimental constraint on the electron EDM is remarkably good. By
using $d_e=(0.18\pm0.12\pm0.10)\times 10^{-26}e$~cm~\cite{PRA50-2960},
we obtain a constraint on the electron EDM:
\begin{eqnarray}
|d_e| \leq 0.44 \times 10^{-26} e ~{\rm cm},
\label{de_exp}
\end{eqnarray}
where the right-hand side is the upper bound on $d_e$ at 90~\%~C.L.
With the above constraint, we can derive a bound on $\theta_{\rm
phys}$. Since $d_e$ is proportional to $\sin\theta_{\rm phys}$, the
upper bound on $|\sin\theta_{\rm phys}|$ is given by $10^{-1}$ to
$10^{-3}$, depending on the mass scale of the superparticles and
$\tan\beta$. If we adopt relatively large value of the wino mass
($m_{G2} \mathop{}_{\textstyle \sim}^{\textstyle >} 400~{\rm GeV} - 1~{\rm TeV}$, depending on
$\tan\beta$), $\theta_{\rm phys}$ can be as large as 0.1, and it may
not be a serious fine tuning. However, in this case, squarks and
gluino become relatively heavy, and we may lose the motivation for low
energy SUSY as a solution to the naturalness problem. On the other
hand, if we consider lighter wino, $\theta_{\rm phys}$ is constrained
to be less than $O(10^{-2})$, which requires more fine tuning of this
phase. In the following, we consider a solution to this
problem.\footnote
{In the gauge mediated model, the SUSY CP problem can be solved if
$B_\mu$ vanishes at the messenger scale. For this approach, see
Refs.~\cite{NPB501-297,PRL79-4752}.}
Before discussing the model to suppress $\theta_{\rm phys}$, we
briefly comment on the constraint from the neutron EDM $d_n$. We can
also obtain a constraint on $\theta_{\rm phys}$ from the neutron
EDM. However, the constraint is less severe because the experimental
constraint on $d_n$ is not as stringent as that on $d_e$, and also
because the heavier squark masses suppress the theoretical value of
$d_n$. With the same underlying parameters, the constraint on
$\theta_{\rm phys}$ is about a few times weaker from $d_n$ than from
$d_e$.
\section{Toy Model and Basic Idea}
Let us consider a toy model in which $\theta_{\rm phys}$ vanishes
automatically.
We denote $X$ as the SUSY breaking field whose scalar and
$F$-components acquire non-vanishing vacuum expectation values
(VEVs). Furthermore, $q$ and $\bar{q}$ are the vector-like messenger
fields, and SUSY breaking parameters in the SSM sector are generated
by integrating them out. In this section, we do not specify the
mechanism that generates an $F$-component for $X$, and we just adopt
the following form of the superpotential:
\begin{eqnarray}
W = X F_X^* + y_q X \bar{q} q
+ \frac{y_H}{M_*^{n-1}} X^n H_1 H_2,
\end{eqnarray}
where $F_X$ is the VEV of the $F$-component for $X$, $y_q$ and $y_H$
are complex parameters, $n$ is a fixed integer, and $M_*\simeq
2.4\times 10^{18}~{\rm GeV}$ is the reduced Planck scale.
With the above superpotential, $\mu$, $B_\mu$, and $m_{Gi}$ are given
by
\begin{eqnarray}
\mu &=& \frac{y_H}{M_*^{n-1}} \langle X^n \rangle,
\\
B_\mu &=&
\frac{n y_H}{M_*^{n-1}} \langle X^{n-1} \rangle F_X,
\\
m_{Gi} &=& \frac{g_i^2}{16\pi^2} c_i N_5
\frac{F_X}{\langle X \rangle},
\label{mG_toy}
\end{eqnarray}
where $g_i$ is the relevant gauge coupling constant for the standard
model gauge group and $c_i$ is the group theoretical factor. From
these expressions, we can easily see that $\theta_{\rm phys}$ vanishes. In
other words, all the phases in the Lagrangian can be eliminated with
phase rotations of the scalars, chiral fermions, and
gauginos. Therefore, in this case, there is no CP violation in the SSM
(except for the phase in the KM matrix).
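The phase cancellation can be verified directly from the expressions above: for arbitrary phases of $y_H$, $\langle X\rangle$, and $F_X$ (with the real, positive loop factor in $m_{Gi}$ factored out), the combination $\mu B_\mu^* m_G$ is real and positive. A numerical sketch with made-up magnitudes and phases:

```python
import cmath

Mstar = 2.4e18   # reduced Planck scale in GeV
n = 3            # power in the X^n H_1 H_2 coupling (illustrative choice)
c = 0.01         # real, positive loop factor g_i^2 c_i N_5 / 16 pi^2

# arbitrary complex phases for the coupling and the VEVs
yH = 0.5 * cmath.exp(1.1j)
X = 1.0e15 * cmath.exp(0.4j)
FX = 1.0e20 * cmath.exp(-2.3j)

mu = yH * X**n / Mstar**(n - 1)
Bmu = n * yH * X**(n - 1) * FX / Mstar**(n - 1)
mG = c * FX / X

# all phases cancel in Arg(mu * conj(B_mu) * m_G)
assert abs(cmath.phase(mu * Bmu.conjugate() * mG)) < 1e-9
```

Indeed, $\mu B_\mu^* m_G \propto n\,c\,|y_H|^2|X|^{2(n-1)}|F_X|^2$, so every phase drops out.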
However, this scenario is not phenomenologically viable, since the
relative size of the $\mu$- and $B_\mu$-parameters is not in the
required range. The ratio of $B_\mu$ to $\mu$ is given by
\begin{eqnarray}
\frac{B_\mu}{\mu} = \frac{nF_X}{\langle X\rangle}.
\label{B/mu}
\end{eqnarray}
On the other hand, if $N_5\sim O(1)$, the mass scale of the
superparticles in the SSM sector is estimated as
\begin{eqnarray}
m_{\rm SSM} \sim \frac{g_{\rm SM}^2}{16\pi^2}
\left| \frac{F_X}{\langle X\rangle} \right|,
\end{eqnarray}
where $g_{\rm SM}$ is the relevant gauge coupling constant of the
standard model gauge groups. Then, the ratio $F_X/\langle X\rangle$
has to be of the order of 10 $-$ 100~TeV, where the lower bound is
from the experimental constraint on the masses of the superparticles
while the upper bound is from the naturalness point of view. As a
result, the ratio given in Eq.~(\ref{B/mu}) is about 2 $-$ 3 orders of
magnitude larger than the phenomenologically acceptable
value~\cite{LEGM}.\footnote
{If the messenger multiplets have large multiplicity of $N_5\sim
100$, $F_X/\langle X\rangle$ can be smaller (see Eq.~(\ref{mG_toy}))
and $B_\mu/\mu$ may be in the required range. Even if $N_5\sim 100$,
the perturbative picture can remain valid up to the Planck scale if the
messenger scale is as high as $O(10^{16}~{\rm GeV})$. In this case,
the SUSY breaking scalar masses are significantly suppressed at the
messenger scale, but can be generated by the running effect.}
Therefore, this toy model does not work although it has the
attractive feature of vanishing CP violating phase $\theta_{\rm
phys}$.
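Quantitatively, the mismatch is nothing but the inverse of the loop factor in the gaugino masses. A rough estimate with illustrative inputs of our own ($g\simeq 0.65$ for SU(2)$_{\rm L}$, $F_X/\langle X\rangle = 50$~TeV, $n=N_5=1$; none of these numbers is fixed by the model):

```python
import math

g = 0.65                       # SU(2)_L gauge coupling (illustrative)
loop = g**2 / (16.0 * math.pi**2)

F_over_X = 5.0e4               # F_X / <X> in GeV, i.e. 50 TeV
n = 1

m_ssm = loop * F_over_X        # typical superparticle mass scale
B_over_mu = n * F_over_X       # the ratio B_mu / mu derived above

mismatch = B_over_mu / m_ssm   # = 16 pi^2 / (n g^2)
assert 130.0 < m_ssm < 140.0         # ~134 GeV
assert 100.0 < mismatch < 1000.0     # 2-3 orders of magnitude too large
```

So with superparticle masses near the electroweak scale, $B_\mu/\mu$ overshoots by a factor of order $16\pi^2/g^2\sim$ few hundred, as stated in the text.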
\section{Improved Model}
Now, we propose an improved model in which the ratio $B_\mu/\mu$ can
be in the required range. One possibility to suppress the ratio
$B_\mu/\mu$ is to introduce another field which also acquires a
VEV. If this new field couples to the Higgs fields, and also if it can
generate a large enough $\mu$-parameter, the ratio $B_\mu/\mu$ may be
in the required range. Of course, if the VEV of this new field has an
arbitrary phase, the SUSY CP problem cannot be solved. Therefore, the
new field has to be somehow related to the original SUSY breaking
field $X$.
In our model, we duplicate the SUSY breaking sector, and couple both
of them to the Higgs fields. Then, if the SUSY breaking field in one
sector has a larger VEV than the other, the $\mu$-parameter is
enhanced and the ratio $B_\mu/\mu$ can have a required value.
Furthermore, in order not to introduce a new phase which may spoil the
cancellation in $\theta_{\rm phys}$, we impose a symmetry which
interchanges these two sectors. If this symmetry is exact, however,
the hierarchy between the VEVs of the two SUSY breaking fields cannot
be generated. Therefore, we introduce a (small) breaking parameter of
this symmetry.
There are two conflicting requirements on the breaking parameter.
First, this breaking parameter has to be large enough so that the VEV
of one SUSY breaking field is about 2 $-$ 3 orders of magnitude
enhanced relative to the other. On the other hand, if this breaking
parameter is too large, its CP violating phase may spoil the
cancellation in $\theta_{\rm phys}$.
It is non-trivial to generate a large enough hierarchy with such a
small breaking parameter. If the VEV of the SUSY breaking field is
determined by the inverted hierarchy mechanism~\cite{PLB105-267},
however, small modifications of the parameters may significantly
change the VEV of the SUSY breaking field. In particular, in this
class of models~\cite{MurDimDva,NPB510-12}, the potential of the SUSY
breaking field is lifted only logarithmically, and a small
perturbation at the Planck scale can result in a significant change of
the minimum of the potential. In this section, we use a simple model
as an example, and see how the scenario mentioned above can work.
\begin{table}[t]
\begin{center}
\begin{tabular}{cccccccc}
\hline\hline
{} &
{SU(2)$_{\rm B1}$} & {SU(2)$_{\rm B2}$} & {SU(2)$_{\rm S}$} &
{${\rm SU(2)'_{B1}}$} & {${\rm SU(2)'_{B2}}$} &
{${\rm SU(2)'_{S}}$} &
{SU(5)$_{\rm G}$}
\\ \hline
{$\Sigma$} &
{${\bf 2}$} & {${\bf 2}$} & {${\bf 1}$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 1}$}
\\
{$Q$} &
{${\bf 2}$} & {${\bf 1}$} & {${\bf 2}$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 1}$}
\\
{$\bar{Q}$} &
{${\bf 1}$} & {${\bf 2}$} & {${\bf 2}$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 1}$}
\\
{$q_5$} &
{${\bf 2}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 5}$}
\\
{$\bar{q}_5$} &
{${\bf 1}$} & {${\bf 2}$} & {${\bf 1}$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf \bar{5}}$}
\\
{$q_1$} &
{${\bf 2}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 1}$}
\\
{$\bar{q}_1$} &
{${\bf 1}$} & {${\bf 2}$} & {${\bf 1}$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 1}$}
\\
{$\Sigma'$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 2}$} & {${\bf 2}$} & {${\bf 1}$} &
{${\bf 1}$}
\\
{$Q'$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 2}$} & {${\bf 1}$} & {${\bf 2}$} &
{${\bf 1}$}
\\
{$\bar{Q}'$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 1}$} & {${\bf 2}$} & {${\bf 2}$} &
{${\bf 1}$}
\\
{$q'_5$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 2}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 5}$}
\\
{$\bar{q}'_5$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 1}$} & {${\bf 2}$} & {${\bf 1}$} &
{${\bf \bar{5}}$}
\\
{$q'_1$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 2}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 1}$}
\\
{$\bar{q}'_1$} &
{${\bf 1}$} & {${\bf 1}$} & {${\bf 1}$} &
{${\bf 1}$} & {${\bf 2}$} & {${\bf 1}$} &
{${\bf 1}$}
\\ \hline\hline
\end{tabular}
\caption{Particle content of the model.}
\label{table:example}
\end{center}
\end{table}
In our discussion, we use a model based on ${\rm [SU(2)]^3\times
[SU(2)']^3\times SU(5)_{G}}$ symmetry as an example, where the
standard model gauge group ${\rm SU(3)_C\times SU(2)_L\times U(1)_Y}$
is embedded in ${\rm SU(5)_{G}}$ in the usual manner. (For the
original SUSY breaking model based on the inverted hierarchy mechanism
with ${\rm [SU(2)]^3\times SU(5)_{G}}$, see Ref.~\cite{NPB510-12}.) We
show the particle content of this model in Table~\ref{table:example}.
Here, ${\rm SU(2)_{S}}$ and ${\rm SU(2)'_{S}}$ are strong gauge
interactions which break supersymmetry, while ${\rm SU(2)_{B}}$'s are
introduced to stabilize the potentials for the SUSY breaking fields.
Assuming a symmetry which interchanges the ${\rm [SU(2)]^3}$ and ${\rm
[SU(2)']^3}$ sectors (which we call $Z_2^{X\leftrightarrow X'}$
symmetry), the superpotential has the following form:
\begin{eqnarray}
W &=& y_Q \Sigma \bar{Q} Q + y_5\Sigma \bar{q}_5 q_5
+ y_1 \Sigma \bar{q}_1 q_1
\nonumber \\ &&
+ y_Q (1+\epsilon_Q)\Sigma' \bar{Q}' Q'
+ y_5 (1+\epsilon_5) \Sigma' \bar{q}'_5 q'_5
+ y_1 (1+\epsilon_1) \Sigma' \bar{q}'_1 q'_1
\nonumber \\ &&
+ \frac{y_H}{M_*} {\rm det} \Sigma H_1 H_2
+ \frac{y_H}{M_*} (1+\epsilon_H) {\rm det} \Sigma' H_1 H_2,
\end{eqnarray}
where the $\epsilon$'s are the breaking parameters of
$Z_2^{X\leftrightarrow X'}$. If all $\epsilon$'s vanish, there is a
$Z_2^{X\leftrightarrow X'}$ symmetry.
The symmetry breaking parameters $\epsilon$ may arise from a VEV of
a field $\phi$ which transforms as $\phi\rightarrow -\phi$ under
$Z_2^{X\leftrightarrow X'}$, for example. If $\phi$ has a coupling
like
\begin{eqnarray}
W \sim
y_1 (\Sigma\bar{q}_1 q_1 + \Sigma' \bar{q}'_1 q'_1)
+ \frac{\phi}{M_*}
(\Sigma\bar{q}_1 q_1 - \Sigma' \bar{q}'_1 q'_1),
\label{W_phi}
\end{eqnarray}
a small $\epsilon_1$ can be generated if $\phi$ acquires a VEV
smaller than $y_1M_*$. Similar arguments hold for other breaking
parameters. Here, we do not specify the origin of the symmetry
breaking, and just assume that the breaking parameters are somehow generated at the Planck
scale.\footnote
{For example, a quantum modified constraint can induce a VEV of the
symmetry breaking field $\phi$~\cite{PRD56-7183}.}
In this model, SUSY is dynamically broken because of the quantum
modified constraint~\cite{IzaYanInt}. Concentrating on the flat
direction parametrized as $\Sigma\sim{\rm diag}(X,X)$ and
$\Sigma'\sim{\rm diag}(X',X')$, the superpotential
becomes~\cite{NPB510-12}
\begin{eqnarray}
W &=& y_Q \Lambda^2 X + y_5 X \bar{q}_5 q_5
+ y_1 X \bar{q}_1 q_1
\nonumber \\ &&
+ y_Q (1+\epsilon_Q) \Lambda^{\prime 2} X'
+ y_5 (1+\epsilon_5) X' \bar{q}'_5 q'_5
+ y_1 (1+\epsilon_1) X' \bar{q}'_1 q'_1
\nonumber \\ &&
+ \frac{y_H}{M_*} X^2 H_1 H_2
+ \frac{y_H}{M_*} (1+\epsilon_H) X^{\prime 2} H_1 H_2,
\label{L_Xqq}
\end{eqnarray}
where $\Lambda$ and $\Lambda'$ are the strong scales of ${\rm
SU(2)_{S}}$ and ${\rm SU(2)'_{S}}$, respectively. Due to
$Z_2^{X\leftrightarrow X'}$, we adopt $\Lambda =\Lambda'$. Because of
the $\Lambda^2 X$ and $\Lambda^{\prime 2} X'$ terms in the
superpotential, $X$ and $X'$ have VEVs in $F$-components and the SUSY
is broken.
Once $X$ and $X'$ acquire VEVs, three important parameters are given
by
\begin{eqnarray}
\mu &=& \frac{y_H}{M_*} \langle X^2 \rangle
+ \frac{y_H}{M_*} \langle X^{'2} \rangle (1+\epsilon_H),
\label{mu_model}
\\
B_\mu &=&
\frac{2y_H}{M_*} F_X \langle X\rangle
+ \frac{2y_H}{M_*}
F_X \langle X'\rangle (1+\epsilon_H) (1+\epsilon_Q^*),
\label{B_model}
\\
m_{Gi} &=&
\frac{g_i^2}{8\pi^2} c_i \frac{F_X}{\langle X\rangle}
+ \frac{g_i^2}{8\pi^2} c_i \frac{F_X}{\langle X'\rangle}
(1+\epsilon_Q^*).
\label{mG_model}
\end{eqnarray}
Then, denoting
\begin{eqnarray}
v\equiv |\langle X\rangle |,~~~
v'\equiv |\langle X'\rangle |,
\end{eqnarray}
a hierarchy between $v$ and $v'$ can bring the ratio $B_\mu/\mu$ into
the required range. This is because, for $v\ll v'$, the $\mu$- and
$B_\mu$-parameters are dominated by the second term, while the gaugino
mass is determined by the first one. Adopting, for example, $|F_X|\sim
(10^6~{\rm GeV})^2$, $v\sim 10^8~{\rm GeV}$, $y_H\sim 1$, and
$v'/v\sim 10^2 - 10^3$, all the parameters in the SSM sector are in
the required range. In the following, we see how the VEVs and their
large hierarchy are generated.
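To see this explicitly, drop the $\epsilon$'s and the phases in
Eqs.~(\ref{mu_model})--(\ref{mG_model}); for $v\ll v'$ they reduce to
\begin{eqnarray}
\mu \simeq \frac{y_H}{M_*} v^{\prime 2},~~~
B_\mu \simeq \frac{2y_H}{M_*} F_X v',~~~
m_{Gi} \simeq \frac{g_i^2}{8\pi^2} c_i \frac{F_X}{v},
\end{eqnarray}
so that $B_\mu/\mu\simeq 2F_X/v'$ while $F_X/\langle X\rangle\simeq
F_X/v$, and hence $(F_X/\langle X\rangle)/(B_\mu/\mu)\simeq v'/2v\sim
10^2-10^3$.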
At the tree level, the potential for the SUSY breaking fields is
completely flat and the minimum of the potential is
undetermined. However, once we consider the wave function
renormalization of the SUSY breaking fields, the potential has a
minimum. Denoting the wave function renormalization for $\Sigma$ and
$\Sigma'$ as $Z_\Sigma$ and $Z_{\Sigma'}$, respectively, the potential
is given by
\begin{eqnarray}
V = \frac{|F_X|^2}{Z_\Sigma} + \frac{|F_{X'}|^2}{Z_{\Sigma'}},
\end{eqnarray}
where
\begin{eqnarray}
F_X^* = y_Q \Lambda^2,~~~
F_{X'}^* = y_Q (1+\epsilon_Q) \Lambda^2
= (1+\epsilon_Q) F_X^*.
\end{eqnarray}
Therefore, the potential for the SUSY breaking field has a minimum
when $Z_\Sigma$ and $Z_{\Sigma'}$ are maximized. The minimum of the
potential can be estimated by using the renormalization group
equations (RGEs). In our discussion, for simplicity, we take account
of the effect of $g_{\rm B1}$ and $y_1$ with $g_{\rm B1}$ being the
gauge coupling constant for SU(2)$_{\rm B1}$, and neglect the effects
of other coupling constants. This approximation is motivated by the
fact that $y_1$ plays the most important role among the Yukawa
coupling constants in determining the minimum of the potential (see
Ref.~\cite{NPB510-12}). Then, $v=|\langle X\rangle|$ is determined by
solving
\begin{eqnarray}
\frac{3}{2} g_{\rm B1}^2(v) - y_1^2(v) = 0.
\label{2225_vac}
\end{eqnarray}
A similar argument holds for the potential of $X'$.
Since the scale dependence of the gauge and Yukawa coupling constants
is logarithmic, a small modification of the boundary condition at the
Planck scale may result in a significant shift of the minimum of the
potential. In our analysis, we solve the RGEs numerically to see how
the minimum depends on the boundary conditions. For this purpose, we
first fix $g_{\rm B1}$ and $y_1$ at the reduced Planck scale
$M_*$. Then, neglecting other coupling constants, we run them down to
the low energy scale and find the scale $v$ where $Z_\Sigma$ is
maximized (i.e., the VEV of $X$). In Fig.~\ref{fig:vevX}, we show $v$
as a function of $y_1(M_*)$ for several values of $g_{\rm
B1}(M_*)$. As one can see, the VEV of $X$ is sensitive to $y_1(M_*)$,
and a small modification of $y_1(M_*)$ results in a large shift of the
minimum of the potential. Fig.~\ref{fig:vevX} shows that $v$ and $v'$
can differ by 2 $-$ 3 orders of magnitude with a small breaking
parameter of $\epsilon_1\sim 10^{-2}-10^{-1}$, depending on $g_{\rm
B1}$. Notice that $\ln (v'/v)\sim\ln[(F_X/\langle
X\rangle)/(B_\mu/\mu)]$ is approximately proportional to
$\epsilon_1$. For example, for $10^2\leq v'/v\leq 10^3$, $\epsilon_1$
is required to be $0.02\leq\epsilon_1\leq 0.03$
($0.04\leq\epsilon_1\leq 0.06$, $0.08\leq\epsilon_1\leq 0.12$) for
$g_{\rm B1}(M_*)=0.3$ (0.4, 0.5). Therefore, in order to make the
ratio $(F_X/\langle X\rangle)/(B_\mu/\mu)$ of the order of
$10^2-10^3$, $\epsilon_1$ has to be mildly tuned at the $\sim 50~\%$
level. We believe this is not a serious fine tuning.
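This numerical procedure can be sketched as follows. The one-loop
coefficients below are illustrative placeholders rather than the beta
functions of the actual matter content; only the qualitative behavior
(the sign change of $\frac{3}{2}g_{\rm B1}^2-y_1^2$, and its
sensitivity to $y_1(M_*)$) is meant to carry over:

```python
import math

M_STAR = 2.4e18  # reduced Planck scale M_* in GeV

def find_vev(g0, y0, steps=200_000, t_min=-40.0):
    """Run g_B1 and y_1 down from M_* with assumed one-loop RGEs and
    return the scale v where (3/2) g_B1^2 - y_1^2 changes sign,
    i.e. where Z_Sigma is maximized (cf. Eq. (2225_vac))."""
    b, a, c = 5.0, 6.0, 4.5       # placeholder one-loop coefficients
    g, y, t = g0, y0, 0.0
    dt = t_min / steps             # t = ln(mu/M_*) decreases toward the IR
    f_old = 1.5 * g * g - y * y
    for _ in range(steps):
        g += -b * g**3 / (16 * math.pi**2) * dt   # asymptotically free
        y += y * (a * y * y - c * g * g) / (16 * math.pi**2) * dt
        t += dt
        f_new = 1.5 * g * g - y * y
        if f_old * f_new < 0:      # sign change: minimum of the potential
            return M_STAR * math.exp(t)
        f_old = f_new
    return None
```

With such placeholder inputs, a small change of $y_1(M_*)$ shifts the
returned scale $v$ appreciably, mirroring the sensitivity seen in
Fig.~\ref{fig:vevX}.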
\begin{figure}[t]
\centerline{\epsfxsize=0.55\textwidth\epsfbox{vev.eps}}
\caption{$v\equiv |\langle X\rangle |$ as a function of
$y_1(M_*)$. $g_{\rm B1}(M_*)$ is taken to be 0.3 (solid), 0.4
(dotted), and 0.5 (dashed).}
\label{fig:vevX}
\end{figure}
Furthermore, and most importantly, $\theta_{\rm phys}$ becomes suppressed
in this model. In order to see this suppression, we have to know the
phases of $\langle X\rangle$ and $\langle X'\rangle$. So far, the
phases of $\langle X\rangle$ and $\langle X'\rangle$ are not
determined, since they are related to the $R$-symmetry. In
supergravity models, however, a constant term exists in the
superpotential to cancel the cosmological constant. This constant term
does not respect the $R$-symmetry and fixes the
phases~\cite{NPB426-3}. The supergravity contributions to the
potential are written as
\begin{eqnarray}
V_{\rm SUGRA} = A_Q F_X^* X
+ A_Q F_{X'}^* (1+\epsilon_A) X' + {\rm h.c.},
\end{eqnarray}
where $A_Q$ is a complex SUSY breaking parameter which is of the
order of the gravitino mass. With this potential, for
example, the phase of $\langle X\rangle$ is determined so that the
combination $A_QF_X^*\langle X\rangle$ becomes real. Then, the relative
phase of $\langle X\rangle$ and $\langle X'\rangle$ is given by
\begin{eqnarray}
{\rm Arg} \left(\frac{\langle X'\rangle}{\langle X\rangle}\right)
\simeq
{\rm Im} (\epsilon_Q^* + \epsilon_A^*).
\label{Arg(X'/X)}
\end{eqnarray}
Therefore, these VEVs are almost aligned irrespective of their
absolute values.
By using Eq.~(\ref{Arg(X'/X)}), $\theta_{\rm phys}$ is calculated as
\begin{eqnarray}
\theta_{\rm phys} \simeq {\rm Im}\epsilon_A^*.
\label{th_phys(eps)}
\end{eqnarray}
Therefore, with the current constraint (\ref{de_exp}), the electron
EDM can be suppressed enough for mild values of $\tan\beta$ (less than
about 10) with $\epsilon_A\sim O(10^{-2})$ (see
Fig.~\ref{fig:de}). Even if all the breaking parameters are of the
same order, $\epsilon\sim O(10^{-2})$ can induce large enough
$v'/v$. For larger values of $\tan\beta$, $\epsilon_A$ as small as
$O(10^{-3})$ is required.
In fact, $\theta_{\rm phys}$ depends only on $\epsilon_A$ as shown in
Eq.~(\ref{th_phys(eps)}), while $\epsilon_1$ plays the most important
role in shifting the VEV. Therefore, if the breaking parameters have
a hierarchy, the requirements on the model are relaxed. In
particular, in the framework of supergravity, $\epsilon_A$ vanishes if
$\epsilon_Q$ vanishes and also if the K\"ahler potential respects
$Z_2^{X\leftrightarrow X'}$ symmetry. In this case, we avoid the
constraint from the EDMs, and $\epsilon_1$ can be much larger than
$\epsilon_A$. For example, if the $Z_2^{X\leftrightarrow X'}$ symmetry
breaking field $\phi$ and the Yukawa coupling $y_1$ have a non-trivial
transformation property under some symmetry (like $R$-symmetry),
$\epsilon_A$ is expected to be $O(y_1^2\epsilon_1)$, which can be
suppressed for smaller $y_1$.
In our discussion, we assumed that there is no effect of the
$Z_2^{X\leftrightarrow X'}$ symmetry breaking in the gauge kinetic
function. If there is such an effect for the strong gauge groups
SU(2)$_{\rm S}$ and ${\rm SU(2)'_{S}}$, the relative phase of
$\Lambda$ and $\Lambda'$ becomes $O(8\pi^2\epsilon/g_{\rm S}^2)$,
where $g_{\rm S}$ is the gauge coupling constant for the strong gauge
groups. Therefore, a small symmetry breaking effect may induce a large
shift of the relative phase of $F_X$ and $F_{X'}$, resulting in a
large $\epsilon_A$.\footnote
{This may not happen if the coupling constants for the strong gauge
groups become non-perturbative at the Planck scale.}
Therefore, the $Z_2^{X\leftrightarrow X'}$ symmetry breaking in the
gauge kinetic function is disfavored. This effect can also be killed
if non-trivial transformation properties for some symmetry are
assigned for the symmetry breaking parameters.\footnote
{Contrary to the strong gauge groups, there may be an effect of the
$Z_2^{X\leftrightarrow X'}$ symmetry breaking in the balancing gauge
groups sector (${\rm SU(2)_B}$'s), since our result is not affected by
the phases of the strong scales of these interactions. Of course, the
hierarchy between $v$ and $v'$ is affected by this effect.}
\section{Summary}
In the first half of this letter, we calculated the EDM of the
electron in the framework of the gauge mediated model. If all the
phases in the Lagrangian are $O(1)$, the electron EDM is larger than
the current experimental constraint. If all the superparticles have
masses of $O({\rm 100~GeV})$, for example, the CP violating phase
$\theta_{\rm phys}$ has to be smaller than $O(10^{-2})-O(10^{-3})$
depending on $\tan\beta$.
Regarding this tuning as a problem, we considered a mechanism to
suppress the CP violating phase. If the $\mu$- and $B_\mu$-parameters
originate from the same coupling to the SUSY breaking field in the
superpotential, the physical phase is cancelled out. However, the
ratio $B_\mu/\mu$ becomes too large in a naive model. Therefore, we
introduced another sector to suppress this ratio. Even with the new
field, we have seen that the smallness of the physical phase
$\theta_{\rm phys}$ can be realized by a symmetry.
Finally, we note that the strong CP problem cannot be solved in our
model. This feature is common to the case of the SSM, and some
mechanism is needed to solve this problem, like Peccei-Quinn
symmetry~\cite{PecQui}.
\section*{Acknowledgment}
The author would like to thank J.~Bagger for stimulating discussions.
He is also grateful to J.L.~Feng for useful comments and careful
reading of the manuscript. This work was supported by the National
Science Foundation under grant PHY-9513835.
\section{Introduction.}\label{intro}
\subsection{A Relation Between Fluids and Geometry.}\label{flu}
The perfect fluid stress can be covariantly differentiated
to give the perfect fluid conservation equations.
In many cases these differential equations can be directly integrated
to give a {\it geometric-thermodynamic} equation,
which typically equates the Eisenhart (1924) \cite{bi:eisenhart}
Synge (1937) \cite{bi:synge} fluid index $w$
to the reciprocal of the lapse $N$.
The Eisenhart-Synge index is essentially the fluids zero temperature enthalpy.
In section \ref{sec:ecs} the index is calculated for several
equations of state.
The $\alpha-$equation of state is a $2-$parameter equation of state which
describes polytropes.
The $\beta-$equation of state is a $1-$parameter equation of state obtained
from the $\alpha-$equation of state by assuming
the second law of thermodynamics for an adiabatic process,
Tooper (1965) \cite{bi:tooper}, Zeldovich and Novikov (1971) \cite{bi:ZN}.
It gives the $\gamma-$equation of state in all cases except for
$\gamma=1$, where the pressure free ($p=0$) case is not recovered
but rather $\mu=p\ln(\frac{p}{K})$.
For equations of state see also Eligier {\it et al} (1986) \cite{bi:EGH},
and Ehlers \cite{bi:ehlers}.
The index is not defined in the
pressure free case, thus solutions such as the
Tolman (1934) \cite{bi:tolman}-Bondi (1947) \cite{bi:bondi}
solution are not covered by the description in terms of the fluid index.
\subsection{Its Application}\label{apl}
The main application of the
geometric-thermodynamic equation is to the description
of spacetime exterior to a star.
It is shown in many cases that asymptotic flat solutions do not exist.
This is taken to imply that the notion of asymptotic flatness
as usually understood is physically simplistic.
In the literature diagrams are constructed which are supposed to
represent the causal spacetime of a collapsing star.
These diagrams usually require that the
spacetime is asymptotically flat,
but the inclusion of a non-vacuum stress is often sufficient for
this requirement no longer to hold.
An example of how these diagrams can be qualitatively altered by
infinitesimal matter is given by solutions to the scalar-Einstein
equations which often have no event horizons, Roberts (1985) \cite{bi:mdr85}.
To the lowest approximation the spacetime exterior to a
star has no stress: the star exists in a vacuum.
In order to take account of the matter that
surrounds a star it is necessary to find
an approximate stress which has contributions from
planets, dust etc\dots
There seems to be no systematic way of producing a stress which
approximates such diverse forms of matter.
When relativity is applied to macroscopic
matter the stress is usually taken to be a perfect fluid,
so that this is taken to be the form of
the first order correction to the vacuum.
Specifically the stress is taken to be a spherically
symmetric perfect fluid with $\gamma-$equation of state
and the result generalized where possible.
The nature of the surface of the star is left open
as boundary conditions to interior solutions are not
discussed. The assumed equations of state are essentially of one variable,
so that the pressure $p$ does not depend on the entropy $s$,
i.e.\ $p=p(\mu)$ rather than $p=p(\mu,s)$; in other words,
they are isentropic. Stars radiate, and the radiation possesses entropy;
whether there are equations of state that can describe this,
and if so whether they are susceptible to an analysis similar to the one
given here, is left open. However, it would be unusual for
asymptotically flat spacetimes to exist at the simplest level,
to be absent at the next level of complexity, and
to reappear at the full level of complexity.
\subsection{Asymptotic Flatness.}\label{afl}
It will be shown that many spacetimes with a perfect fluid stress do not have
asymptotically flat solutions.
Throughout it is assumed that the fluid permeates the whole
spacetime and that the spacetime is of infinite extent.
Also throughout it is assumed that a simple limiting process is appropriate:
specifically as the luminosity radial coordinate tends to infinity the
spacetime becomes Minkowskian. This does not always happen, two examples
are: Krasi\'{n}ski's (1983) \cite{bi:krasinski} analysis of the Stephani
universe where $r=\infty$ labels a second center of symmetry, and type B
Bekenstein conformal scalar extensions of ordinary scalar field solutions,
Agnese and LaCamera (1985) \cite{bi:ALC}, and Roberts (1996) \cite{bi:mdr96},
which have bizarre asymptotic properties.
The result follows from the conservation
equations so that it is {\bf explicitly}
independent of the gravitational field equations used.
The conservation equations use a Christoffel symbol
or a generalization of this. The connection
depends on the metric which in turn can be thought
of as a solution to gravitational field equations.
In this {\bf implicit} sense the result can be thought of
as depending on field equations.
It might not hold if there are other fields coupled to the fluid. Similarly
asymptotically flat solutions are rare for theories with quadratic Lagrangians,
Buchdahl (1973) \cite{bi:buchdahl}. The absence
of asymptotically flat solutions might have application
to the ``missing mass'' problem,
see Roberts (1991) \cite{bi:mdr91} and references therein.
Some properties of exact perfect fluid solutions have been
discussed by Delgaty and Lake (1998) \cite{bi:DL}.
In Bradley {\it et al} (1999) \cite{bi:BFMP} it is shown that the Wahlquist
perfect fluid spacetime cannot be smoothly joined to an exterior
asymptotically flat vacuum region.
Boundary conditions for isolated horizons are described
in Ashtekar {\it et al} (1999) \cite{bi:ABF}.
Models of stars in general relativity have been discussed by
Herrera {\it et al} (1984) \cite{herrera},
Stergioulas (1998) \cite{bi:ster}, and Nilsson and Uggla (2000) \cite{bi:NU}.
\subsection{Sectional Contents.}\label{scn}
Section \ref{sec:ecs} introduces the stress, conservation equations,
and the relationship between the enthalpy $h$ and the Eisenhart-Synge
fluid index $\omega$.
In section \ref{sec:affs} it is shown that
there are no asymptotically flat static fluid spheres unless
the fluid index $\omega\rightarrow1$ at infinity,
and using Einstein's field equations there are
no asymptotically flat static fluid spheres with
$\gamma-$equation of state.
In the non-static case, for the $\gamma-$equation of state, there are no
asymptotically flat solutions provided that $\gamma\ne0,1$ and certain
conditions on the metric hold.
For both static and non-static cases there might be
asymptotically flat solutions for $\alpha-$polytropes.
In section \ref{sec:gte} it is shown for static spacetimes
admitting non-rotating vector $U_{\alpha}=(N,0)$
and having $\gamma-$equation of state that the lapse $N$ is
inversely proportional to the fluid index $N=1/\omega$.
For the non-static case,
subject to $\dot{N}=0$ and $\gamma(\ln(\sqrt{g^{(3)}}))_{,i}=0$,
the equation relating the lapse to the fluid index is
$\omega=N^{-1}g^{(3)\frac{1}{2}(1-\gamma)}=\mu^{\frac{\gamma-1}{\gamma}}$.
These results can be used to show that there are
no asymptotically flat fluid-filled
spacetimes admitting the vector $U_{a}=(N,0)$ with $\gamma-$equation of state,
again also subject to certain conditions on the metric.
The introduction of the vector $U_{a}=(N,0)$
assumes that the fluid is non-rotating and that the spacetime admits a global
time coordinate, unlike the vacuum Einstein equations,
see for example Cantor et al (1976) \cite{bi:CFM},
and Witt (1986) \cite{bi:witt}.
In section \ref{sec:aaf} a case against asymptotic flatness is presented.
Outer solar system observations of orbital irregularities are discussed.
Non-asymptoticness on length scales greater than the solar system, such as
galaxies, is mentioned.
The time-like geodesics for an arbitrary Newtonian potential are calculated.
Modeling hypothetical galactic halos of ``dark matter'' with spherically
symmetric fluid solutions so as to produce constant galactic rotation
curves is attempted. The rates of decay of various fields are discussed.
It is argued that for most perfect fluid spheres, and for some conformal
scalar spheres, the supposed decay is in fact an increase, prohibiting
asymptotic flatness.
There is the possibility of experimentally testing gravitational theory by
measuring the deviation of the Yukawa potential from what would be expected
in the absence of gravitation; how this might be done is briefly discussed,
the possibility of an actual test seems remote.
Various onion models of spacetime surrounding the Sun are discussed.
It is argued that non-asymptoticness implies that a system cannot be
gravitationally isolated and that this suggests a new formulation of
Mach's principle: {\sc there are no flat regions of physical spacetime.}
The philosophy of what an ``isolated system'' entails is briefly discussed.
In section \ref{sec:ter} the
Tolman-Ehrenfest (1930) \cite{bi:TE} relation is derived.
Section \ref{sec:cc} speculates on the relevance of the
geometric-thermodynamic relation to cosmic censorship.
\section{The Enthalpy and the Eisenhart-Synge fluid Index.}
\label{sec:ecs}
\subsection{Perfect Fluids.}\label{tpf}
The stress of a perfect fluid is given by
\begin{equation}
T_{\alpha\beta}=(\mu+p)~U_{\alpha}U_{\beta}+p~g_{\alpha\beta}
=nh~U_{\alpha}U_{\beta}+p~g_{\alpha\beta},~~~~~~~U_{\alpha}U_{.}^{\alpha}=-1,
\end{equation}} %\indent
where $\mu$ is the fluid density, $p$ is the pressure,
$n$ is the particle number, $h$ is the enthalpy,
and $p+\mu=nh$. The unit timelike vector $U_{a}$
defines the geometric objects
\begin{eqnarray}
h_{\alpha\beta}&=&g_{\alpha\beta}+U_{\alpha}U_{\beta},~~~
\dot{U}_{\alpha}=U_{\alpha;\beta}U_{.}^{\beta},~~~
\te=U^{\alpha}_{.;\alpha},~~~
K_{\alpha\beta}=U_{\chi;\delta}h_{\alpha.}^{~\chi}h_{\beta.}^{~\delta},\nonumber\\
\omega_{\alpha\beta}&=&h_{\alpha.}^{~\chi}h_{\beta.}^{~\delta}U_{[\chi;\delta]},~~~
\sigma_{\alpha\beta}=U_{(\alpha;\beta)}+\dot{U}_{(\alpha}U_{\beta)}-\frac{1}{3}\te h_{\alpha\beta},
\label{eq:geob}
\end{eqnarray}
called the projection tensor, the acceleration, the expansion,
the second fundamental form, the rotation, and the shear,
see for example page 83 of Hawking and Ellis \cite{bi:HE}.
The projection obeys $U_{\alpha}h^{\alpha}_{. \beta}=0$
and $\dot{U}_{\alpha}h^{\alpha}_{. \beta}=\dot{U}_{\beta}$,
also the acceleration obeys $U^{\alpha}_{.}\dot{U}_{\alpha}=0$.
Formally the second fundamental form and its associated
hypersurface only exist when the rotation vanishes.
Transvecting the stress conservation
equation $T_{\alpha.;\beta}^{~\beta}=0$ with $U_{.}^{\alpha}$
and $h^{\alpha}_{.\gamma}$ gives the first conservation equation
\begin{equation}
-U^{\alpha}_{.}T^{\beta}_{\alpha . ; \beta}=
\mu_{\alpha}U_{.}^{\alpha}+(\mu+p)U^{\alpha}_{.;\alpha}=
\dot{\mu}+(\mu+p)\te=0
\label{eq:ce1}
\end{equation}} %\indent
and the second conservation equation
\begin{equation}
h^{\alpha}_{. \gamma}T^{\beta}_{\alpha . ;\beta}=
(\mu+p)\dot{U}_{\alpha}+h_{\alpha.}^{~\beta}p_{\beta}=0,
\label{eq:ce2}
\end{equation}} %\indent
respectively. These equations equate
the derivatives of the vector field to the pressure and density.
From a technical point of view,
here we are investigating when these equations can be integrated.
It turns out that assuming a specific form of vector field - say hypersurface
orthogonal $U_{\alpha}=\lambda\phi_{,\alpha}$ is not directly of much use,
but rather assumptions about the form of the metric have to be made.
The first law of thermodynamics can be taken in the infinitesimal form
\begin{equation}
dp=n~dh+nT~ds,
\label{eq:1stlaw}
\end{equation}} %\indent
where $T$ is the temperature and $s$ is the entropy.
The Eisenhart\cite{bi:eisenhart}-Synge\cite{bi:synge} fluid index is defined by
\begin{equation}
\ln(\omega)\equiv\int\frac{dp}{(\mu+p)}
\label{eq:esfi}
\end{equation}} %\indent
after setting $T=0$ in \ref{eq:1stlaw} and integrating it is apparent that
up to a constant factor at zero temperature $\omega=h$. The index is also
discussed on page 84 of Hawking and Ellis \cite{bi:HE}.
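Explicitly, setting $T=0$ in \ref{eq:1stlaw} and using $\mu+p=nh$ gives
\begin{eqnarray}
d\ln(\omega)=\frac{dp}{\mu+p}=\frac{n~dh}{nh}=d\ln(h),
\end{eqnarray}
so that $\omega$ and $h$ agree up to a constant factor.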
\subsection{Polytropes}\label{ply}
The $\alpha-$polytrope has equation of state
\begin{equation}
p=\alpha\mu^{\beta}
\label{eq:aeq}
\end{equation}} %\indent
and has
\begin{equation}
dp=\alpha\beta\mu^{\beta-1}d\mu,
\label{eq:daeu}
\end{equation}} %\indent
or
\begin{equation}
\frac{\partial p}{\partial \alpha}=\frac{\partial p}{\partial \beta}=0,
\label{eq:paeq}
\end{equation}} %\indent
because of this the pressure is not an explicit function of the two
parameters $\alpha$ and $\beta$, but of the single variable $\mu$.
The index and particle number corresponding to \ref{eq:aeq} are
\begin{equation}
\omega=(1+\alpha\mu^{\beta-1})^{\frac{\beta}{\beta-1}},~~~~~
n=\mu(1+\alpha\mu^{\beta-1})^{\frac{1}{1-\beta}},
\label{eq:apoly}
\end{equation}} %\indent
The $\beta-$polytrope \cite{bi:ZN} has equation of state
\begin{equation}
p=Kn^{\gamma},
\label{eq:beq}
\end{equation}} %\indent
where $K$ is a constant and $V=1/n$ is the volume occupied by one baryon.
For an adiabatic process (no exchange of heat) the second law of
thermodynamics is
\begin{equation}
p=-\frac{\partial E}{\partial V},
\label{eq:2ndlaw}
\end{equation}} %\indent
where $E$ is the total energy density per unit mass $E=\mu/n$.
Then \ref{eq:2ndlaw} becomes
\begin{equation}
p=n^{2}\frac{\partial \mu /n}{\partial n},
\label{eq:210}
\end{equation}} %\indent
\ref{eq:beq} and \ref{eq:210} give
\begin{equation}
pn^{-2}=
Kn^{\gamma-2}=
\frac{\partial \mu/n}{\partial n},
\label{eq:211}
\end{equation}} %\indent
which in the case $\gamma\ne1$ can be integrated to give
\begin{equation}
\mu=\frac{K}{\gamma -1}n^{\gamma},
\label{eq:int1}
\end{equation}} %\indent
where the constant of integration is taken to be zero.
Using \ref{eq:beq}, \ref{eq:int1} becomes the
equation of state of $\gamma-$polytrope
\begin{equation}
p=(\gamma-1)\mu,~~~~\gamma\ne 1,
\label{eq:geq}
\end{equation}} %\indent
which has index and particle number
\begin{equation}
\omega=\mu^{\frac{\gamma-1}{\gamma}},~~~~~
n=\gamma\mu^{\frac{1}{\gamma}},~~~~~
\gamma\ne0,1,
\label{eq:gpoly}
\end{equation}} %\indent
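As a check, the expressions in \ref{eq:gpoly} satisfy $n\omega=\mu+p$:
\begin{eqnarray}
n\omega=\gamma\mu^{\frac{1}{\gamma}}\mu^{\frac{\gamma-1}{\gamma}}
=\gamma\mu=\mu+(\gamma-1)\mu=\mu+p.
\end{eqnarray}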
In the pressure free case ($\gamma=1$ in \ref{eq:geq})
the index \ref{eq:esfi} is not defined, an option is to
replace $p$ with $(\gamma-1)\mu$ in the definition \ref{eq:esfi}
and then take $\gamma=1$ to obtain $\ln(\omega)=0$ or $\omega=1$,
then the condition $n\omega=\mu+p$ gives $n=\mu$.
For the $\gamma-$equation of state the first \ref{eq:ce1} and second \ref{eq:ce2}
conservation laws can be written in terms of $\mu$,
where $\mu=\omega^{\frac{\gamma}{\gamma-1}}$, and are
\begin{equation}
\dot{\mu}+\gamma\mu\te=0,
\label{eq:gac1}
\end{equation}} %\indent
and
\begin{equation}
\gamma\mu\dot{U}_\alpha+(\gamma-1)h_{\alpha.}^{~\beta}\mu_\beta=0,
\label{eq:gac2}
\end{equation}} %\indent
respectively.
The $\gamma-$equation of state has been derived under
the assumption that $\gamma\ne1$.
Perhaps the correct $\gamma=1$ equation of state for a $\beta-$polytrope
is found by putting $\gamma=1$ in \ref{eq:211} and integrating to give
\begin{equation}
\mu=p\ln\left(\frac{p}{K}\right);
\label{eq:int2}
\end{equation}} %\indent
however the speed of sound
\begin{equation}
v_s\equiv\frac{\partial p}{\partial \mu}=\left(\ln\left(\frac{p}{K}\right)+1\right)^{-1},
\label{eq:speedsound}
\end{equation}} %\indent
is $1$ or the speed of light when $p/K=1$, it is less than the speed of
light for $p/K>1$, and it diverges as $p/K\rightarrow\exp(-1)$.
That the speed of sound can take these values
suggests that this equation of state is essentially
non-relativistic. Some writers refer to \ref{eq:int2} as dust,
others call the pressure free case $p=0$ dust.
\ref{eq:int2} has index and particle number
\begin{equation}
\omega=\left(1+\ln(\frac{p}{K})\right)^{\frac{1}{K}},~~~~~
n=p\left(1+\ln(\frac{p}{K})\right)^{\frac{K-1}{K}}.
\label{eq:216}
\end{equation}} %\indent
\section{Asymptotically Flat Fluid Spheres}
\label{sec:affs}
\subsection{Spherical Symmetry.}\label{ssy}
The line element of a spherically symmetric spacetime can be put in the form
\begin{equation}
ds^{2}=-C~dt^{2}+A~dr^{2}+B~d\Sigma^{2}.
\label{eq:ssst}
\end{equation}} %\indent
Choosing the timelike vector field
\begin{equation}
U_{a}=(\sqrt{C},0,0,0),
\label{eq:tlvf}
\end{equation}} %\indent
the rotation vanishes and the projection tensor, acceleration,
expansion, shear, and second fundamental form are
\begin{eqnarray}
h_{r.}^{~r}&=&h_{\te.}^{~\te}=h_{\phi.}^{~\phi}=1,\nonumber\\
\dot{U}_{\alpha}&=&(0,\frac{C'}{2C},0,0),\nonumber\\
\te&=&-\frac{1}{\sqrt{C}}(\frac{\dot{A}}{2A}+\frac{\dot{B}}{B}),\nonumber\\
\sigma_{rr}&=&-\frac{2A}{B}\sigma_{\te \te}=-\frac{2A}{B\sin^{2}\te}\sigma_{\phi\phi}
=\frac{1}{3}\frac{1}{\sqrt{C}}(-\dot{A}+\frac{A}{B}\dot{B}),\nonumber\\
K_{rr}&=&-\frac{1}{2}\frac{1}{\sqrt{C}}\dot{A},\nonumber\\
K_{\te \te}&=&\frac{1}{\sin^{2}\te}K_{\phi \phi}=-\frac{1}{2}\frac{1}{\sqrt{C}}\dot{B},
\label{eq:L1}
\end{eqnarray}
where the overdot denotes absolute derivative with respect to $\tau$
as in $\dot{U}_{\alpha}=\frac{D U_{\alpha}}{d\tau}$,
but otherwise the overdot denote partial derivative with respect to time.
Noting that $d\mu/d\tau=dt/d\tau~\mu_{,t}=\dot{\mu}/\sqrt{C}$,
the first conservation equation \ref{eq:ce1} becomes
\begin{equation}
\dot{\mu}-(\mu+p)(\frac{\dot{A}}{2A}+\frac{\dot{B}}{B})=0,
\label{eq:34a}
\end{equation}} %\indent
and the second conservation equation \ref{eq:ce2} becomes
\begin{equation}
p'+(\mu+p)\frac{C'}{2C}=0,
\label{eq:34b}
\end{equation}} %\indent
only the $r$ component is non-vanishing in the second equation.
\subsection{The Static Case.}\label{sca}
In the static case the first conservation equation \ref{eq:34a}
vanishes identically and the second conservation equation
\ref{eq:34b} integrates to give
\begin{equation}
\omega=\frac{1}{\sqrt{C}},
\label{eq:com}
\end{equation}} %\indent
the constant of integration is taken to be independent of $\te$ and $\phi$
and is absorbed into $C$, for example by redefining $t$.
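Explicitly, \ref{eq:34b} gives $dp/(\mu+p)=-C'dr/(2C)$, so that the
definition \ref{eq:esfi} integrates to
\begin{eqnarray}
\ln(\omega)=\int\frac{dp}{\mu+p}=-\int\frac{C'}{2C}dr=-\frac{1}{2}\ln(C),
\end{eqnarray}
which is \ref{eq:com}.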
For the line element \ref{eq:ssst} to be asymptotically flat it is
necessary that as $r\rightarrow\infty$, the line element \ref{eq:ssst}
becomes that of Minkowski spacetime; in other words, as $r$ increases,
$C\rightarrow 1$, $A\rightarrow 1$ and $B\rightarrow r^{2}$.
Now from \ref{eq:com}, $C\rightarrow 1$ implies that $\omega\rightarrow 1$.
Thus any static spherical fluid sphere with a well defined index not equal
to $0$ or $1$ cannot be asymptotically flat.
To see this result in particular cases first consider the $\gamma-$equation of
state. From \ref{eq:gpoly} and \ref{eq:com}
\begin{equation}
\mu=C^{\frac{\gamma}{2(1-\gamma)}},
\label{eq:37}
\end{equation}} %\indent
and as $C\rightarrow 1$, $\mu$ tends to a constant
and thus the spacetime cannot be asymptotically flat;
also the spacetime cannot be asymptotically de Sitter,
as this would necessitate $\mu$ tending to a constant times $r^{2}$.
In the pressure free case, the index is not defined and there are the
asymptotically flat solutions given by Tolman \cite{bi:tolman}
and Bondi \cite{bi:bondi}.
Next consider the $\beta-$equation of state, from \ref{eq:216} and \ref{eq:com}
\begin{equation}
C=\left(1+\ln(\frac{p}{K})\right)^{-\frac{2}{K}},
\label{eq:38}
\end{equation}} %\indent
now asymptotically as $C\rightarrow 1$, $p\rightarrow K$;
however a constant value of $p$ asymptotically is not
consistent with asymptotic flatness,
therefore there are no asymptotically flat solutions.
Finally consider the $\alpha-$equation of state,
from \ref{eq:apoly} and \ref{eq:com}
\begin{equation}
C=(1+\alpha\mu^{\beta-1})^{\frac{2\beta}{1-\beta}},
\label{eq:39}
\end{equation}} %\indent
in the case $\mu\rightarrow 0$, $C\rightarrow 1$ and there
might be asymptotically flat $\alpha-$polytropic spheres.
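These three asymptotic statements can be spot-checked with a computer algebra system; the following is a minimal sketch (assuming sympy is available), using the sample polytropic index $\beta=2$ in \ref{eq:39} purely for illustration:

```python
import sympy as sp

C, p, K, mu, alpha, gam = sp.symbols('C p K mu alpha gamma', positive=True)

# eq (37): mu = C^{gamma/2(1-gamma)}; as C -> 1 the density tends to 1,
# a non-zero constant, so the spacetime cannot be asymptotically flat.
mu37 = C**(gam/(2*(1 - gam)))
print(mu37.subs(C, 1))              # 1

# eq (38): C = (1 + ln(p/K))^{-2/K}; p -> K gives C -> 1.
C38 = (1 + sp.log(p/K))**(-2/K)
print(sp.simplify(C38.subs(p, K)))  # 1

# eq (39) with sample index beta = 2 (an assumption for the check):
# C = (1 + alpha*mu)^{-4}, and mu -> 0 gives C -> 1.
C39 = (1 + alpha*mu)**(-4)
print(sp.limit(C39, mu, 0))         # 1
```

So only the $\alpha-$equation of state leaves the asymptotic behaviour compatible with flatness, in agreement with the text.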
The same results are obtained using the more general vector
\begin{equation}
U_{\alpha}=(a\sqrt{C},\sqrt{(a^{2}-1)A},0,0),
\label{eq:310}
\end{equation}} %\indent
where $a$ is a constant.
\subsection{The Non-static Case.}\label{nns}
In the non-static case it is necessary
to assume an equation of state in order to
calculate a geometric-thermodynamic relation.
The $\gamma-$equation of state is assumed.
Then either from \ref{eq:gac1} and \ref{eq:gac2} and \ref{eq:L1},
or from \ref{eq:geq} and \ref{eq:34a} and \ref{eq:34b} the first and
second conservation laws are
\begin{equation}
\dot{\mu}-\gamma\mu\left(\frac{\dot{A}}{2A}
+\frac{\dot{B}}{B}\right)=0,
\label{eq:A1}
\end{equation}} %\indent
and
\begin{equation}
(\gamma-1)\mu'+\gamma\mu\frac{C'}{2C}=0,
\label{eq:A2}
\end{equation}} %\indent
respectively. The equation
\begin{eqnarray}
d\mu&=&\dot{\mu}dt+\mu'dr\nonumber\\
&=&-\gamma\mu\left(\frac{\dot{A}}{2A}+\frac{\dot{B}}{B}\right)dt
+\frac{\gamma\mu}{\gamma-1}\frac{C'}{2C}dr,
\label{eq:A3}
\end{eqnarray}
can be integrated when
\begin{equation}
\dot{C}=0,
\label{eq:A4}
\end{equation}} %\indent
and
\begin{equation}
(AB^2)'=0,
\label{eq:A5}
\end{equation}} %\indent
to give
\begin{equation}
\mu=\omega^{\frac{\gamma}{\gamma-1}}=A^{+\frac{\gamma}{2}}B^{+\gamma}C^{\frac{\gamma}{2(1-\gamma)}},~~~
\gamma\ne0,1,
\label{eq:311}
\end{equation}} %\indent
where the constant of integration has been taken to be
independent of $\te$ and $\phi$ and has been
absorbed into the line element.
The assumption $(AB^2)'=0$ is coordinate dependent and rarely holds: for
example $(AB^2)'=4r^3$ for Minkowski spacetime in spherical
coordinates, whereas $(AB^2)'=0$
for Minkowski spacetime in rectilinear coordinates.
Taking the limits $A,C\rightarrow 1$,
$B\rightarrow r^{2}$, for $\gamma>0$, $\mu \rightarrow$ a constant,
and for $\gamma<0$, $\mu$ diverges;
thus there are no asymptotically flat solutions. The $\alpha-$equation of
state \ref{eq:aeq} cannot be investigated without further information.
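The integration leading to \ref{eq:311} can be spot-checked symbolically. The sketch below (assuming sympy is available) uses sample profiles $A=e^{tr}$, $B^2=k/A$ (so that $(AB^2)'=0$, \ref{eq:A5}) and $C=1+r^2$ (so that $\dot{C}=0$, \ref{eq:A4}), and verifies that \ref{eq:311} then satisfies both conservation laws \ref{eq:A1} and \ref{eq:A2}:

```python
import sympy as sp

t, r, k, gam = sp.symbols('t r k gamma', positive=True)

# Sample metric profiles (assumptions, chosen only so that the side
# conditions hold): Cdot = 0, eq (A4), and (A*B^2)' = 0, eq (A5).
A = sp.exp(t*r)
B = sp.sqrt(k)*sp.exp(-t*r/2)   # B**2 = k/A, so (A*B**2)' = 0
C = 1 + r**2                    # independent of t

# eq (311): mu = A^{gamma/2} B^{gamma} C^{gamma/2(1-gamma)}
mu = A**(gam/2) * B**gam * C**(gam/(2*(1 - gam)))

# First conservation law, eq (A1), and second, eq (A2):
law1 = sp.diff(mu, t) - gam*mu*(sp.diff(A, t)/(2*A) + sp.diff(B, t)/B)
law2 = (gam - 1)*sp.diff(mu, r) + gam*mu*sp.diff(C, r)/(2*C)

print(sp.simplify(law1), sp.simplify(law2))  # both simplify to 0
```

Both laws vanish identically for these profiles, confirming the signs of the exponents in \ref{eq:311}.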
Discussion of non-existence of time dependent fluid spheres can also be found
in Mansouri (1977) \cite{bi:mansouri}.
\section{The Geometric-Thermodynamic equation in the ADM formalism.}
\label{sec:gte}
\subsection{Vanishing Shift ADM Formalism.}\label{vsf}
In the ADM (-1,+3) \cite{bi:ADM} formalism with vanishing shift the metric
is given by
\begin{equation}
g_{\alpha\beta}=(-N^{2},g_{ij}),~~~
g^{\alpha\beta}=(-N^{-2},g^{ij}),~~~
\sqrt{-g^{(4)}}=N\sqrt{g^{(3)}}.
\label{eq:41}
\end{equation}} %\indent
where $g^{(3)}$ is the determinant of the $3-$dimensional metric.
The reason the shift is taken to
vanish will become apparent later. The timelike unit vector field used here is
\begin{eqnarray}
U_{\alpha}&=&(N,0),~~~~~~
U^{\alpha}=(-\frac{1}{N},0),\nonumber\\
U_{i;t}&=&-N_{,i},~~~~~~
U_{i;j}=-\frac{1}{2N}g^{(3)}_{ij,t},\nonumber\\
U_{t;t}&=&U_{t;i}=0,
\label{eq:42}
\end{eqnarray}
there are other choices such as $U_{\alpha}=(-N,0)$,
and also $U_{\alpha}=(aN,bN_{i})$
for which the unit size condition $U_{\alpha}U^{\alpha}_{.}=-1$
implies $g^{ij}N_{i}N_{j}=\frac{a^{2}-1}{b^{2}}$.
For \ref{eq:42} the rotation vanishes and the remaining
geometric objects \ref{eq:geob} are
\begin{eqnarray}
h_{ij}&=&g_{ij},~~~~~~
\dot{U}_{\alpha}=(0,\frac{N_{i}}{N}),~~~~~~
\te=-\frac{1}{N}\left(\ln(g^{(3)})\right)_{,t},\nonumber\\
\sigma_{ij}&=&-g^{(3)}_{ij,t}
+g^{(3)}_{ij}\left(\ln(g^{(3)})\right)_{,t},\nonumber\\
K_{ij}&=&K_{ji}=-g_{ij,t}.
\label{eq:43}
\end{eqnarray}
The first conservation equation \ref{eq:ce1} becomes
\begin{equation}
\mu_{,t}-(\mu+p)\left(\ln\sqrt{g^{(3)}}\right)_{,t}=0,
\label{eq:44a}
\end{equation}} %\indent
and the second conservation equation \ref{eq:ce2} becomes
\begin{equation}
p_{,i}+(\mu+p)\frac{N_{,i}}{N}=0,
\label{eq:44b}
\end{equation}} %\indent
the $t$ component of the second conservation equation
\ref{eq:44b} vanishes identically.
If the shift is included in the above vector \ref{eq:42} one finds
\begin{equation}
2N^{2}U^{0}_{.i}=2NN_{,i}+(N_{k}N^{k})_{,i}
+N^{j}(2N_{j,i}-N^{k}g_{ik,j}),
\label{eq:withshift}
\end{equation}} %\indent
and further calculation proves intractable.
\subsection{Static Case.}\label{stc}
In the static case the first conservation equation vanishes identically
and the second conservation equation integrates immediately
and independently of the equation of state to give
\begin{equation}
\omega=\frac{1}{N},
\label{eq:45}
\end{equation}} %\indent
where the constant of integration has been absorbed into $N$.
\subsection{Non-static Case.}\label{nsc}
In the non-static case the $\gamma-$equation
of state has to be assumed in order to accommodate
the first conservation law \ref{eq:ce1}.
With the $\gamma-$equation of state \ref{eq:geq} the conservation equations
\ref{eq:44a} and \ref{eq:44b} integrate to give
\begin{equation}
\omega=\frac{1}{N}g^{(3)\frac{1}{2}(\gamma-1)},~~~~~~~\gamma\ne0,1,
\label{eq:46}
\end{equation}} %\indent
where in place of \ref{eq:A4} and \ref{eq:A5}
\begin{equation}
\dot{N}=0,
\label{eq:A6}
\end{equation}} %\indent
and
\begin{equation}
\gamma\left(\ln\left(\sqrt{g^{(3)}}\right)\right)_{,i}=0,
\label{eq:A7}
\end{equation}} %\indent
respectively. Constants of integration have been absorbed into the line
element. Substituting the spherically symmetric values of the previous
section into \ref{eq:46} gives \ref{eq:311} times a function
of $\sin\te$ which has been taken to be absorbable there.
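As a consistency check, \ref{eq:46} can be verified against the conservation laws \ref{eq:44a} and \ref{eq:44b}. The sketch below (assuming sympy is available) uses sample profiles $N=e^r$ (so $\dot{N}=0$, \ref{eq:A6}) and $g^{(3)}=1+t^2$ (so \ref{eq:A7} holds) together with the $\gamma-$equation of state:

```python
import sympy as sp

t, r, gam = sp.symbols('t r gamma', positive=True)

# Sample lapse and 3-metric determinant (assumptions, chosen so that
# Ndot = 0, eq (A6), and (ln sqrt(g))_,i = 0, eq (A7), hold):
N = sp.exp(r)
g3 = 1 + t**2

# mu = omega^{gamma/(gamma-1)} with omega taken from eq (46):
mu = N**(gam/(1 - gam)) * g3**(gam/2)
p = (gam - 1)*mu                     # gamma-equation of state

# First conservation law, eq (44a), and second, eq (44b):
law1 = sp.diff(mu, t) - (mu + p)*sp.diff(sp.log(sp.sqrt(g3)), t)
law2 = sp.diff(p, r) + (mu + p)*sp.diff(N, r)/N

print(sp.simplify(law1), sp.simplify(law2))  # both simplify to 0
```

Both laws vanish for these sample profiles, as required.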
The equations \ref{eq:45} and \ref{eq:46}
depend on the choice of velocity vector \ref{eq:42},
for example if a geodesic velocity vector is chosen then the
acceleration vanishes and \ref{eq:45} and \ref{eq:46} do not hold.
The conditions \ref{eq:A6} and \ref{eq:A7} do not appear to have an invariant
formulation. There are three things to note. The {\it first} is that these
derivatives do not occur in the covariant derivatives of the vector field
\ref{eq:42} and hence do not occur in the geometric objects \ref{eq:43}.
The {\it second} is that \ref{eq:A6} and \ref{eq:A7} are satisfied if
\begin{equation}
\{^t_{tt}\}=0,
\label{eq:A8}
\end{equation}} %\indent
and
\begin{equation}
\{^i_{jk}\}=0,
\label{eq:A9}
\end{equation}} %\indent
respectively, as they only occur in these Christoffel symbols.
The {\it third} is that \ref{eq:A6} and \ref{eq:A7} might {\bf solely} be
gauge conditions; \ref{eq:A6} imposes one constraint and \ref{eq:A7} imposes
three, totaling four, the usual number of differential
gauge constraints.
The Plebanski-Ryten (1961) \cite{bi:PR} gauge condition is
\begin{equation}
[(-g)^wg^{ab}_{..}]_{,b}=0,
\label{eq:A10}
\end{equation}} %\indent
for $w=\frac{1}{2}$ this is the harmonic gauge condition.
For $a=t$, \ref{eq:A10} is
\begin{equation}
-\frac{1}{N^2}(\ln(g^{(3)w}))_{,t}+\frac{\dot{N}}{N^3}=0.
\label{eq:A11}
\end{equation}} %\indent
For $a=x^i$, \ref{eq:A10} is
\begin{equation}
-\frac{N_{,i}}{N}+(\ln(g^{(3)w})g^{ij}_{..})_{,j}=0.
\label{eq:A12}
\end{equation}} %\indent
For $w\ne0$, \ref{eq:A6} and \ref{eq:A7} cannot be recovered,
except for Minkowski spacetime in rectilinear coordinates.
Thus the conditions \ref{eq:A6} and \ref{eq:A7} on the metric appear not to be
an example of Plebanski-Ryten gauge conditions.
It can be asked whether there is a non-static geometric-thermodynamic relation
which involves familiar gauge conditions instead of metric constraints such
as \ref{eq:A6} and \ref{eq:A7}. Inspection of \ref{eq:gac1} and \ref{eq:gac2}
with arbitrary vector field instead of \ref{eq:42} does not immediately give
a choice of vector field for which application of the Plebanski-Ryten gauge
\ref{eq:A10} simplifies matters enough for the problem to be tractable.
\subsection{$\gamma$-equation of state and the ADM.}\label{gad}
For the $\gamma-$equation of state the static relation \ref{eq:45} becomes
\begin{equation}
\mu=N^{\frac{\gamma}{1-\gamma}},~~~~~~~~~~~~\gamma\ne0,1,
\label{eq:48}
\end{equation}} %\indent
for the spacetime to be asymptotically flat the density $\mu$ must vanish
asymptotically implying that the lapse $N$ must vanish,
contradicting the assumption that the spacetime is asymptotically flat.
For the $\gamma-$equation of state \ref{eq:geq} the non-static relation \ref{eq:46} becomes
\begin{equation}
\mu=N^{\frac{\gamma}{1-\gamma}}g^{(3)\gamma/2},~~~~~\gamma\ne0,1,
\label{eq:49}
\end{equation}} %\indent
asymptotically $\mu\rightarrow r^{2\gamma}$
and the spacetime cannot be asymptotically flat.
For $\alpha-$polytropes the static case \ref{eq:45} gives
\begin{equation}
N=\left(1+\alpha\mu^{\beta-1}\right)^{\frac{\beta}{1-\beta}},
\label{eq:410}
\end{equation}} %\indent
and in this case it is possible for $N\rightarrow 1$ and
$\mu\rightarrow 0$ simultaneously as $r \rightarrow\infty$.
Thus for spacetimes where the rotation free vector
\ref{eq:42} can be introduced, and subject to the caveats mentioned above
for the non-static case:
i) there are no asymptotically flat $\gamma-$polytropes except possibly for
$\gamma=0$ or $1$,
ii) there are no asymptotically flat fluid spacetimes unless the fluid
index tends to a finite non-vanishing constant.
\section{Against Asymptotic Flatness.}
\label{sec:aaf}
\subsection{Length Scales.}\label{lsc}
On length scales from the outer solar system to cosmology there are
observations indicating that asymptotic flatness of the systems under
consideration is not correct. It is known that the dynamics of the outer
solar system have unexplained irregularities. For example from the figures
of Seidelmann {\it et al} (1980) \cite{bi:SKPSV} it appears that the
irregularity in Pluto's orbit is that the RA increases by about 2 arcsec more
than expected in 50 years, similarly the declination decreases by about
1 arcsec in 50 years. The irregularities are not neatly expressible by a
single quantity, as for example the orbit of Mercury was prior to general
relativity; but roughly this means that the orbit is boosted by about
2 arcsec in 50 years. This makes the construction of theories to explain the
irregularities difficult. In Roberts (1987) \cite{bi:mdr87} the effect
of a non-zero cosmological constant was investigated in order to explain the
irregularities of Pluto's orbit and it was found that
the cosmological constant would have to be about 12 orders of magnitude
bigger than the upper bound Zel'dovich (1968) \cite{bi:zeldovich} finds
from cosmological considerations.
Axenides {\it et al} (2000) \cite{AFP}
also discuss dynamical effects of the cosmological constant.
Scheffer (2001) \cite{scheffer} and Anderson {\it et al} (2001)
discuss dynamical irregularities in the paths of spacecraft.
The orbits of comets have unexplained irregularities, Marsden (1985)
\cite{bi:marsden}, Rickman (1988) \cite{bi:rickman}, and Sitarski (1994)
\cite{bi:sitarski}; for example
at least 6 comets have unexplained forces acting toward the ecliptic.
Qualitatively this is exactly what would be expected from Kerr geodesics
\cite{bi:chandrasekhar} page 363,
\begin{quote}
In summary then, the bound and the marginally bound orbits must
necessarily cross the equatorial plane and oscillate about it.
\end{quote}
but quantitatively the effect is many orders of
magnitude out: on solar system length scales the Kerr modification of
Schwarzschild geometry is intrinsically short ranged.
These solar system orbital problems might
originate from the oblateness of the sun, Landgraf (1992) \cite{bi:landgraf}.
There are theories which have gravitational potential with an exponential term
and mass scale $m_P(m_H/m_P)^n$, where $m_H$ is a typical hadron mass,
$m_P$ is the Planck mass, and $n=0,1,$ and sometimes $2$.
Satellite and geophysical data for $n=2$ theories show that
they are not viable unless $m_H>10^3~{\rm GeV}$,
Gibbons and Whiting (1981) \cite{bi:GW}.
Other searches for an adjusted potential have been undertaken by
Jarvis (1990) \cite{bi:jarvis}.
\subsection{The Exterior Schwarzschild Solution as a Model.}\label{esm}
The exterior Schwarzschild solution is a reasonable model
of the solar system outside the sun.
A fluid solution can be argued to be a better approximation
to the matter distribution as it takes some account of interplanetary space
not being a vacuum.
Any exterior fluid spacetime would have different geodesics than
the vacuum Schwarzschild solution,
consequently the orbits of the planets would be
different from that suggested by the Schwarzschild solution:
how to calculate these geodesics for spherically symmetric spacetimes
is shown below. The magnitude of the upper limit of the effective
cosmological constant is about $\rho_{\Lambda}=10^{-16}{\rm g.~ cm.}^{-3}$;
that this is too small to explain Pluto's irregular orbit was shown in
Roberts (1987) \cite{bi:mdr87}.
Thus to explain Pluto's irregular orbit using a fluid
the critical density must be larger than $\rho_{\Lambda}$.
$\rho_{\Lambda}$ is much larger than the mean density of
interplanetary space which is of the order of $10^{-29}{\rm g.~ cm.}^{-3}$
(or $10^{-5}$ protons ${\rm cm.}^{-3}$).
The density of interplanetary matter is insignificant compared to the density
contribution from the planets, for example for Jupiter
\rho_{{\rm Jupiter}}=\frac{3}{4\pi}M_{{\rm Jupiter}}r_{{\rm Jupiter}}^{-3}
\approx 10^{-12} {\rm g.~ cm.}^{-3}$, where the radius
$r_{{\rm Jupiter}}$ is the semi-major axis of the planet's orbit.
This density is above $\rho_{\Lambda}$ and might be above $\rho_{C}$.
Taking a fluid to model the planets is an unusual step,
but the alternative of seeking an $n-$body solution
to the field equations is not viable because even the $2-$body
solution is not known. Looking at constant galactic rotation curves
one might try an approximation. As noted in the last paragraph
of section 5 of \cite{bi:mdr91}:
\begin{quote}
For constant circular velocities over a large distance it is necessary to have
an approximately logarithmic potential. Thus the metric will have an
approximately logarithmic term. The Riemann tensor is constructed from the
second derivatives of the metric and the square of the first derivatives
of the metric. For a logarithmic potential these will both be of the
order $r^{-2}$ and thus a linear analysis might not be appropriate.
\end{quote}
This suggests that only an approach using an exact solution will work.
One can assume that the system under
consideration can be modeled by a
static spherically symmetric spacetime with line element \ref{eq:ssst}.
Constructing the geodesics using Chandrasekhar's (1983) \cite{bi:chandrasekhar}
method, the geodesic Lagrangian is given by
\begin{equation}
2{\cal L}=-C\dot{t}^2+A\dot{r}^2+B\dot{\te}^2+B\sin^2\te\dot{\phi}^2.
\label{eq:gl}
\end{equation}} %\indent
The momenta are given by
\begin{equation}
p_a=\frac{\partial {\cal L}}{\partial \dot{x}_a}
\label{eq:mom}
\end{equation}} %\indent
and are
\begin{equation}
p_t=-C\dot{t},~~~
p_r=A\dot{r},~~~
p_\te=B\dot{\te},~~~
p_\phi=B\sin^2\te\dot{\phi}.
\label{eq:mar}
\end{equation}} %\indent
Euler's equations are
\begin{equation}
\dot{p}_a=\partial_a{\cal L}
\label{eq:Euler}
\end{equation}} %\indent
For static spacetimes with $\partial_t A=\partial_t B=\partial_tC=0$,
giving $\frac{\partial{\cal L}}{\partial t}=0$ so that the time component of the Euler equation
\ref{eq:Euler} gives $\frac{d p_t}{d \tau}=0$, integrating
\begin{equation}
-p_t=C\frac{d t}{d \tau}=E {\rm ~~a~ constant~ along~ each~ geodesic}.
\label{eq:energy}
\end{equation}} %\indent
Similarly by spherical symmetry one can take $\partial_\phi A=\partial_\phi B=\partial_\phi C=0$,
giving $\frac{\partial {\cal L}}{\partial \phi}=0$ so that the $\phi$ component of the Euler
equation \ref{eq:Euler} gives $\frac{d p_\phi}{d \tau}=0$, integrating
\begin{equation}
p_\phi=B\sin^2\te\frac{d \phi}{d \tau}= {\rm a~ constant}.
\label{eq:C86}
\end{equation}} %\indent
For the $\te$ component
\begin{equation}
\frac{\partial{\cal L}}{\partial\te}=B\sin\te\cos\te\frac{d \phi}{d \tau},
\label{eq:thcom}
\end{equation}} %\indent
the Euler equation \ref{eq:Euler} is
\begin{equation}
\frac{d}{d\tau}p_\te=\frac{d}{d\tau}B\dot{\te}
=\frac{\partial{\cal L}}{\partial\te}=B\sin\te\cos\te\frac{d\phi}{d\tau},
\label{eq:theq}
\end{equation}} %\indent
choosing to assign the value $\pi/2$ to $\te$ when $\dot{\te}$ is zero,
then $\ddot{\te}$ will also be zero; and $\te$ will remain constant
at the assigned value. The geodesic is described in an invariant plane which
can be taken to be $\te=\pi/2$. Equation \ref{eq:C86} now gives
\begin{equation}
p_\phi=B\dot{\phi}=L {\rm ~a~ constant~ along~ each~ geodesic}
\label{eq:Leq}
\end{equation}} %\indent
where $L$ is the angular momentum about an axis normal to the invariant plane.
Substituting into the Lagrangian
\begin{equation}
-\frac{E^2}{C}+A\dot{r}^2+\frac{L^2}{B}=2{\cal L}=-1 {\rm ~or~} 0,
\label{eq:sub}
\end{equation}} %\indent
where $2{\cal L}=-1{\rm ~or~}0$ depending on whether time-like or null geodesics
are being considered. Rearranging
\begin{equation}
A\dot{r}^2=-\frac{L^2}{B}+\frac{E^2}{C}+2{\cal L}.
\label{eq:rea}
\end{equation}} %\indent
Taking $r$ to be a function of $\phi$ instead of $\tau$
and using \ref{eq:Leq} gives
\begin{equation}
\left(\frac{dr}{d\phi}\right)^2=-\frac{B}{A}+\frac{B^2}{AL^2}\left(\frac{E^2}{C}+2{\cal L}\right),
\label{eq:itp}
\end{equation}} %\indent
now letting
\begin{equation}
u\equiv \frac{1}{r}
\label{eq:defu}
\end{equation}} %\indent
as in the usual Newtonian analysis
\begin{equation}
\left(\frac{du}{d\phi}\right)^2=-\frac{u^4B}{A}
+\frac{u^4B^2}{AL^2}\left(\frac{E^2}{C}+2{\cal L}\right),
\label{eq:gencase}
\end{equation}} %\indent
seeking a thermodynamic interpretation one can substitute the enthalpy $h$ for
the lapse $C=h^2$, but $A$ and $B$ are still arbitrary so that this is not
pursued. Inserting the K\"ottler (Schwarzschild solution with cosmological
constant) values of the metric
\begin{equation}
B=r^2,~~~
C=\frac{1}{A}=1-\frac{2m}{r}+\frac{\Lambda}{3}r^2,
\label{eq:Kv}
\end{equation}} %\indent
and taking $2{\cal L}=-1$ for time-like geodesics equation \ref{eq:gencase} becomes
\begin{equation}
\left(\frac{du}{d\phi}\right)^2=-u^2+2mu^3+\frac{2mu}{L^2}
-\frac{1-E^2}{L^2}-\frac{\Lambda}{3u^2L^2}-\frac{\Lambda}{3}
\label{eq:geoK}
\end{equation}} %\indent
which is equation (4) of Reference \cite{bi:mdr87},
the last term suggesting the possibility of constant rotation curves.
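The substitution can be verified symbolically; the sketch below (assuming sympy is available) inserts the K\"ottler functions \ref{eq:Kv} into \ref{eq:gencase} and compares with \ref{eq:geoK}:

```python
import sympy as sp

u, m, E, L, Lam = sp.symbols('u m E L Lambda', positive=True)

# Koettler metric functions, eq (Kv), written in terms of u = 1/r:
B = 1/u**2
C = 1 - 2*m*u + Lam/(3*u**2)
A = 1/C

# General orbit equation, eq (gencase), with 2*Lagrangian = -1 (time-like):
rhs = -u**4*B/A + u**4*B**2/(A*L**2)*(E**2/C - 1)

# Claimed closed form, eq (geoK):
claimed = (-u**2 + 2*m*u**3 + 2*m*u/L**2 - (1 - E**2)/L**2
           - Lam/(3*u**2*L**2) - Lam/3)

print(sp.simplify(rhs - claimed))  # 0
```

The difference simplifies to zero, term by term reproducing equation (4) of \cite{bi:mdr87}.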
One can investigate whether there is any adjustment of the Newtonian potential
which will give constant rotation curves, as required for galactic rotation.
Taking (c.f. Will (1993)\cite{bi:will} eq.4.6)
\begin{equation}
g_{tt}\approx -1+2U,
\label{eq:n2}
\end{equation}} %\indent
where $U$ is the Newtonian gravitational potential.
Now assume additionally the particular form for
a spherically symmetric spacetime
\begin{equation}
B=r^2,~~~
A=\frac{1}{C}\approx \frac{1}{1-2U}\approx 1+2U
\label{eq:furt}
\end{equation}} %\indent
inserting in \ref{eq:gencase} and expanding for small $U$ everywhere
\begin{equation}
\left(\frac{du}{d\phi}\right)^2=-\frac{1-E^2}{L^2}-u^2(1-2U)+\frac{2U}{L^2},
\label{eq:expan}
\end{equation}} %\indent
In particular one might expect that constant rotation is given by the
middle term so that
\begin{equation}
-u^2(1-2U)=\alpha {\rm ~~a ~constant},
\label{eq:midal}
\end{equation}} %\indent
rearranging for $U$ we find
\begin{equation}
U=\frac{1}{2}+\frac{\alpha}{2}r^2,
\label{eq:res}
\end{equation}} %\indent
this suggests that the correct addition to $U$ to produce constant rotation
curves is a term proportional to $r^2$; this is given by the addition of a
cosmological constant, and such a spacetime is given by K\"ottler's solution \ref{eq:Kv}.
One might ask what is the next simplest spacetime after one with a
cosmological constant which has an $r^2$ increasing potential, and
perhaps this is the interior Schwarzschild solution.
This can be thought of as modeling the halo of a galaxy with the interior
Schwarzschild solution and calculating the geodesics to see if they
give constant motion. Newtonian modeling has been done by
Binney and Tremaine (1987) \cite{bi:BT}.
For the interior Schwarzschild solution, Adler, Bazin and Schiffer
(1975) \cite{bi:ABS} equation 14.47, one has
\begin{eqnarray}
A&=&\frac{1}{1-\frac{r^2}{\hat{R}^2}},~~~
B=r^2,\nonumber\\
C&=&\left[\frac{3}{2}\sqrt{1-\frac{r_0^2}{\hat{R}^2}}
-\frac{1}{2}\sqrt{1-\frac{r^2}{\hat{R}^2}}\right]^2,\nonumber\\
&&{\rm for}~~~
r\le r_0,~~~
\hat{R}^2=\frac{3c^2}{8\pi\kappa\rho},
\label{eq:Schex}
\end{eqnarray}
inserting into \ref{eq:gencase} one gets
\begin{equation}
\left(\frac{du}{d\phi}\right)^2=-u^2+\frac{1}{\hat{R}^2}
+\frac{1}{L^2}\left[-1+\frac{1}{u^2\hat{R}^2}
+\left(\frac{2E}{3\sqrt{\frac{u^2\hat{R}^2-u^2r^2_0}{u^2\hat{R}^2-1}}-1}\right)^2\right].
\label{eq:rotis}
\end{equation}} %\indent
The $\frac{1}{\hat{R}^2}=\frac{8\pi\kappa\rho}{3c^2}$ term can be thought of as giving
constant rotation curves proportional to the halo density $v_c\propto \rho$.
What one expects from the Tully-Fisher (1977) \cite{bi:TF} relationship
is that $v_c^4\propto L\propto M$, where here $L$ denotes the luminosity.
There might be an exact solution in
Delgaty and Lake (1998) \cite{bi:DL} which will model this more closely.
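The substitution of \ref{eq:Schex} into \ref{eq:gencase} can be spot-checked numerically. The sketch below (plain Python, with sample values that are assumptions rather than data) evaluates both sides; note that the first two terms obtained from \ref{eq:gencase} come out as $-u^2+1/\hat{R}^2$:

```python
from math import sqrt

# Sample values (assumptions): an interior point r = 1/u < r0 of a star
# of radius r0, curvature scale Rh, and conserved E, L on the geodesic.
u, r0, Rh, E, L = 2.0, 1.0, 3.0, 1.2, 2.0
r = 1.0/u

# Interior Schwarzschild metric functions, eq (Schex):
A = 1.0/(1.0 - r**2/Rh**2)
B = r**2
C = (1.5*sqrt(1.0 - r0**2/Rh**2) - 0.5*sqrt(1.0 - r**2/Rh**2))**2

# General orbit equation, eq (gencase), with 2*Lagrangian = -1:
lhs = -u**4*B/A + u**4*B**2/(A*L**2)*(E**2/C - 1.0)

# Closed form obtained by substitution (first two terms -u^2 + 1/Rh^2):
closed = (-u**2 + 1.0/Rh**2
          + (1.0/L**2)*(-1.0 + 1.0/(u**2*Rh**2)
             + (2.0*E/(3.0*sqrt((u**2*Rh**2 - u**2*r0**2)
                                /(u**2*Rh**2 - 1.0)) - 1.0))**2))

print(abs(lhs - closed) < 1e-12)
```

The two expressions agree to machine precision at the sample point.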
\subsection{Rates of Decay.}\label{rod}
In general one can ask what exact solutions to gravitational field
equations give what rate of decay. This problem could also be
studied numerically. The rate of decay of scalar fields has been
discussed in the last paragraph of the introduction of \cite{bi:mdr96}.
Including other fields one roughly gets the rates of decay:
type B conformal scalars $>$ perfect fluids $>$ the gravitational field
$>$ type O scalars $>$ electromagnetic fields $>$ type A conformal scalars
$>$ coupled and interacting fields and fluids.
The type B conformal scalars and perfect fluids are not usually asymptotically
flat. Of course, for example, one would expect there to be conformal
scalar solutions which are neither type A nor B and these have unpredictable
rates of decay, so this ordering is not absolute.
\subsection{The Spacetime of Elementary Particles.}\label{els}
Rates of decay are not only important on long distance scales.
As pointed out in the second paragraph of the introduction of \cite{bi:mdr96}
the exact solution of the spherically symmetric spacetime of the
Klein-Gordon-Einstein equations is not known, except in the massless case
where the static spherically symmetric field equations $R_{ab}=2\phi_a\phi_b$
have the solution
\begin{eqnarray}
ds^2&=&\exp\left(-\frac{2m}{r}\right)dt^2
-\frac{\eta^4}{r^4}\exp\left(\frac{2m}{r}\right)
{\rm cosech}^4\left(\frac{\eta}{r}\right)dr^2\nonumber\\
&&-\eta^2\exp\left(\frac{2m}{r}\right){\rm cosech}^2
\left(\frac{\eta}{r}\right)d^2\Sigma,
~~~\phi=\sigma/r
\label{eq:4}
\end{eqnarray}
where $\eta^2=m^2+\sigma^2$ and $m$ is interpreted as the mass and $\sigma$
the scalar charge, see for example, Roberts (1985) \cite{bi:mdr85}.
Only the massless exact solution is known so that the exact modification of
the shape of the Yukawa potential for the meson is not known:
it proves difficult to approximate. The spacetime of mesons has been
discussed by Fisher (1948) \cite{bi:fisher}, Ross (1972) \cite{bi:ross},
Nagy (1979) \cite{bi:nagy}, and Ho (1995) \cite{bi:ho}.
The Yukawa potential was invented,
Yukawa (1935) \cite{bi:yukawa}, Landau (1990) \cite{bi:landau},
to account for the $\pi$-meson as the exchange quantum in the force between
two nucleons; this is by analogy with electromagnetism where it is the
exchange of a photon that is the origin of the electric and magnetic forces
between electrons. The exact form of the potential is
$V=-(1/r)\exp(-m_\pi r)$ times a function
which involves the relative spin orientations. The Yukawa potential is only
an approximation as Quantum Chromodynamics is really the theory of strong
interaction; the $\pi$-meson or pion successfully describes the residual
force, and is thought to work up to momentum transfer of about 1 GeV/c.
In strong interaction there is also evidence of a linear confining potential.
The Yukawa potential for mesons can be measured using form factors, several
mesons are needed, see for example Gross, Van Orden and Holinde (1992)
\cite{bi:VH}. The hypothetical Higgs particle also has a Yukawa potential.
In order to measure the Higgs Yukawa potential one needs to measure the
coefficients of the $H^3$ and $H^4$ terms in
\begin{equation}
V=\frac{M_{H}^{2}}{2}H^2+\left(\frac{M_{H}^{2}}{2v}\right)H^3+\left(\frac{M_{H}^{2}}{2v^2}\right)H^4,
\label{eq:sallydawson}
\end{equation}} %\indent
and verify that the coefficients are as expected. At present there are no
measurements of these terms. To measure them one would need to observe
multi-Higgs production: this is further discussed in Djouadi {\it et al}
(1999) \cite{bi:DKMZ}. In the standard model the Higgs coupling enters by
a quartic coupling: however in supersymmetric theories the quartic couplings
are connected to gauge couplings which are known, so that in supersymmetric
models it is easier to calculate the coefficients.
\subsection{What Happens on Long Distance Scales.}\label{lds}
If asymptotic flatness is incorrect then what does happen on long distance scales?
Non-asymptotic flatness introduces the problem of what happens to the potential
at large distances: does it increase for ever?
What could happen is that the one body problem becomes inappropriate:
one needs a solution which takes into account two or more bodies.
In particular for the solar system if there is a growing term
in the potential one might take that it has stopped growing well
before the next star.
Torbett and Smoluchowski (1984) \cite{TS} argue that there are bodies
orbiting the Sun at $10^5~{\rm AU}\approx0.5~{\rm pc.}$,
which might be a maximum orbiting distance.
Puyoo and Jaffel (1998) \cite{PJ} study the interface between the heliopause
and the interstellar medium, this is at about $10^3~{\rm AU}$ and they find a
high interstellar hydrogen density of $0.24\pm0.05~{\rm cm.^{-3}}$,
a proton density of $0.043\pm0.005~{\rm cm.^{-3}}$,
a helium density of $(2.7\pm0.5)\times10^{-2}~{\rm cm.^{-3}}$, and so forth.
One consequence of these non-vanishing densities is that in gravitation,
as in quantum field theory, it becomes difficult to say what a vacuum is
and whether it has energy, see Roberts (2000) \cite{mdr2000}.
One can ask what sort of metric describes spacetime at various distances
from the Sun, and it seems that some sort of onion model is called for.
The standard picture is that the interior Schwarzschild solution is matched
to the exterior Schwarzschild solution as in Adler {\it et al} (1975) \S 14.2.,
and then the exterior Schwarzschild solution is matched to a Friedmann model with
a specific equation of state as in Stephani (1985) \cite{bi:stephani} \S 27.3.
Perhaps there should be more than three regions.
The sun has a mean density of $\rho_{Sun}=1.409 {\rm g. cm.^{-3}}$ Allen (1962)
\cite{bi:allen}; however its density varies considerably with
distance from the centre, see the Allen (1962) \cite{bi:allen}
table on page 163;
there is a big jump at about half its radius, which can be modeled by a dense
core, so perhaps two interior solutions are needed to describe it.
Dziembowski {\it et al} (1990) \cite{dz}
and Basu {\it et al} (2000) \cite{basu},
use inversion techniques to show that the sun has many layers with
different speeds of sound and densities.
The solar system splits up into three regions:
the inner where the general relativistic corrections to
Newtonian theory are needed, the middle where Newtonian theory works,
and the outer where a term explaining the irregularity in Pluto's orbit
is needed. Next one needs a metric to describe the effect of local stars,
then of the galaxy, and then of groups of galaxies. The Robertson-Walker
cosmological region comes next, and after this perhaps a chaotic region.
One can ask, if a particle say at $1$ parsec from the Sun is not in a
flat region, what causes the most deviation from flatness.
For simplicity assume that a Newtonian potential will give correct ratios
between the contributions, so that the quantity $\phi/G=M/R$ is calculated
in units of the Sun's mass over parsecs.
A parsec from the Sun is about as isolated as a particle in the nearby galaxy
could be expected to be.
The deviation from flatness of the
metric is approximately given by equation \ref{eq:n2} with $U=\phi$.
The quantities in
Allen (1962) \cite{bi:allen} \S132,133,135,136, for the masses and distances
associated with the local star system (Gould belt), the galaxy, the local
group of galaxies, the Universe are used.
Working to the nearest order of magnitude, the local star system has diameter
$1,000$ pc. and mass $1\times 10^8~M_{Sun}$, assuming the Sun is near the edge
gives the potential $M/R\approx10^5~M_{Sun}{\rm pc.}^{-1}$.
The galaxy has diameter $25$ kpc. and mass $1.1\times 10^{11}~M_{Sun}$,
but the distance of the Sun from the centre is $8.2\pm0.8$ kpc.,
using this distance $M/R\approx10^8~M_{Sun}{\rm pc.}^{-1}$.
The local group of galaxies consists of $16$ galaxies, suggesting an
approximate mass of $10^{12}~M_{Sun}$, whose centre is $0.4$ Mpc. away
giving $M/R\approx10^7~M_{Sun}{\rm pc.}^{-1}$.
van den Bergh (1999) \cite{berg} finds $35$ local group members
and mass $M_{LG}=(2.3\pm0.6)\times10^{12}M_{Sun}$;
and that the zero surface velocity, which separates the local group
from the field that is expanding with the Hubble flow,
has radius $R_0=1.18\pm0.15~Mpc.$.
The Universe has a characteristic length scale $R=c/H\approx3,000~{\rm Mpc.}$
and the mass of the observable Universe is $10^{54}g.$, again one can form a
ratio $M/R$, but it has no direct meaning because of homogeneity,
one finds $M/R\approx 10^{11}$. To compare with the potential on the surface
of the Earth note that the Earth's mean radius
$R_{Earth}\approx6\times10^3{\rm Km.}=2\times10^{-12}{\rm pc.}$ and has mass
$M_{Earth}\approx6\times10^{27}{\rm g.}=3\times10^{-6}M_{Sun}$, giving
$M/R\approx10^6~M_{Sun}{\rm pc.}^{-1}$ for the contribution from the Earth's
mass, $M/R\approx10^5~M_{Sun}{\rm pc.}^{-1}$ for the contribution from the
Sun's mass. Collecting these results together gives the ratios
\begin{equation}
10^6~~:~~10^5~~:~~1~~:~~10^5~~:~~10^8~~:~~10^7~~:~~10^{11}
\label{eq:ratios}
\end{equation}} %\indent
This suggests that either the
Newtonian approximation is not appropriate, that asymptotic flatness is not
a physical notion, or both.
For the onion model it suggests that the metric describing the effect of
local stars, the galaxy, and the local group of galaxies might not be needed
because of the Universe's higher ratio.
Another approach to what sort of notions are useful in describing stellar
systems is as follows. {\it A priori} one would not wish to exclude the
possibility that near the centre of the galaxy there are stellar systems
consisting of several stars, many planets, many asteroids and comets, lots
of dust, and which are close say only a light year away from other stellar
systems. Dynamics for such a stellar system perhaps could still be calculated
in some regions, but there are no notions of a one-body system, vacuum
field equations, or asymptotic flatness to use in an explicit manner.
\subsection{Mach's Principle.}\label{mhp}
Mach's principle can be formulated in many ways: Barbour and Pfister
(1995) \cite{bi:BP} p.530 list 21, Bondi and Samuel (1997) \cite{bi:BS}
list 10. Different formulations can lead to contradictory conclusions:
for example, Bondi and Samuel's (1997) \cite{bi:BS} Mach3 and Mach10 give
rise to diametrically opposite predictions when applied to the Lense-Thirring
effect. A Newtonian formulation has equations which can be used to describe
dynamics rather than recourse to dark matter,
Roberts (1985) \cite{bi:mdr85/2}. Lack of asymptotic flatness suggests that
a system cannot be isolated. This is unlike thermodynamics where isolated
heat baths are ubiquitous, and unlike electrostatics where the charge inside
a charged cavity can be zero. So why should a Minkowski cavity in a
Robertson-Walker universe be excluded? Field equations and junction
conditions allow such a cavity to be constructed, so it has to be excluded by principle.
The answer is that it is different
from electrostatics because gravitation is monopolar in nature. Any departure
from homogeneity in the exterior region to a charged cavity would mean a
change in charge which would quickly attract the opposite charge and cancel
out; in the gravitational case, however, this screening does not happen, and a change in
homogeneity exterior to a Minkowski cavity (I think) would quickly change
the spacetime from being flat. The above suggests a new formulation of
Mach's principle: {\sc there are no flat regions of physical spacetime.}
What happens for an initial value formulation of this is unclear:
presumably it means that a
well-defined initial surface does not develop into a surface part of which
is flat. The above statement of Mach's principle is a particular case
of the statement of Einstein (1953) \cite{bi:einstein}, Ehlers (1995)
\cite{bi:ehlers95}, and Bondi and Samuel (1997) \cite{bi:BS} Mach9
{\sc there are no absolute elements}: a flat metric is an absolute element.
\subsection{Isolated Systems.}\label{iss}
Another way of looking at asymptotic flatness is to note that it implies that
the solar system is isolated. {\bf Isolated systems} seem to be an ideal
which is appealed to in order to make problems soluble. The necessity of
addressing soluble problems is discussed in Medawar (1982) \cite{bi:medawar}.
In practice an isolated system is only an approximation, there is always
some interaction with the external world and for the assumption of an
isolated system to work this must be negligible. The assumption that
systems can be isolated is made throughout science, but there appears to
be no discussion of what this involves in texts in the philosophy of
science. Three examples of isolated systems are now given.
The {\it first} is photosynthesis: one can think of each leaf
on a tree as an isolated entity with various chemical reactions happening
independent of the external world, but this is only an approximation
as the leaf exchanges chemicals with the rest of the tree, so perhaps
the tree should be thought of as the isolated system; further, one
can think of the entire biosphere as an isolated entity which converts
$3\times 10^{21}$ Joules per year into biomass from a total of
$3\times 10^{24}$ Joules per year of solar energy falling on the
Earth, see for example Borisov (1979) \S 1.2.1 \cite{bi:borisov}.
The {\it second} is in thermodynamics and statistical mechanics,
here the isolatability of systems is taken as a primitive undefined concept,
see for example Rosser (1982) \cite{bi:rosser} page 38.
The {\it third} is of experiments where a single electron is taken to be
isolated, Ekstrom and Wineland (1980) \cite{bi:EW}: the single electron
is confined for weeks at a time in a ``trap'' formed out of electric and
magnetic fields.
\section{The Tolman-Ehrenfest Relation.}
\label{sec:ter}
\subsection{The Radiation Fluid.}\label{trf}
For a radiation fluid $\gamma=\frac{4}{3}$,
and by \ref{eq:gpoly} the fluid index is
\begin{equation}
\omega=(3p)^{\frac{1}{4}}.
\label{eq:51}
\end{equation} %\indent
The Stefan-Boltzmann law is
\begin{equation}
p=\frac{a}{3}T^{4},
\label{eq:52}
\end{equation} %\indent
where $T$ is the temperature and $a$ is the radiation constant.
Thus
\begin{equation}
\omega=a^{\frac{1}{4}}T.
\label{eq:53}
\end{equation} %\indent
Assuming the spacetime is static
and admits the rotation free vector \ref{eq:42},
equations \ref{eq:45} and \ref{eq:53} give
\begin{equation}
N=a^{-\frac{1}{4}}T^{-1},
\label{eq:54}
\end{equation} %\indent
thus showing that the lapse $N$
is inversely proportional to the temperature $T$.
This is the Tolman-Ehrenfest (1930) \cite{bi:TE} relation.
Lapse only spacetimes have been studied by Roberts (1994) \cite{bi:mdr94}
and Schmidt (1996) \cite{bi:schmidt}.
For the non-static case \ref{eq:46} and \ref{eq:53} give
\begin{equation}
Ng^{(3)\frac{1}{6}}=a^{-\frac{1}{4}}T^{-1}.
\label{eq:55}
\end{equation} %\indent
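As a quick numerical consistency check on the algebra above (a sketch only: the constant of proportionality implicit in \ref{eq:45} is set to one, and the quoted value of the radiation constant is an assumption of this example, not taken from the text):

```python
import math

a = 7.5657e-15  # radiation constant, erg cm^-3 K^-4 (assumed value, for illustration)

for T in (1.0, 10.0, 3000.0):
    p = (a / 3.0) * T ** 4          # the Stefan-Boltzmann law above
    omega = (3.0 * p) ** 0.25       # fluid index of the radiation fluid
    assert math.isclose(omega, a ** 0.25 * T)    # omega = a^(1/4) T
    N = 1.0 / omega                 # lapse, with the proportionality constant taken as 1
    assert math.isclose(N * T, a ** -0.25)       # N T = const: the Tolman-Ehrenfest relation
```

The same substitution in the non-static relation simply replaces $N$ by $Ng^{(3)\frac{1}{6}}$.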
\section{The Geometric-thermodynamic equation and Cosmic Censorship.}
\label{sec:cc}
\subsection{Scalar Field Solutions.}\label{sfs}
It is known that spherically symmetric asymptotically flat solutions
to the Einstein massless scalar field equations do not possess event horizons,
both in the static case Roberts (1985) \cite{bi:mdr85}
and in the non-static case Roberts (1996) \cite{bi:mdr96}.
Massless scalar field solutions are equivalent to perfect fluid
solutions with $\gamma=2$ and
$U_{a}=\phi_{a}(-\phi_{c}\phi^{c})^{-\frac{1}{2}}$;
for the above scalar field solutions the vector field is not
necessarily timelike so that the perfect fluid correspondence
does not follow through. It can
be argued that an asymptotically flat fluid would be
a more realistic model of a collapsed object,
because a fluid provides a better representation
of the stress outside the object.
In the spherically symmetric case a global coordinate system
of the form \ref{eq:ssst} can be chosen
and a necessary condition for there to be an event horizon is that,
at a finite non-zero value of $r$, $C\rightarrow\infty$.
From \ref{eq:com}, \ref{eq:37}, \ref{eq:38}, and \ref{eq:311}
it is apparent that this only occurs for some exceptional
equations of state and values for the fluid density.
Relaxing the requirement of spherical
symmetry, equations \ref{eq:49} and \ref{eq:410}
show that for there to be a null surface $N\rightarrow 0$
or $\omega\rightarrow\infty$;
however, the derivation of both \ref{eq:49} and \ref{eq:410}
requires the vector \ref{eq:42}, whose components
diverge as $N\rightarrow 0$;
also, to show that \ref{eq:49} and \ref{eq:410}
hold globally it is necessary to show that the
coordinate system \ref{eq:41} can be set up globally.
The above suggests that it is unlikely that
spacetimes with a perfect fluid present have
event horizons except in contrived
circumstances.
\section{Acknowledgements.}
I would like to thank
B.G.~Marsden for discussion about cometary orbits and
Warren Buck,
Sally Dawson,
Helmut Eberl,
Franz Gross,
Paul Stoler,
Erik Woolgar,
and Peter Zerwas
for discussion about the Yukawa potential.
\section{Introduction}
Two types of hydrodynamic codes are currently
in use for cosmological applications:
mesh based codes pioneered by Vishniac and co-workers (Chiang, Ryu, \&
Vishniac 1989; Ryu, Vishniac, \& Chiang 1990) and
Cen and co-workers (Cen et al.\ 1990; Cen 1992)
and used by several groups including a TVD
(``Total Variation Diminishing") variant (Ryu et al.\ 1993)
or a PPM (``Piecewise Parabolic Method") variant (Bryan et al.\ 1995),
and, alternatively,
particle based smoothed particle hydrodynamics codes (``SPH")
used by a variety of groups (Evrard 1988; Hernquist \& Katz 1989;
Steinmetz 1996; Owen et al.\ 1998a).
The latter codes can concentrate computational resources
into the highest density regions of greatest interest,
but they suffer in low density regions, at caustics,
and, due to the large computational overhead, they have relatively
small particle number and hence have relatively poor
mass resolution which can induce two body relaxation even in the
high density regions (\cite{kms96}; \cite{sw97}).
But the mesh codes also have, along with their virtues of accurate
treatment of shocks and caustics, good mass resolution, and known accuracy
and convergence properties, quite serious weaknesses,
the primary one being poor
spatial resolution in the high density regions.
A detailed comparison of five codes -- three independent mesh codes
and two independent SPH codes --
assessing the virtues and details of the two approaches
was presented in Kang et al.\ (1994b).
Another major such comparative project
was completed recently (Frenk et al.\ 1998)
with a still wider range of codes being tested.
What is the accuracy of a mesh code in resolving structures comparable
to or smaller than the mesh size?
A quantitative assessment of this in the cosmological context was presented
by Cen (1992) for an aerodynamics-based cosmological hydrodynamic code,
which has an effective artificial viscosity of known properties.
Anninos \& Norman (1996) did some very interesting
convergence tests on X-ray clusters by varying
numerical resolution using a multi-grid eulerian hydrocode.
Bryan \& Norman (1998) examined
resolution effect on various quantities
related to simulated X-ray clusters using
PPM eulerian code (Bryan et al.\ 1995).
Owen et al.\ (1998b) have studied various scaling
properties in scale-free ($P_k\propto k^{-1}$), adiabatic SPH simulations.
In the present paper we examine the TVD shock capturing code
originally developed by Harten (1984)
and reformulated with gravity for high mach number,
cosmological applications
by Ryu et al.\ (1993).
This code has been used by Cen, Ostriker and co-workers to study
the properties of X-ray clusters of galaxies and Lyman alpha clouds,
and is being used for work on galaxy formation.
The primary result which we find
can be stated simply.
In general, the code smooths structure with a gaussian filter
$e^{-r^2/2 \sigma_r^2}$
such that $\sigma_r = \alpha \Delta l$ where
$\Delta l$ is the cell size and $\alpha$ is the number which
we are interested in fixing through empirical experiments.
Smoothing separately in the three
directions,
$\sigma_r^2=\sigma_x^2+\sigma_y^2+\sigma_z^2$.
Alternatively phrased, an object of true gaussian radius
$r_{true}$ will have computed radius $r_{comp}$
\begin{equation}
r_{comp}^2 = r_{true}^2 + r_{res}^2
\end{equation}
\noindent where
\begin{equation}
r_{res} = 1.18 \sigma_r; \quad\quad \sigma_r\equiv\alpha\Delta l
\end{equation}
\noindent where the coefficient $1.18$ comes from the
fact that our fitted radius (i.e., core) is defined at
a location where the density drops to half
the central value.
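Both ingredients above can be verified numerically: the half-density radius of a gaussian is $\sqrt{2\ln 2}\,\sigma\approx 1.18\sigma$, and gaussian smoothing adds widths in quadrature. A one-dimensional sketch (the code's smoothing is applied separately in each direction):

```python
import math

# the coefficient 1.18: exp(-r^2 / 2 sigma^2) = 1/2  at  r = sqrt(2 ln 2) sigma
assert abs(math.sqrt(2.0 * math.log(2.0)) - 1.18) < 0.005

def gaussian(r, sigma):
    return math.exp(-r * r / (2.0 * sigma * sigma))

sigma_true, sigma_r = 1.0, 1.4        # profile and filter widths, in cell units
dx = 0.01
ts = [i * dx for i in range(-1500, 1501)]

def smoothed(x):
    # brute-force convolution of the profile with the smoothing filter
    return sum(gaussian(t, sigma_true) * gaussian(x - t, sigma_r) for t in ts) * dx

# the result is a gaussian of width sqrt(sigma_true^2 + sigma_r^2): it falls to
# exp(-1/2) of its peak exactly one such width from the centre
sigma_comp = math.hypot(sigma_true, sigma_r)
assert abs(smoothed(sigma_comp) / smoothed(0.0) - math.exp(-0.5)) < 1e-6
```

Restated for half-density radii, this is exactly equation (1).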
An important, new finding from the current study is that
the TVD shock-capturing code has different
resolutions in different regions.
We find that the code has a resolution of
$\sim 1.1$ cells (i.e., $\alpha=0.95$) near shock fronts,
while its resolution in non-shocking, high density regions is lower than
in the shock fronts, $\alpha=1.4$.
Since the scheme has been optimized for
capturing shocks (rather than, for example,
contact discontinuities),
we should not be surprised by this variation.
The paper is organized as follows:
\S 2 describes the computations to derive
the empirical resolution of the code,
\S 3 presents an application of the results
to previous published simulations using the TVD code
and \S 4 gives conclusions.
\medskip
\medskip
\section{Computations}
\medskip
An extremely difficult aspect of this problem is to design a test,
with a known,
analytically computable solution, that is also
sufficiently realistic to have a bearing on the problems of astrophysical
interest:
in this case the properties of X-ray clusters.
N. Kaiser and also S. White have pointed out to us
that, if the initially assumed spectrum of density perturbations were
a power law,
with $P_k=Ak^{-1}$ being the most appropriate choice,
then for $\Omega_0=1$, some strict
scaling relations must hold in a perfect simulation.
Specifically, if one
were to look at a given population, e.g.,
the most massive $10\%$ of the bound objects in the universe,
then (see Kaiser 1986) in an adiabatic calculation,
their characteristic sizes should
scale as $(1+z)^{-2}$ and their average temperatures
scale as $(1+z)^{-1}$.
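These exponents follow from self-similar scaling in an Einstein-de Sitter universe, where the linear growth factor is $\propto(1+z)^{-1}$ and $\sigma(M)\propto M^{-(n+3)/6}$; the derivation is not spelled out in the text, but a sketch of the exponent bookkeeping is:

```python
from fractions import Fraction

n = Fraction(-1)                  # P_k ∝ k^n with n = -1
# sigma(M*, z) = const, with sigma ∝ M^(-(n+3)/6) (1+z)^(-1), gives
# M* ∝ (1+z)^(-6/(n+3))
m_exp = Fraction(-6) / (n + 3)
# proper size: M ∝ rho R^3 with rho ∝ (1+z)^3, so R ∝ (1+z)^((m_exp - 3)/3)
r_exp = (m_exp - 3) / 3
# virial temperature: T ∝ M / R
t_exp = m_exp - r_exp

assert (m_exp, r_exp, t_exp) == (-3, -2, -1)   # sizes (1+z)^-2, temperatures (1+z)^-1
```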
In our recent simulations of various specific models for the
growth of structure we did not find that these scaling laws
were very well satisfied.
Examining Figures (11) and (12) of Kang et al.\ (1994a; KCOR hereafter)
we see that the expected scaling law for the temperature is satisfied
to sufficient accuracy (given the
observed statistical fluctuations due to the relatively
small computed sample of clusters),
but the cluster radius evolution is significantly less steep than
is expected.
There are a variety of potential explanations for these
facts.
Three of the most plausible ones are as follows.
\noindent 1. The actual spectrum of the studied CDM model is
not fit by $P_k\propto k^{-1}$ with sufficient precision to make
self-similarity an expected outcome.
\noindent 2. Resolution corrections
due to numerical inaccuracy are redshift dependent and account for the
departure from the expected scaling.
\noindent 3. The displayed sample was chosen to be of fixed luminosity:
$L_x(0.5<E<4.5keV) > 10^{43}$erg/s,
which is not a sample defined in a scale free way.
To see which, if any, of these explanations is true,
and to better enable us to make appropriate resolution corrections,
we computed two new simulations of a power law
spectrum $P_k=Ak^{-1}$ of
initial density perturbations
with ($512^3$ cells, $256^3$ particles)
and ($256^3$ cells, $128^3$ particles), respectively.
A simulation box of size of $80h^{-1}$Mpc comoving is
used in both simulations,
giving a cell size and nominal resolution of
($156~h^{-1}$kpc comoving, $312~h^{-1}$kpc comoving),
respectively.
Other parameters, in the familiar notation, were chosen to be
$\sigma_8=1$, $h=0.5$, $\Omega_{CDM}=0.95$, $\Omega_b=0.05$ to
correspond well to prior work.
To ensure that the ``truth" remains the same
in the two simulations, the initial realizations in the two simulations
are exactly the same, with the power spectrum
being cut off at the Nyquist frequency of the $256^3$ box
for both simulations.
(A smooth but rather sharp filter,
$\cos[\pi k/2 k_{nyq,256}]^{1/4}$,
is applied to the power law
spectrum to minimize real space oscillations but maintain
the power law slope as closely as we can.)
Figure 1 shows the redshift dependence of the temperature
(equally weighted) of the absolutely brightest
clusters defined in scale-free fashion:
the most massive clusters (within a radius of $1.0h^{-1}$Mpc comoving)
which contain 20\% of total mass in the universe at each epoch.
The temperature of each cluster is the X-ray emission-weighted average
over the indicated sphere.
Note that the selection method of clusters at different redshifts
used here is somewhat different
from that used in KCOR, which was not scale-free: only the bright clusters
with luminosity $L_x(0.5<E<4.5keV) > 10^{43}$~erg/s
were selected at each redshift.
Nevertheless
one sees a behavior of the temperature of the set of brightest clusters
qualitatively similar to what was shown in KCOR (Figure 13 in KCOR):
in both cases the temperature scales with redshift
approximately as expected:
$T_x={\rm const.} (1+z)^{-1}$.
We plot, in Figure 2,
$r_{100}$ versus redshift.
Here, $r_{100}$ is the average radius of
top 20\% (in mass) clusters in each model at each redshift
within which the average density of each cluster
is $100\bar\rho(z)$
[$\bar\rho(z)$ is the global mean at $z$].
As $r_{100}$ is much larger than
the cell size at all times shown,
resolution effects should be minimal.
We see that the agreement
between the simulation and the analytic prediction (Kaiser 1986)
is satisfactory.
Now let us turn to the core radii.
These are much smaller than $r_{100}$
and may be unresolved in our simulation.
Figure 3 shows the redshift dependence of the average cluster core radius
for the same set of clusters (top 20\% in mass).
Each cluster core is found by fitting the simulated cluster
emissivity profile to the following equation
\begin{equation}
j = {j_0\over [1+(r/r_{core})^2]^2}
\end{equation}
\noindent As in KCOR (see Figure 12 there)
we
see that the cluster core radius
does not scale with redshift as predicted analytically.
Comparison shows that
the departure from the expected scaling is as great
for the power law model as for the real CDM-like spectrum, indicating
that spectral curvature is not an important factor over
the redshift range ($0<z<1$) considered.
Furthermore, the scaling behaviors are similar
when we select the clusters in this powerlaw simulation
using the same criterion as in KCOR.
Thus, both explanations (1) and (3) are false
and it is likely that the problem is due to
the redshift dependence of numerical resolution.
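The core-radius extraction through equation (3) is a one-parameter fit; a minimal sketch on synthetic data (purely illustrative — the paper does not specify its fitting procedure, and $j_0$ is held fixed here):

```python
def beta_profile(r, j0, r_core):
    # equation (3): j = j0 / [1 + (r/r_core)^2]^2
    return j0 / (1.0 + (r / r_core) ** 2) ** 2

# synthesize an emissivity profile with a known core, then recover the core
# radius by a grid search over the summed squared residuals
radii = [0.05 * i for i in range(1, 41)]
data = [beta_profile(r, 1.0, 0.30) for r in radii]

def chi2(r_core):
    return sum((beta_profile(r, 1.0, r_core) - d) ** 2 for r, d in zip(radii, data))

r_fit = min((chi2(0.01 * j), 0.01 * j) for j in range(5, 101))[1]
assert abs(r_fit - 0.30) < 0.005
```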
Let us now examine the ansatz mentioned in the introduction.
We fit the computed core radius $r_{comp}$ (in comoving units)
with an equation of the form
\begin{equation}
r_{comp}^2 (z)= r_{true}^2(z)+1.39\alpha^2(z)(\Delta l)^2
\end{equation}
The first term on the right hand side represents
the true core size for the {\it actual} model computed
and the second term is $r_{res}^2$,
where $\Delta l$ is the comoving cell size of a simulation.
There are two unknowns, $\alpha$ and $r_{true}$, to be solved for at
each epoch.
Since the two simulations have identical
initial conditions, $r_{true}$ should be the same
in the two simulations for clusters selected in the same,
scale-free way.
This allows us to solve the above equation for $\alpha(z)$ at each
redshift and then $r_{true}(z)$, both of which
are displayed in Figure 4.
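Concretely, equation (4) written for the two runs gives two equations sharing the same $r_{true}$, so both unknowns follow in closed form. A sketch with a round-trip check (the cell sizes are those of the $512^3$ and $256^3$ runs; the input $\alpha$ and $r_{true}$ are arbitrary test values, not measured ones):

```python
import math

def solve_resolution(r1, dl1, r2, dl2):
    # invert r_comp^2 = r_true^2 + 1.39 alpha^2 dl^2 (equation 4) for two runs
    alpha2 = (r2 ** 2 - r1 ** 2) / (1.39 * (dl2 ** 2 - dl1 ** 2))
    rtrue2 = r1 ** 2 - 1.39 * alpha2 * dl1 ** 2
    return math.sqrt(alpha2), math.sqrt(rtrue2)

dl1, dl2 = 0.15625, 0.3125   # comoving cell sizes, h^-1 Mpc, for the 80 h^-1 Mpc box

# round trip: synthesize the two computed radii from a chosen alpha and r_true,
# then recover both from the pair
a_in, rt_in = 1.4, 0.25
r1 = math.sqrt(rt_in ** 2 + 1.39 * a_in ** 2 * dl1 ** 2)
r2 = math.sqrt(rt_in ** 2 + 1.39 * a_in ** 2 * dl2 ** 2)
a_out, rt_out = solve_resolution(r1, dl1, r2, dl2)
assert abs(a_out - a_in) < 1e-9 and abs(rt_out - rt_in) < 1e-9
```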
Two points are immediately evident.
First, the resolution of the simulation is
indeed {\it redshift dependent}, as seen in the dependence of $\alpha$
on $z$, ranging from $0.95\pm 0.05$ at redshift one to
$1.40\pm 0.05$ at redshift zero.
Second, the derived ``true" core size, $r_{true}$, at first sight,
strongly disagrees with the naive analytical expectation,
which states $r_{true}\propto (1+z)^{-\gamma}$ (where
$\gamma$ is a constant thought to be $\sim 1.0$).
Both of these points deserve a thorough understanding.
We address the second point first.
First we note that the assumed power law spectrum has no
characteristic scale and presumably would give $r_{true}\rightarrow 0$.
However, the actual simulation
does not possess a perfect $k^{-1}$ power law spectrum.
In fact, the actual input power spectrum to the simulation
is $k^{-1}\cos (\pi k/2 k_{nyq,256})^{1/4}$ for
$k\in [0.0785,10.053]~h~$Mpc$^{-1}$ comoving
(where the lower limit is due to the limited box size
and the upper limit is, $k_{nyq,256}$,
the Nyquist frequency for the $256^3$ simulation box)
and zero otherwise.
Figure 5 shows
the linear r.m.s. density
fluctuations as a function of top-hat comoving radius
for the actual smoothed, truncated power law spectrum (solid curve),
which is used in the simulations.
An ideal, untruncated $k^{-1}$ power law spectrum would have
the fluctuation spectrum indicated by the dashed line.
We see that the truncated power law spectrum
introduces a natural turnover scale around $0.2-0.3h^{-1}~$Mpc
in the density fluctuation spectrum.
Therefore, in the actual simulations under consideration,
a core can only develop {\it at a size $\sim 0.2-0.3h^{-1}~$}Mpc
{\it or greater.}
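The turnover can be reproduced directly (a sketch; the top-hat window, the integration limits, and the normalization $\sigma^2(R)=\frac{1}{2\pi^2}\int k^2P(k)W^2(kR)\,dk$ are standard choices assumed here, not taken from the paper):

```python
import math

K_MIN, K_NYQ = 0.0785, 10.053      # h Mpc^-1, the limits quoted in the text

def window(x):
    # Fourier transform of a spherical top-hat
    return 1.0 if x < 1e-8 else 3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3

def power(k, truncated):
    if truncated:
        if k > K_NYQ:
            return 0.0
        return k ** -1.0 * math.cos(math.pi * k / (2.0 * K_NYQ)) ** 0.25
    return k ** -1.0

def sigma(radius, truncated, kmax=300.0, n=30000):
    # trapezoid rule for sigma^2(R) = (1/(2 pi^2)) \int k^2 P(k) W^2(kR) dk
    dk = (kmax - K_MIN) / n
    total = 0.0
    for i in range(n + 1):
        k = K_MIN + i * dk
        w = 0.5 if i in (0, n) else 1.0
        total += w * k * k * power(k, truncated) * window(k * radius) ** 2 * dk
    return math.sqrt(total / (2.0 * math.pi ** 2))

# below the turnover the truncated spectrum keeps losing power relative to the
# ideal k^-1 law, so the ratio of the two sigma(R) curves falls monotonically
ratios = [sigma(R, True) / sigma(R, False) for R in (1.0, 0.3, 0.1, 0.03)]
assert all(later < earlier for earlier, later in zip(ratios, ratios[1:]))
```

The progressive drop of the ratio below the turnover scale is the flattening seen in Figure 5.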
This explains why the derived true core size shown in Figure 4
is constant (within the small noise) from $z=1$ to $z=0$,
simply because
the true core size for
an untruncated power law spectrum at $z\sim 0$ (with the
adopted normalization of $\sigma_8=1$)
either happens to be $\sim 0.25h^{-1}~$Mpc or
is still
smaller than $\sim 0.20-0.30h^{-1}~$Mpc.
Thus, at all higher redshifts,
the derived ``true" core size represents
what is introduced due to the
truncation of the power, resulting in a nearly constant
core size over the redshift range examined here.
As a consequence,
a more conservative approach is possible
to obtain a bound on our resolution (i.e., on $\alpha$).
If we assume that the true core
radius is zero at all redshifts, i.e., $r_{true}=0$,
then the measured core radius is entirely due to
finite numerical resolution
(either in the initial conditions or
in the subsequent hydrodynamic simulations).
We find a redshift dependent bound on the resolution:
at $z=1$, $\alpha< 1.2-1.7$,
and at $z=0$, $\alpha< 1.5-2.0$.
Let us now turn to the first point:
are the derived value of $\alpha$ and its
redshift dependence reasonable?
These results are not hard to explain.
A shock-capturing code such as the one examined here
is designed to resolve shocks.
In fact, the code is shown to be able to
resolve a shock in about 1-2 cells (top-hat) (Ryu et al.\ 1993),
which is consistent with what is found here in
resolving early clusters since at these early times
the regions which dominate X-ray emission
are just undergoing shocking.
On the other hand, the code is also known to
be able to resolve contact discontinuities or non-shocking,
large density gradients
at a lower resolution of 2-3 cells (top-hat) (Ryu et al.\ 1993).
This is again in agreement with the found resolution
for clusters at lower redshifts ($\sim 1.0-2.0~$cells [Gaussian]),
where shocks are far outside of cluster centers
and cumulative diffusion with time tends to smooth
the high density central cluster regions.
As a final and quite significant check we show, in Figure 6, the result
taken from Frenk et al.\ (1998),
where the solid dots are the density profile (spherically averaged)
of a cluster in a controlled volume of a CDM universe
computed by the same TVD code used here with
$N=512$ cells and nominal resolution (i.e., cell size)
of $62.5h^{-1}$kpc.
The open circles and
solid curve represent a fit to the average profile
of all the simulations from Frenk et al.\ (1998),
which is dominated by a few highest resolution simulations
in the inner region ($r<0.1h^{-1}$Mpc).
The dashed curve is the smoothed profile
of the solid curve by a gaussian with
$\sigma_r=1.65\Delta l$ (i.e., with $\alpha=1.65$).
We see the result computed in the core regions
by our code does in fact
correspond well to a gaussian smoothed
version of the true density profile if the
smoothing length is taken to be $1.65$ cells.
This particular simulated cluster is probably more advanced
than any cluster in the current simulations.
Therefore, a larger value of $\alpha$ is entirely to be expected,
but is consistent with the bound on $\alpha$ obtained above.
\medskip
\medskip
\section{Applications}
\medskip
Let us now apply the derived results on core radii
to our previous computations of X-ray clusters of galaxies
which had a box size of $85h^{-1}$Mpc and a cell size of $315h^{-1}$kpc.
From Figure 6b of Kang et al.\ (1994) for the $\Omega_0=1$
SCDM model we have obtained the luminosity-weighted average
core radius at each redshift, $r_{core,comp}$.
Then, we use equation (1) to compute $r_{core,true}$ at
each redshift, given $\alpha$ as shown in Figure 4.
Since this SCDM model has comparable amplitudes of the density
fluctuations on the relevant scales
and comparable abundance of clusters of galaxies
compared to the power law model tested here,
it seems appropriate
to directly use $\alpha$ as shown in Figure 4.
In order to make meaningful assessments we need to
have an estimate of error on the derived $\alpha$.
We obtain the error on $\alpha$ by finding
individual $\alpha$ for each pair of clusters found
in the two different resolution simulations.
For clusters selected in a self-similar way as indicated above,
we find that 4 out of 4 clusters in the high resolution
simulation have the counterparts in the low resolution simulation
at $z=0$,
5 out of 5 at $z=0.3$,
6 out of 6 at $z=0.5$,
6 out of 9 at $z=1.0$,
and 12 out of 15 at $z=2.0$.
We do not include clusters that are not paired in the two simulations
in computing the errors on $\alpha$.
We find the $1\sigma$ statistical error
of $\alpha$ to be $(0.15,0.097,0.049,0.048,0.061)$
at redshift $z=(0.0,0.3,0.5,1.0,2.0)$, respectively,
with the dispersion being $(0.25,0.19,0.11,0.11,0.20)$.
The identification of each pair of clusters
is unambiguous, with the 3-d r.m.s.\ displacement being
less than one simulation cell at all epochs examined.
If a cluster's average velocity dispersion within
some large radius (a few times the core radius)
is fixed and the emissivity profile is assumed to be that as in
equation (3),
then one can show that approximately
$L_x\propto r_{core}^{-1}$.
Further assuming that the luminosity function
has a slope of $-2$, i.e., $n(>L)\propto L^{-2}$,
as indicated by both simulations and observations (see
Figures 1-4 of Kang et al.\ 1994),
we are able to correct the number of bright X-ray clusters.
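The correction chain just described — shrink the core through equation (4), scale the luminosity as $L_x\propto r_{core}^{-1}$, then scale the counts through $n(>L)\propto L^{-2}$ — can be written compactly. A sketch (the input numbers are illustrative, not the measured KCOR values):

```python
def corrected(r_comp, alpha, dl, n_comp):
    # core radius corrected with equation (4)
    r_true = (r_comp ** 2 - 1.39 * alpha ** 2 * dl ** 2) ** 0.5
    lum_factor = r_comp / r_true        # L_true / L_comp, from L_x ∝ 1/r_core
    n_corr = n_comp * lum_factor ** 2   # n(>L) ∝ L^-2 at a fixed luminosity cut
    return r_true, lum_factor, n_corr

# illustrative z = 0 inputs: alpha = 1.4 and the 315 h^-1 kpc cell of the
# earlier simulations
r_true, f, n_corr = corrected(r_comp=0.55, alpha=1.4, dl=0.315, n_comp=1.0)
assert r_true < 0.55 and n_corr > 1.0
```

Note that core overestimates by factors of $1.7$--$3.1$ translate, through the square, into count deficits of roughly $3$--$10$.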
Figure 7 shows the computed core radii, the corrected core radii,
the computed number of X-ray clusters brighter
than $L>10^{43}$ erg/sec,
and corrected number of X-ray clusters brighter
than $L>10^{43}$ erg/sec,
in the SCDM model, from redshift zero to one.
The errorbars on $r_{core,corr}$ and $n_{corr}$
are obtained by propagating errors through
the following equations:
$\Delta r_{core,corr}/r_{core,corr}=\Delta \alpha/\alpha$,
$L_x\propto r_{core}^{-1}$,
and
$n(>L_x) \propto L_x^{-2}$
(see below for a discussion on errors).
The most significant result from this exercise
is that the apparent positive evolution of bright
X-ray clusters previously found in the SCDM model
seems due to the fact that the lower redshift clusters
are relatively more underresolved.
Correcting this redshift-dependent resolution effect
seems to show that the bright clusters
are consistent with no evolution (or weak evolution) up to redshift one,
in better agreement with observations and
semi-analytic studies (Henry et al.\ 1992).
Figure 8 shows the same results for the $\Lambda$CDM
model (Cen \& Ostriker 1994).
Since the $\Lambda$CDM model
is significantly different from the power law model computed
here, it is somewhat tricky as to how to apply $\alpha$ derived
here to the $\Lambda$CDM model.
We make the following observation.
Since a $\sigma_8\sim 0.5$ power law model
has approximately the same
cluster abundance as the $\Lambda$CDM model (e.g., Cen 1998),
it seems most appropriate
to apply the $\alpha$ at $z=1$ in the power law model
to clusters at $z=0$ in the $\Lambda$CDM model.
For clusters in the $\Lambda$CDM model at higher redshift
we simply use our best estimates of $\alpha$ by extrapolation.
Note that the corrected zero redshift luminosity
weighted X-ray core radii in the (SCDM,$\Lambda$CDM)
models are $(210\pm 45, 280\pm 60)h^{-1}$kpc,
respectively.
These errorbars on $r_{true}$
are estimated based on the errors on $r_{comp}$.
This is to be compared with observations
by Jones \& Forman (1992) of $50-200h^{-1}$kpc.
For both models we see that our previous computations
have overestimated the core radii by factors of $1.7-3.1$
and
underestimated the number of bright clusters
by a varying factor from about 3 to 10.
We were aware of the resolution issue when we wrote
Kang et al.\ (1994) and Cen \& Ostriker (1994)
and thus treated the computed numbers
of bright clusters
as lower bounds to the true numbers.
Thus, the present exercise
has the primary effect of strengthening
our previous conclusion that the COBE
normalized CDM model overpredicts the number of
bright X-ray clusters by a very large factor ($\sim 20$).
The $\Lambda$CDM model,
revised to include corrections described
here, would be approximately
consistent with observations
[note that we found a plotting error in our previous
published results: the vertical values of the simulated results
in Figure 1 of both Kang et al.\ (1994) and Cen \& Ostriker (1994)
are too large by a factor of $\ln (10)=2.3$;
our revised statement above with regard to the number of bright
clusters in the two models includes the
correction of this error].
Finally,
let us estimate the systematic errors associated with the corrected
luminosity of a cluster using the resolution correction method
described here.
Assuming that $L_x\propto r_{core}^{-1}$
and taking the form $L_{true}=(D\pm \Delta D)L_{comp}$,
we have $D=r_{comp}/r_{true}=\sqrt{1+1.18\alpha \Delta l/r_{true}}$.
Taking the $z=0$ solid square in Figure 7 as an example
(which has the largest extrapolation among our results)
with $\alpha=1.4$ and $r_{true}=0.65\Delta l$,
we have $D=1.9$;
i.e., the correction (due to systematic error)
on the X-ray luminosity of clusters
at $z=0$ is as large as the computed value.
It thus appears that
the systematic errors associated with extrapolated X-ray
cluster luminosities are still very large for the published
simulations.
But, we note that,
if we trust the derived values of
$r_{true}$,
then in new simulations at a dynamic range
of $768^3$ now achievable with the same box size
($L=85h^{-1}$Mpc) as the previously published ones,
the resolution correction would be small
with $D=1.38$, i.e., 38\% correction (systematic error),
and relatively reliable.
The associated errorbar on $D$ (statistical error)
would be $\Delta D/D=\Delta r_{comp}/r_{comp}
=\Delta \alpha/\alpha=0.15/1.4=10.7\%$ for
the clusters in the SCDM model at $z=0$.
For the clusters in the $\Lambda$CDM model at $z=0$,
$(D,\Delta D)$ would be $(1.20,0.05)$.
Hence a $95\%$ upper bound (including both
systematic and statistical errors) would have
an upward correction on computed value of only $1.97$
and $1.50$, respectively, for clusters at $z=0$ in the new
simulations of SCDM and $\Lambda$CDM models.
Corrections for clusters at higher redshifts
would be still smaller.
A larger simulation box would diminish
the statistical errorbars,
while higher resolution would further reduce
systematic errorbars.
\medskip
\medskip
\section{Conclusions}
\medskip
To summarize our results,
we find an effective gaussian
smoothing length of approximately 1.7 cells except
in regions where the density
gradients are caused by shocks
(for which the TVD code is optimized)
where the smoothing length is approximately $1.1$ cells.
Density profiles can
be deconvolved with the smoothing
length when the correction is small:
$R^2_{core,true} = R^2_{core,comp}-(C\Delta l)^2$,
where $C=1.1-1.7$.
But results are not to be trusted
if the computed core radii of
clusters are less than $1.1\Delta l$.
Applying the derived resolution
effect to our previous X-ray cluster simulations
we find that our previous computations
underestimate the number of bright clusters
by a varying factor from about 3 to 10.
We estimate that the errors on the corrected cluster luminosities
are still very large; thus the correction is not reliable.
In addition, the redshift evolutions of bright clusters
in the models are altered to varying extent.
Our previous conclusion that the COBE
normalized CDM model overpredicts the number of
bright X-ray clusters by a very large factor
is greatly strengthened.
Finally, we note that,
with new simulations at a dynamic range
of $768^3$ now achievable with the same box size
($L=85h^{-1}$Mpc) as the previously published ones,
the resolution correction would be small
[$(38\%,20\%)$, respectively]
and relatively reliable.
We are happy to acknowledge
support from grants NAGW-2448, NAG5-2759, AST91-08103
and ASC93-18185 and useful conversations
with L. Hernquist, N. Kaiser, U. Pen and S. White.
It is a pleasure to acknowledge the Pittsburgh Supercomputing Center
for allowing us to use the Cray C90 supercomputer.
We would like to thank ITP for its hospitality during our stay,
when this work was completed, and for financial support
through the NSF grant PHY94-07194.
Some of the computation was performed
at the Princeton SGI Origin 2000
which is supported by a grant from the NCSA Alliance Center.
\vfill\eject
\section{Introduction}
Antimatter production in relativistic heavy ion collisions has been
proposed as an excellent probe of the collision dynamics and possible
phase transition to a quark-gluon plasma \cite{qgprefs}. The production of
antiprotons at AGS energies (10-15 GeV/c) is near threshold in
nucleon-nucleon collisions. Therefore multiple collisions, resonance
excitations, and collective effects may play a major role in significantly
increasing the overall production rates \cite{mean}.
Strange antibaryons, due to
their larger mass and additional antistrange quark, are even further
suppressed in initial nucleon-nucleon collisions.
However, strangeness saturation and antimatter enhancement have long been
predictions of quark-gluon plasma states. Thus, understanding the
yields of non-strange and strange antibaryons is an important tool for
distinguishing between various sources of enhanced production.
Antibaryons have a large annihilation cross section
(particularly at low relative momentum), reaching levels of hundreds of
millibarns. Thus, in these baryon rich colliding systems, there can be
significant annihilation losses before they escape the collision region.
The final experimentally measured yields represent the initial
production minus the annihilation losses. The annihilation process in
free space is well understood and experimentally parametrized;
however, this annihilation may be modified in the dense particle
environment where the initial attraction of baryon and antibaryon may
be disturbed \cite{arc_pap}.
Antideuteron production at AGS energies is actually below the energy
threshold in single nucleon-nucleon collisions and thus antideuterons are
expected to only be created through the coalescence of separately
produced antiprotons and antineutrons which are close enough together
in coordinate space and phase space at freeze-out. Because of the large energy
required for their production, antideuteron yields are an excellent
measure of the system's thermal temperature (assuming antimatter is
equilibrated in these collisions). In addition, coalescence yields provide
information about the spatial distribution of antinucleons (as a form of
two-particle correlation) \cite{Nagle_prl}.
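The coalescence picture above is commonly parametrized through a coefficient $B_2$ that relates the invariant antideuteron yield to the product of the antiproton and antineutron yields. A minimal sketch of this ansatz follows; the coefficient and yields used below are illustrative placeholders, not values measured in this analysis:

```python
def coalescence_dbar_yield(b2_gev2, pbar_invariant_yield, nbar_invariant_yield=None):
    """Invariant antideuteron yield from the coalescence ansatz:

        E_d d^3N_d/dp_d^3 = B2 * (E_pbar d^3N/dp^3) * (E_nbar d^3N/dp^3),

    evaluated at p_nucleon = p_d / 2.  If the antineutron yield is not
    measured, it is taken equal to the antiproton yield.
    All numbers below are hypothetical placeholders.
    """
    if nbar_invariant_yield is None:
        nbar_invariant_yield = pbar_invariant_yield
    return b2_gev2 * pbar_invariant_yield * nbar_invariant_yield

# e.g. B2 ~ 1e-3 (illustrative) and an antiproton invariant yield of 2e-3
dbar = coalescence_dbar_yield(1e-3, 2e-3)
```

Because the yield is quadratic in the antinucleon yield, a small $B_2$ strongly suppresses antideuterons, which is why large data samples are needed for the search described in section four.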
In the next section we describe the E864 spectrometer and the
antiproton data sets. In section three we present the measured
antiproton invariant multiplicities and compare them with
measurements made by other experiments. These comparisons lead
us to consider the possibility of enhanced production of strange
antibaryons in these collisions. Antimatter correlations
in the form of events with two antiprotons and antideuteron
production are discussed in section four.
\section{Experiment 864}
\subsection{The E864 Spectrometer}
Experiment 864 was designed to search for novel forms of matter
(particularly strange quark matter, or ``strangelets'')
produced in heavy ion collisions
at the Brookhaven AGS facility \cite{e864_nimpap}.
In order to conduct this search,
E864 has a large geometric acceptance
and operates at a high data rate. A diagram of the spectrometer is
shown in Figure \ref{fig:e864}.
Secondary particles produced from the Au + Pb reaction which are
within the geometric acceptance traverse two dipole magnets (M1 and M2)
and then multiple downstream tracking stations. Non-interacting beam
particles and beam fragments near zero degrees pass above the downstream
spectrometer in a vacuum chamber, thus reducing interactions which
would otherwise produce background hits in the detectors.
The experiment does not measure at zero
degrees (zero transverse momentum), but particles with at least 15
milliradians of angle pass through an exit window in the vacuum
chamber.
Charged particles are tracked using three time-of-flight (TOF)
hodoscopes and two straw tube stations. The hodoscopes yield
three independent charge measurements of dE/dx over the 1 cm
thickness of the scintillator slats and provide three
space-time points with time resolutions on the order of 120-150 ps.
The straw stations provide more accurate position information
for track projection back into the magnetic field region. Particles
are identified by their time-of-flight from the target (yielding
the velocity) and momentum. The momentum is determined by combining
the charge measurement with the rigidity (R = p/Z) from the track
projection in the bend plane into the field region. The redundant
measurements allow for excellent background rejection of
ghost tracks and tracks originating from downstream interactions.
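The identification described above amounts to combining a velocity from time-of-flight with a momentum from charge times rigidity. A minimal sketch, assuming a straight-line flight path; the 29 m path length and track parameters are hypothetical, not E864 geometry constants:

```python
import math

C = 0.29979  # speed of light in m/ns

def track_mass(path_m, tof_ns, rigidity_gev, charge):
    """Reconstructed mass (GeV/c^2) from time-of-flight and rigidity.

    Illustrative only: the real reconstruction projects tracks through
    the mapped fields of M1/M2; here the momentum is simply p = |Z| * R
    and the velocity comes from a straight-line path length.
    """
    beta = path_m / (tof_ns * C)
    if beta >= 1.0:
        raise ValueError("unphysical velocity")
    p = abs(charge) * rigidity_gev           # momentum in GeV/c
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return p / (beta * gamma)                # m = p / (beta * gamma)

# A hypothetical Z = -1 track: a 3 GeV/c antiproton over a 29 m flight path
p_true, m_true = 3.0, 0.938
beta_true = p_true / math.sqrt(p_true**2 + m_true**2)
tof = 29.0 / (beta_true * C)
m_rec = track_mass(29.0, tof, 3.0, -1)
```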
A second particle identification measurement can be made using a hadronic calorimeter
located at the end of the spectrometer \cite{calo_nim}.
The calorimeter consists of
754 individual towers constructed from lead sheets with scintillating fibers
running almost parallel to the incident particle trajectory. The calorimeter
yields good timing information ($\sigma \approx$ 400~ps for hadrons)
and excellent hadronic energy resolution of 3.5\% + 34\%/$\sqrt{E}$ (with E in GeV).
For baryons, the calorimeter measures the particle's kinetic energy, which
when combined with time-of-flight information gives a measure of the
particle mass. For antibaryons, the energy measurement also includes the
annihilation energy of the antibaryon and its annihilation partner.
The experiment is able to perform high sensitivity searches by running at
high rate with a special ``late energy'' trigger (LET) \cite{let_nimpap}.
The time and energy signals from each of 616 fiducial calorimeter
towers are digitized in flash ADC's and TDC's and used as inputs to a lookup table, which
is programmed to select the particles of interest.
Because there are many slow neutrons and many fast high energy protons, a
simple time cut or an energy cut was determined to be insufficient
for the trigger. The late energy trigger allows for the rejection
of both of these abundant particles, while effectively
triggering on slow (mid-rapidity) particles which deposit
a large amount of energy.
An antiproton of the same momentum as
a proton or neutron will additionally deposit its annihilation energy.
If the Au+Pb interaction yields no towers
firing the trigger, then a fast clear is sent out to the digitizers and the
data is not recorded.
The trigger yields an enhancement
factor for antiprotons, antideuterons and strangelets of approximately 50
(under running conditions appropriate for each species).
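The lookup-table logic of the late-energy trigger can be sketched as follows. The time and energy thresholds and the binning are hypothetical stand-ins for the programmed LET curve, chosen only to illustrate the "late AND energetic" selection:

```python
def build_let_table(time_bins, energy_bins, min_time_ns=28.0, min_energy_gev=1.5):
    """Precompute a (time bin, energy bin) -> accept lookup table.

    The real trigger digitizes each tower's time and energy in flash
    TDCs/ADCs and indexes a programmable table; the thresholds here are
    hypothetical stand-ins for the late-energy condition.
    """
    return {(t, e): (t >= min_time_ns and e >= min_energy_gev)
            for t in time_bins for e in energy_bins}

def let_fired(tower_hits, table):
    """Event is kept if any fiducial tower's (time, energy) bin is accepted."""
    return any(table.get(hit, False) for hit in tower_hits)

table = build_let_table(range(20, 40), range(0, 10))  # ns and GeV bins

fast_proton = [(22, 6)]   # early and energetic -> rejected
slow_neutron = [(35, 1)]  # late but low energy -> rejected
pbar_like = [(33, 4)]     # late and energetic -> fires the trigger
```

Events in which no tower satisfies the table receive the fast clear, exactly as described in the text.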
In order to determine the collision geometry (impact parameter)
a charged particle multiplicity counter is used.
The E864 multiplicity counter \cite{beam_nim} is an annular piece of
fast BC420 scintillator placed around the beam pipe 13 cm downstream
of the target and tipped
at an angle of $8^\circ$ to the vertical. It is 1 cm thick and
subtends the angular range of $16.6^\circ$ to $45.0^\circ$. The annulus is
separated into four quadrants and each quadrant is viewed by a
photomultiplier tube. The total signal measured with this counter is
proportional to the charged particle multiplicity of the collision.
The integrated signal from the sum of the four quadrants is
used to trigger on the centrality of the events by selecting events
with a signal larger than a given threshold.
\subsection{Data Sets}
The data used in this analysis were collected in two separate data-taking periods.
During the fall of 1995, the late energy trigger was strobed on
the 10\% most central Au+Pb interactions with the spectrometer
magnets set for optimal acceptance for antideuterons (referred to as the ``-0.75T'' field setting).
The LET curve was set to yield an enhancement factor of
$\sim50$ for antideuterons and negative strangelets. The data set includes over
90 million recorded events, which effectively sampled approximately six
billion central interactions. From this sample, over 50,000 antiprotons were
identified. The mass distributions of antiprotons from a single rapidity and
transverse momentum bin are shown in Figure \ref{fig:mass_plots}.
In the fall 1996 run, the LET was strobed on minimum-bias (93\% of the geometric
cross section) Au+Pb
interactions and the LET curve and the spectrometer magnets were set
for optimal antiproton acceptance (referred to as the ``-0.45T'' field setting). The LET yielded an
enhancement factor for antiprotons $>50$ under these conditions. However, in
order to use the trigger effectively, the region of the
calorimeter dominated by neutrons from the interaction
had to be excluded.
This reduced the geometric acceptance by roughly a factor of two.
The data sample included 45 million recorded minimum bias interactions and
approximately 50,000 antiprotons. These data samples represent the largest
statistics for antiprotons produced in heavy ion collisions at the BNL-AGS.
In both data sets, the beam momentum was measured using the E864 spectrometer magnets and a
downstream beam counter located in the beam dump. The beam momentum of 11.5 GeV/c per nucleon
was consistent with the beam momentum reported at extraction from the accelerator once energy
losses due to material in the E864 beam line were properly accounted for. The Au beam was
incident on a 30\% Pb target for the 1995 data set, while a 10\% Pb target was used in 1996.
\section{Antiproton Invariant Multiplicities}
\subsection{E864 Measurements}
In E864 we explicitly measure the yield of antiprotons per
Au+Pb interaction as a function of centrality, and thus
we directly calculate the invariant multiplicities.
The invariant multiplicity for antiprotons is
determined as follows:
\begin{equation}
{{1} \over {2 \pi p_{T}}} {{d^{2}N} \over {dy\,dp_{T}}} = {{1}
\over {2\pi \overline{p_{T}} \Delta y \Delta p_{T} }} {{N_{counts}}
\over {N_{sampled}}} {{1} \over {\epsilon_{detect} \times
\epsilon_{accept} \times \epsilon_{trigger}}}
\end{equation}
The total number of antiprotons $N_{counts}$ is
determined in each separate bin in phase space and divided by the
total number of sampled Au+Pb interactions $N_{sampled}$.
The counted antiprotons include only those antiprotons which fired the LET.
Since the detector does not measure all the
antiprotons produced in a given region of phase space, the
invariant multiplicity must be corrected for the missed particles.
These missed antiprotons are the result of the experiment's finite
acceptance and various tracking and triggering efficiencies. The
acceptance $\epsilon_{accept}$ and detection efficiency
$\epsilon_{detect}$ are calculated using a GEANT \cite{GEANT} simulation of the
experiment in conjunction with real data. This simulation also included
losses due to antiproton annihilation in the target as part of the acceptance correction.
The production of antiprotons due to reinteraction of particles from the
primary interaction with target nuclei was also considered and found to
be negligible.
The LET efficiency $\epsilon_{trigger}$ is determined in
each kinematic bin. This efficiency is determined
in one of two ways: for antiprotons measured in the 1995 (``-0.75T'') 10\% central data
where the efficiency was somewhat low, a sample of antiprotons that did
not fire the trigger was used to determine the efficiency. In the
1996 (``-0.45T'') data, where the LET curve was set for higher efficiency
($\sim $ 75\%), the efficiency was determined from a Monte Carlo
of the calorimeter response.
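As a concrete illustration of the normalization above, the sketch below evaluates the invariant multiplicity for a single phase-space bin. All counts, bin sizes, and efficiencies are made-up numbers, not the measured E864 corrections:

```python
import math

def invariant_multiplicity(n_counts, n_sampled, mean_pt, dy, dpt,
                           eff_detect, eff_accept, eff_trigger):
    """Per-event invariant yield (1/2pi pT) d^2N/dy dpT in one bin.

    Momenta are in GeV/c.  The efficiency and acceptance factors stand
    in for the bin-by-bin GEANT and LET corrections; every number used
    below is made up for illustration.
    """
    phase_space = 2.0 * math.pi * mean_pt * dy * dpt
    efficiency = eff_detect * eff_accept * eff_trigger
    return (n_counts / n_sampled) / (phase_space * efficiency)

# Hypothetical bin: 120 antiprotons from 5e8 sampled central interactions
y = invariant_multiplicity(n_counts=120, n_sampled=5e8, mean_pt=0.16,
                           dy=0.2, dpt=0.05, eff_detect=0.8,
                           eff_accept=0.05, eff_trigger=0.75)
```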
The E864 data are measured over the range
$50~<~p_{T}~<~275~$MeV/c (where the limits are a function
of rapidity). The invariant multiplicities
measured in E864 are approximately flat over the $p_{T}$ range
measured, as shown for the 1996 data in Figure \ref{fig:pt_plots}.
Over such a small range in transverse
momentum, the invariant multiplicities are not expected to change
significantly. If, for example, the spectra follow a Boltzmann
distribution,
\begin{equation}
{{1} \over {2\pi p_{T}}} {{d^{2}N} \over {dydp_{T}}} \propto m_{T}
e^{-{{m_{T}} \over T_{B}}}
\end{equation}
(where $m_{T} = \sqrt{p_{T}^{2} + m^{2}}$), then with a temperature
parameter of 200~MeV the invariant multiplicity at $p_{T}=0$ is only
6\% higher than at $p_{T}~=~150$~MeV/c. For comparison with other
experiments, in each rapidity bin all
the invariant multiplicities as a function of $p_{T}$
are fit to a constant level. This level
is assigned as the invariant multiplicity at $p_{T} \approx 0$, and
an additional 6\% systematic error is assigned due to the $p_{T}=0$
extrapolation. It should be noted that strong radial flow could affect the
$p_{T}=0$ extrapolation as well. We feel that this effect
should be within the estimated systematic uncertainty since the E864 data
are quite flat as a function of $p_{T}$ down to 50 MeV/c at midrapidity.
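The size of this extrapolation can be checked numerically from the Boltzmann form above. The few lines below compute the ratio of the invariant yield at $p_{T}=0$ to that at $p_{T}=150$~MeV/c for a 200~MeV temperature parameter (a sketch, with the antiproton mass assumed for $m$):

```python
import math

def boltzmann_invariant(pt_gev, t_gev, m_gev=0.938):
    """Invariant yield shape m_T * exp(-m_T / T), arbitrary normalization."""
    mt = math.sqrt(pt_gev ** 2 + m_gev ** 2)
    return mt * math.exp(-mt / t_gev)

# Size of the pT = 0 extrapolation for a T = 200 MeV antiproton spectrum
ratio = boltzmann_invariant(0.0, 0.200) / boltzmann_invariant(0.150, 0.200)
```

The ratio comes out at the few-percent level, consistent with the small extrapolation uncertainty quoted in the text.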
The antiproton invariant multiplicities for 10\% most central Au + Pb
collisions from the 1995 data are given in Table \ref{tab:pbar_invar95} \cite{nagle_thesis}.
It should be noted that the statistical error in this data set
is dominated by the contribution from the trigger efficiency (due to
counting antiprotons which did not fire the trigger). The systematic
error in the 1995 data (exclusive of the 6\% due to the $p_{T}=0$ extrapolation)
is estimated to be 15\%, and is dominated by the
uncertainty in the correction for the LET trigger efficiency. Systematic
uncertainties also arise from our knowledge of the experimental acceptance
(including the effect of the collimator in the first spectrometer magnet),
track quality cuts, and the loss of tracks due to overlapping hits in the
hodoscopes.
For the 1996 data, the late-energy trigger was strobed on a minimum-bias
sample of events selected by the multiplicity counter. The resulting
multiplicity counter ADC distribution is shown in Figure \ref{fig:beam1}.
When selecting minimum bias events, it is important to consider the
effect of interactions of the beam that do not occur in the target.
Using special empty-target runs we have found that non-target interactions
contribute less than 10\% of the multiplicity distribution at low
multiplicity, while the late-energy trigger further reduces this
contamination to below 1\% (see Figure \ref{fig:beam1}). For the
1996 data, the antiproton invariant multiplicities are determined
for different regions of the minimum bias multiplicity: 100-70\% of the full
distribution, 70-30\%,
30-10\% and 10\%. These regions are shown in Figure \ref{fig:beam2}.
It is important to note that the LET rejection is a strong function
of the multiplicity, and this must be properly accounted for when
calculating the normalization in each centrality bin.
The antiproton invariant multiplicities for the four multiplicity
regions used in the 1996 data set are listed in Table \ref{tab:pbar_invar96}.
In addition, the full minimum bias invariant multiplicity from the 1996 data
set is also listed in Table \ref{tab:pbar_invar96}.
The systematic error in these data points are estimated to be 10\% (again,
exclusive of the 6\% previously described due to the $p_{T}=0$ extrapolation).
As in the 1995 data, the systematic uncertainty in the 1996 data is also dominated
by the uncertainty in the determination of the LET trigger efficiency.
However, the size of the correction is smaller for the 1996 data due to
the overall higher efficiency of the trigger setting.
Figure \ref{fig:fig_all_years} shows the 1995 (``-0.75T'')
and 1996 (``-0.45T'') antiproton invariant multiplicities at $p_{T}=0$
as a function of rapidity. The Gaussians shown are fits to the
combined 1995 and 1996 data, and are constrained to have a mean value at
midrapidity ($y=1.6$). There is excellent agreement between the
1995 and 1996 data in the rapidity range where the two data sets overlap.
The rapidity widths measured in the data for the four multiplicity
bins are $0.37\pm0.02$ (100-70\%), $0.41\pm0.02$ (70-30\%),
$0.43\pm0.02$ (30-10\%) and $0.46\pm0.02$ (10\%), indicating a broadening
of the rapidity spectrum at higher centrality.
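A fit of this type, with the Gaussian mean pinned at midrapidity, can be sketched by linear least squares on the logarithm of the yields. This is an unweighted toy version run on synthetic data; the published fits are error-weighted $\chi^2$ fits to the measured points:

```python
import math

def fit_gaussian_fixed_mean(y_vals, n_vals, mean=1.6):
    """Fit N(y) = A exp(-(y - mean)^2 / (2 sigma^2)) with the mean fixed
    at midrapidity, via linear least squares on ln N vs (y - mean)^2.
    (Unweighted toy version; no measurement errors are propagated.)
    """
    xs = [(y - mean) ** 2 for y in y_vals]
    ls = [math.log(n) for n in n_vals]
    n = len(xs)
    sx, sl = sum(xs), sum(ls)
    sxx = sum(x * x for x in xs)
    sxl = sum(x * l for x, l in zip(xs, ls))
    slope = (n * sxl - sx * sl) / (n * sxx - sx * sx)
    sigma = math.sqrt(-1.0 / (2.0 * slope))
    amplitude = math.exp((sl - slope * sx) / n)
    return amplitude, sigma

# Synthetic spectrum generated with sigma = 0.46 (the 10% central width)
ys = [1.2, 1.4, 1.6, 1.8, 2.0, 2.2]
ns = [2e-3 * math.exp(-(y - 1.6) ** 2 / (2 * 0.46 ** 2)) for y in ys]
amplitude, sigma = fit_gaussian_fixed_mean(ys, ns)
```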
Also shown in Figure \ref{fig:fig_all_years} (as open squares) are previously
reported antiproton results from data taken in 1994 (``-0.45T'') \cite{jlprl}.
It should be noted that the 1994 data is about
20\% higher at midrapidity than indicated by the corresponding
1995 and 1996 data. This is within the statistical and systematic
error previously quoted for the 1994 data. It is important
to note that the 1994 data was taken with an incomplete detector:
two layers of S3 were missing along with a ``plug'' designed
to reduce the occupancy in the downstream detectors due to
interactions of beam-rapidity fragments with the vacuum chamber.
The presence of the plug dramatically reduced the detector
occupancy in the 1995 and 1996 data (and hence the size of the
correction required for losses due to multiple hodoscope hits),
and the presence of the additional S3 layers provided additional
background rejection.
\subsection{Comparisons with Other Experiments}
Experiment 878 has measured antiproton yields as a function of
collision geometry in reactions of Au+Au ions at
10.8~A~GeV/c \cite{e878_prl,mike_prc}. There are two
differences between the reaction systems studied by E878 and E864: (1) the
target in E864 is Pb and (2) the beam momentum in E864 is higher, at
11.5~A~GeV/c. The target difference is quite small and is neglected
in this comparison. However, the production of antiprotons is near
threshold for nucleon-nucleon collisions at these energies and so the
beam momentum difference must be accounted for. We assume that the
ratio of antiproton yields in Au+Pb reactions at the two energies is
proportional to the ratio of antiproton yields in p+p reactions at the
two energies. Unfortunately, there is no usable data on antiproton
production in p+p reactions covering this particular energy range.
Therefore a parametrization of the production cross sections (derived
from p+p data at higher energies and p+Be data from E802 at
14.6~GeV/c \cite{arc_pap}) is employed.
Using this parameterization, one expects the ratio of the antiproton production
cross sections at the two energies to be 1.5.
The E878 invariant multiplicities are scaled up by this value.
By considering fits to higher energy p+p data that do not include the E802 p+Be data
at 14.6 GeV/c \cite{costales},
we estimate that
this energy scaling contributes an additional 15\%
systematic error on the overall normalization of the scaled E878 points.
Experiment 878 measures invariant multiplicities nominally at
$p_{T}=0$ (which is really at transverse momenta less than
$\sim$ 30-50~MeV/c). Using the procedure previously
outlined, we extrapolate the E864 measurements to $p_{T}=0$
and compare the E864 and E878 measurements in
Figure \ref{fig:fig_pbar_e878_comp}. While the two experiments agree well for low
multiplicity collisions, a substantial disagreement develops
for more central collisions. For 10\% central collisions,
the E864 measurements at midrapidity are a factor of $\sim3.2$ larger
than the corresponding E878 data.
It should be noted that the two experiments do not use precisely the
same definition of centrality: E864 measures the multiplicity
of charged particles produced in the collisions, while E878
measures the $\gamma$ multiplicity (mostly from $\pi^{0}$ decay). In order to properly compare
the two experiments, the multiplicity ranges for both must be converted
to a common (somewhat model-dependent) parameter. To do this, we have chosen
to show the integrated antiproton yield at $p_{T}=0$ versus the number of ``first''
nucleon-nucleon collisions in each centrality range.
In order to estimate the number of first collisions in each
multiplicity range for the E864 data, a GEANT \cite{GEANT} simulation was used
in conjunction with RQMD \cite{RQMD} Au+Pb events to generate a trigger probability
vs. impact parameter distribution for each multiplicity region (see Figure \ref{fig:bdists}).
These distributions were then folded with distributions of the number
of first collisions vs. impact parameter from a simple Glauber
model calculation. A similar procedure was applied to the E878
data, using the results of a simulation of the E878 multiplicity array \cite{e878_mult_nim}. The results
of this exercise (shown in Figure \ref{fig:first}) demonstrate that
the E864 and E878 centrality ranges are quite similar.
In Figure \ref{fig:fig_pbar_mbias} we also compare measurements
of the minimum-bias cross section for Au+Pb collisions at 11.5 A GeV/c
with E878 and E886 \cite{e886}.
It should be noted that experiment E886 only measured antiprotons from
minimum bias collisions and thus there is no comparison as a function of centrality.
As expected by the comparison of the E864 data with E878 for
the four different centrality regions, the minimum bias
invariant multiplicities measured in E864 are substantially larger than those measured
by E878 and E886.
\subsection{Strange Antibaryon Feed-Down}
There is a scenario which can reconcile the E864 and E878 results.
Some of the antiprotons measured by the various experiments may be the
daughter product of weak decays of strange antibaryons
($\overline{\Lambda}$, $\overline{\Sigma^{+}}$,
$\overline{\Sigma^{0}}$, $\overline{\Xi^{0}}$, $\overline{\Xi^{-}}$, $\overline{\Omega}$).
This process is referred to as ``feeding
down'' from the strange antibaryons into the antiprotons. Due to the
significantly different designs of the two experiments, they have
different acceptances from these decay product antiprotons. There are
a number of antihyperon ($\overline{Y}$) feed-down channels into the antiproton:
\begin{equation}
\overline{\Lambda} \rightarrow ~~\overline{p} + \pi
^{+}~~~(\rm{65\%~B.F.})
\end{equation}
\begin{equation}
\overline{\Sigma^{0}} \rightarrow ~~\overline{\Lambda} + \gamma
\rightarrow ~~\overline{p} + \pi ^{+} + \gamma~~~(\rm{100\% \times
65\%~B.F.})
\end{equation}
\begin{equation}
\overline{\Sigma^{+}} \rightarrow ~~\overline{p} + \pi
^{0}~~~(\rm{52\%~B.F.})
\end{equation}
\begin{equation}
\overline{\Xi^{0}} \rightarrow ~~\overline{\Lambda} + \pi
^{0} \rightarrow ~~\overline{p} + \pi^{+} + \pi^{0}~~~(\rm{99\%} \times \rm{65\%~B.F.})
\end{equation}
\begin{equation}
\overline{\Xi^{-}} \rightarrow ~~\overline{\Lambda} + \pi
^{+} \rightarrow ~~\overline{p} + \pi^{+} + \pi^{+}~~~(\rm{99\%} \times \rm{65\%~B.F.})
\end{equation}
and multiple decay modes for the $\overline{\Omega}$.
The decay of the $\overline{\Sigma^{0}}$ will produce additional
$\overline{\Lambda}$'s which will be
indistinguishable from those created in the primary collision.
The decay of the $\overline{\Lambda}$ and the $\overline{\Sigma^{+}}$ will
produce $\overline{p}$'s whose production vertices do not coincide with the
location of the primary interaction between
the two nuclei. Therefore, the degree to which $\overline{p}$'s
from these decays contribute to a measurement of
$\overline{p}$ production will vary among experiments.
Due to its large acceptance, the E864 spectrometer will detect
$\overline{p}$'s from $\overline{Y}$ decay.
E864 does not have sufficient vertical resolution
to distinguish $\overline{p}$'s from $\overline{Y}$ decay based on
the vertical projection of a particle to the target,
and the analysis cuts do not preferentially reject
antiprotons from $\overline{Y}$ decay.
Therefore, the $\overline{p}$'s detected in E864 are a combination
of primary $\overline{p}$'s and $\overline{p}$'s from $\overline{Y}$ decay, in a ratio that
reflects their production ratio.
The E878 collaboration have also evaluated the acceptance of their spectrometer for
feed-down from $\overline{Y}$ decay.
At midrapidity the acceptance for $\overline{p}$'s
from $\overline{\Lambda}$ and $\overline{\Sigma^{0}}$ decay is
14\% of the spectrometer acceptance for primordial $\overline{p}$'s,
and 10\% of the $\overline{p}$ acceptance for $\overline{\Sigma^{+}}$ decays \cite{mike_prc};
the acceptance grows at higher rapidity. In what follows, we assume an uncertainty
of $\pm$1\% in the E878 acceptances for feed-down.
Since both E878 and E864 measure a different combination
of primordial $\overline{p}$ production and feed-down from
$\overline{Y}$ decay, we can in principle separate the
two components if we make two explicit assumptions:
both E864 and E878 understand their systematic errors, and the
entire difference between the two experiments can be
attributed to antihyperon feed-down.
It is important to note that in energy scaling the E878 results
we have implicitly assumed that the $\overline{Y}$'s scale
with energy in the same way as the $\overline{p}$'s.
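The separation can be illustrated with a toy two-component model: if E864 accepts feed-down antiprotons like primordial ones while E878 accepts only a fraction of them (taken here as the quoted 14\% for $\overline{\Lambda}$ and $\overline{\Sigma^{0}}$ decays), the two measurements fix both components. The input yields below are hypothetical, and this sketch replaces the full statistical treatment described in the text:

```python
def separate_feed_down(m_e864, m_e878_scaled, feed_acc_e878=0.14, branch=0.65):
    """Decompose two pbar measurements into primordial and feed-down parts.

    Toy model: E864 accepts feed-down pbars like primordial ones,
        m_e864 = P + F,
    while E878 accepts only a fraction of them,
        m_e878 = P + feed_acc_e878 * F.
    F is the feed-down pbar yield; dividing by the Lambda -> pbar pi+
    branching fraction converts it to a hyperon yield.  The input yields
    below are illustrative, not the published measurements.
    """
    f = (m_e864 - m_e878_scaled) / (1.0 - feed_acc_e878)
    p = m_e864 - f
    ybar_over_pbar = (f / branch) / p
    return p, f, ybar_over_pbar

# Hypothetical midrapidity yields, after energy scaling of the E878 point
p_prim, f_feed, ratio = separate_feed_down(2.0e-2, 1.2e-2)
```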
Given an understanding of the errors involved, we can perform
a statistical analysis of the $\overline{Y}/\overline{p}$
ratio as a function of the E864 and E878 measurements
(see Figures \ref{fig:clevel1} and \ref{fig:clevel}).
This analysis results in the following limits on the
ratio of $\overline{Y}/\overline{p}$ :
\begin{equation}
\left(\:\frac{\overline{Y}}{\overline{p}}\:\right)
_{\stackrel{\scriptstyle y=1.6 }{p_{T}=0}} \approx
\left(\:\frac{\overline{\Lambda}+\overline{\Sigma^{0}}
+1.1\overline{\Sigma^{+}}}{\overline{p}}\:\right) >
\textrm{(98\% C.L.)}
\left\{\:
\begin{array}{l}
\textrm{0.02 (100-70\%)} \\
\textrm{0.10 (70-30\%)} \\
\textrm{1.0 (30-10\%)} \\
\textrm{2.3 (10\%)} \\
\textrm{0.2 (minimum bias)}
\end{array}
\:\right.
\end{equation}
while the most probable value of this ratio is $\sim3.5$ for 10\% central collisions.
The factor of 1.1 multiplying the $\overline{\Sigma^{+}}$
arises due to the different branching ratio and
acceptance for the $\overline{\Sigma^{+}}$ compared
to the $\overline{\Lambda}$.
The probability
distributions in Figure \ref{fig:clevel1} were generated
using the measured E864 and E878 invariant multiplicities
for each centrality bin. The statistical errors on these
measurements were treated as Gaussian, while systematic
errors on the measurements, energy scaling, $p_{T}=0$ extrapolation, and the
E878 acceptance for feed-down were treated as defining a limit around the measured
values.
E878 has not explicitly calculated their experimental acceptance for
the doubly strange $\overline{\Xi}$ and the $\overline{\Omega}$, and thus
they are not explicitly included in the above formula. These heavier
strange antibaryons are generally thought to be further suppressed and thus
a small contribution. However, in light of the unexpectedly large ``feed-down''
contributions from strange antibaryons, one should be careful not to
neglect their contribution to this ratio.
This comparison indicates a $\overline{Y}/\overline{p}$ ratio in
Au+Pb collisions that is significantly greater than one at midrapidity
and $p_{T}=0$. It should be noted that if the
$\overline{Y}$'s and the $\overline{p}$ are produced with different
distributions in $y$ and $p_{T}$, then the ratio of integrated yields
of these particles will differ from the ratio at central rapidity and $p_{T}=0$.
Preliminary results from Si+Au collisions based on
direct measurements of $\overline{p}$ and $\overline{\Lambda}$ production
by the E859 collaboration
also indicate a ratio of integrated yields greater than one \cite{Yeudong_Wu}.
For comparison, the $\overline{\Lambda}/\overline{p}$ ratio in pp collisions at
similar energies is $\sim 0.2$ \cite{pp_ref}.
An enhancement of antihyperons arises naturally in models that
include a QGP, and therefore enhanced antimatter and strangeness
production \cite{str_enh_refs,qgprefs}. Thermal models that use a temperature and baryon chemical
potential derived from measured particle spectra also indicate
that the $\overline{Y}/\overline{p}$ ratio could be larger than
one \cite{hgas_refs}. However, extremely large values of the
$\overline{Y}/\overline{p}$ ratio are difficult to achieve in a thermal
model unless the freeze-out temperature and/or the $K^{+}/K^{-}$ ratio are
pushed beyond experimentally observed values. Transport models
such as RQMD \cite{RQMD} predict the $\overline{Y}/\overline{p}$
ratio to be less than one. Including conversion reactions in a
cascade model, such as
\begin{equation}
\overline{p} + K^{+} \rightarrow \pi + \overline{\Lambda}
\end{equation}
and a lower annihilation cross section for the $\overline{\Lambda}$
relative to the $\overline{p}$ enhances the $\overline{Y}/\overline{p}$
ratio substantially \cite{gerd}. However, such a model does
not reproduce the trend with centrality seen in the E864 data.
\section{Antimatter Correlations}
\subsection{Double Antiproton Events}
In the large sample of events from the 1995 (``-0.75T'') run with a
single antiproton within the experimental acceptance, there are some
events with two identified antiprotons. These two-antiproton events give
insight into the possible correlated production of antimatter. Since
the number of individual nucleon-nucleon collisions in each Au + Pb
collision is large, if the sample of central events is similar in
nature, the production of one antiproton should have very little
relation to the production of a second antiproton.
In the 1995 data set there are approximately 43,000 antiprotons with rapidity less than
2.2, which were considered for this study. After corrections for background
contributions, we find there are 3.8 events with two antiprotons.
If we assume that the production of antiprotons is uncorrelated, we
can calculate the number of two antiproton events expected.
One can think of the nucleus-nucleus
collision as many ($n$) nucleon-nucleon collisions each with a
probability ($p$) of producing an antiproton. Since the probability ($p$) is
small and the number of collisions ($n$) is large, we
calculate the probability of producing two antiprotons in the same
event using Poisson statistics.
The probability of producing one antiproton is:
\begin{equation}
\mathrm{Prob}(1) = \mathrm{Rate}_{\mathrm{singles}} = n \times p
\end{equation}
The probability of producing two antiprotons is:
\begin{equation}
\mathrm{Prob}(2) = {{\mathrm{Rate}_{\mathrm{singles}}^{2}} \over {2}}
\end{equation}
Since we have measured the rate of single antiprotons into our
detector, $\mathrm{Rate}_{\mathrm{singles}}$, we calculate the expected number of two
antiproton events at 1.8. The 90\% confidence level upper limit on
this number is five, which includes the experimentally measured value.
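The expectation can be sketched directly from the Poisson form above. The event count and per-event singles rate below are illustrative placeholders, not the measured E864 values:

```python
import math

def expected_doubles(n_events, singles_per_event):
    """Expected number of events with two detected antiprotons, assuming
    uncorrelated (Poisson) production:

        Prob(2) per event = lambda^2 e^{-lambda} / 2  ~=  lambda^2 / 2

    for small lambda.  The rates below are illustrative only.
    """
    lam = singles_per_event
    return n_events * (lam ** 2) * math.exp(-lam) / 2.0

# e.g. a per-event singles rate of 5e-4 over 2e7 triggered events
doubles = expected_doubles(2e7, 5e-4)
```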
Given the agreement with the assumption of uncorrelated production,
there are limits we can set on the possible correlated production of
antimatter. We postulate that there are two distinct classes of
events within the 10\% central Au+Pb sample: one class of purely
hadronic reactions and one class in which a quark-gluon
plasma (QGP) is formed.
In Figure \ref{fig:fig_pbar_doubles} the predicted number of
two antiproton events as a function of the fraction of QGP events
$f_{QGP}$ and the antimatter enhancement factor $\epsilon$ is shown.
The area in the
dark box is where the predicted number of two antiproton events is
greater than five and thus ruled out by the data at the 90\%
confidence level. If the QGP enhancement factor is small, the two
antiproton yield is not changed significantly. Also, if most of the
events are QGP, then regardless of the enhancement, there is no
predicted increase in the two antiproton yield. However, if there is
a large enhancement ($\epsilon >10$) and the fraction of QGP events is
between 5\% to 25\%, the yield of two antiprotons is significantly
increased. These specific scenarios are ruled out by this measurement
at the 90\% confidence level.
\subsection{Antideuteron Search}
We have performed a search for antideuterons using the 1995 data set of
central Au + Pb interactions taken at the
``-0.75 T'' magnetic field setting optimized for the
acceptance of antideuterons. After processing the data, any
tracked particle of charge negative one, rapidity $y < 2.4$,
passing all track quality $\chi^{2}$ selections, and having a
reconstructed mass in the range $1.3~<~m~<~5$~$\rm{GeV/c^{2}}$
is considered a possible antinucleus candidate. The mass
distribution of these candidates is shown in Figure \ref{fig:fig_dbar_mass}. The
distribution is well fit by an exponential and has no significant
signal at the antideuteron mass $m = 1.874$ GeV/c$^{2}$.
The experiment is able to reduce this background through an
energy measurement using our full coverage hadronic calorimeter.
The calorimeter measures the deposited kinetic energy of
hadrons in addition to the annihilation energy for antibaryons.
The background processes expected to create high mass candidates
in the tracking reconstruction are the result of neutrons
which charge exchange in the vacuum exit window or air and
produce a forward going proton traversing the downstream spectrometer.
The protons have reasonable rapidity values, but reconstruct to
erroneously large rigidities resulting from the incorrect assumption
that the particle originated at the target. If these candidates
are actually protons, they should leave significantly less energy
in the calorimeter than real antideuterons or heavier antinuclei
would.
Since these candidates are all assumed to be antimatter, the
reconstructed calorimeter mass must account for the annihilation
contribution.
In studies of antiproton showers from test beam data and from
the 1995 data, it was observed that only $\approx$ 84\% of the
annihilation energy was recorded in the calorimeter. Thus, the calorimeter mass
formula is modified to reflect this loss:
\begin{equation}
\label{eqn:eqn_anti_mass}
\rm{mass} = {{E} \over {\gamma + 0.68}}.
\end{equation}
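The logic of Equation \ref{eqn:eqn_anti_mass} can be illustrated with a short numerical sketch. It assumes, as stated above, that the recorded energy is the kinetic energy plus 84\% of an annihilation energy of twice the rest mass, so that $E = (\gamma-1)m + 0.84\cdot 2m = m(\gamma + 0.68)$; the Lorentz factor used below is an arbitrary illustrative choice, not a value from the analysis.

```python
import math  # not strictly needed here, kept for consistency with later sketches

def calorimeter_mass(E, gamma):
    """Reconstructed calorimeter mass in GeV/c^2 (Eq. eqn_anti_mass)."""
    return E / (gamma + 0.68)

gamma = 2.0   # illustrative Lorentz factor (assumption)

# A true antideuteron (m = 1.874 GeV/c^2) deposits its kinetic energy
# plus 84% of an annihilation energy of twice its rest mass.
m_dbar = 1.874
E_dbar = (gamma - 1.0) * m_dbar + 0.84 * 2.0 * m_dbar
print(calorimeter_mass(E_dbar, gamma))   # recovers 1.874 by construction

# A charge-exchange proton (m = 0.938 GeV/c^2) deposits only its kinetic
# energy, so it reconstructs well below its true mass -- the basis of
# the calorimeter-mass cut described below.
m_p = 0.938
E_p = (gamma - 1.0) * m_p
print(calorimeter_mass(E_p, gamma))      # well below 0.938
```

This makes explicit why protons from charge exchange pile up at low calorimeter mass while genuine antinuclei reconstruct near their true mass.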
The tracking mass resolution is ${{\Delta m} \over {m}}\simeq5\%$, which
yields a $\sigma_{m}= 0.094$~$\rm{GeV/c^{2}}$ for antideuterons. The
distribution of calorimeter masses for candidates whose tracking mass
is within $\pm 2\sigma_{m}$ of the antideuteron mass ($1.687 < m <
2.061~\rm{GeV/c^{2}}$) is shown in Figure \ref{fig:fig_dbar_camass}.
One can see the peak mean value is less than $0.938$~$\rm{GeV/c^{2}}$.
Protons have a lower calorimeter mass ($<0.938$~$\rm{GeV/c^{2}}$)
when calculated using Equation \ref{eqn:eqn_anti_mass} since they do
not deposit any energy beyond their kinetic energy (there is no
annihilation energy contribution). The background candidates appear to
be protons as expected from charge exchange background. Most protons
should {\em not} reconstruct such a large antimatter mass and
fire the late-energy trigger. However, the calorimeter energy
response has a non-Gaussian high side tail. These candidates are
protons which occupy the high side tail part of the energy response
distribution. As can be seen in the plot, the calorimeter is a
powerful tool for rejecting this proton background. A cut is then
placed on calorimeter mass being greater than 1.600~$\rm{GeV/c^{2}}$.
If one assumes that all of the observed candidates are from charge
exchange background (really protons striking the calorimeter), then
the background shape can be fit. The tracking mass distribution with
no cut on the calorimeter mass is fit to a simple exponential function
as shown in Figure \ref{fig:fig_dbar_mass}. If the candidates are all
protons striking the calorimeter, the calorimeter mass distribution
should be the same regardless of the tracking mass. Thus, one can use
the exponential function fit parameters from the tracking mass
distribution with no calorimeter cuts to describe the tracking mass
distribution with a calorimeter mass cut.
The tracking mass distribution is plotted in Figure \ref{fig:fig_dbar_mass} with a cut on the
calorimeter mass greater than 1.600~$\rm{GeV/c^{2}}$. There are ten candidates
within the $\pm 2 \sigma_{m}$ range of the antideuteron. The
background fit distribution shown in Figure \ref{fig:fig_dbar_mass} is
renormalized to the total number of counts and plotted. The
exponential fit seems a reasonable description of the distribution.
The total number of counts from the fit in the region of the
antideuteron (within $\pm 2 \sigma_{m}$) is 9.0. Thus, there is no
significant signal above background for the antideuteron.
One can then ask how many real antideuterons would be needed to
produce a statistically significant peak above the background
distribution.
There are nine predicted background events, and thus the Poisson
statistics 90\% confidence level upper limit is 14.2. If more than
14.2 candidates were observed within the antideuteron mass range,
there is less than a 10\% chance that it is due to a statistical
fluctuation in the background events. Thus, we set the 90\%
confidence level upper limit on antideuteron production at
$N_{Poisson}-N_{Background} = 5.2$.
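The quoted numbers follow from the standard one-sided Poisson construction: for $n=9$ counts, the 90\% confidence level upper limit $\mu_{up}$ on the mean solves $\sum_{k=0}^{9} e^{-\mu}\mu^{k}/k! = 0.10$. A short sketch (the bisection bracket and iteration count are our own choices):

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

def poisson_upper_limit(n, cl=0.90):
    """Solve poisson_cdf(n, mu) = 1 - cl for mu by bisection."""
    lo, hi = 0.0, 100.0          # search bracket (assumption)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n, mid) > 1.0 - cl:
            lo = mid             # cdf too large -> need larger mean
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu_up = poisson_upper_limit(9)
print(round(mu_up, 1))           # -> 14.2, as quoted
print(round(mu_up - 9.0, 1))     # -> 5.2 after background subtraction
```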
In order to translate this Poisson statistics limit into a total upper
limit on the production of antideuterons, the various acceptances and
efficiencies must be known.
It is also possible using a specific
production model to set the 90\% confidence level upper limit on the
invariant multiplicity in a given region of momentum space. In the
discussion that follows, we will assume a model in which the production
in $p_{T}$ and rapidity ($y$) can be factored as:
\begin{equation}
\frac{1}{2\pi p_{T}} \frac{dN}{dydp_{T}} = A_{0}e^{-2p_{T}/<p_{T}>}e^{-(y-y_{cm})^{2}/2{\sigma_{y}}^{2}}
\end{equation}
We have assumed a rapidity width $\sigma_{y} = 0.5$ and a mean transverse momentum
$<p_{T}> = 1.00$ GeV/c.
Using this production model, we set a 90\% confidence level upper limit
on the production of antideuterons at $2.78 \times 10^{-7}$ per 10\% central
Au + Pb interaction.
One can relate the limit over all phase space to the limit on the invariant
multiplicity ($A_{0}$) at midrapidity and $p_{T}=0$.
\begin{equation}
A_{0} = {{N_{\rm{Total \space Limit}}} \over {(2\pi)^{3/2} \sigma_{y}
{{<p_{T}>^{2}} \over {4}}}}
\end{equation}
The upper limit on the invariant multiplicity at midrapidity $y=1.6$
and $p_{t}=0$ is $1.4 \times 10^{-7} \rm{GeV^{-2}c^{2}}$.
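As a numerical cross-check (step sizes and integration ranges are our own choices), one can integrate the assumed production model over all phase space, recover the closed-form normalization $(2\pi)^{3/2}\sigma_{y}<p_{T}>^{2}/4$, and reproduce the quoted value of $A_{0}$:

```python
import math

sigma_y = 0.5       # assumed rapidity width
mean_pt = 1.00      # assumed <pT> in GeV/c
N_limit = 2.78e-7   # 90% CL yield limit per 10% central interaction

# Midpoint-rule integrals of the model over pT in [0, 10] GeV/c and
# y - y_cm in [-5, 5]; both ranges are wide enough that tails are negligible.
dpt = dy = 1e-3
I_pt = 0.0
for i in range(10000):
    pt = (i + 0.5) * dpt
    I_pt += 2.0 * math.pi * pt * math.exp(-2.0 * pt / mean_pt) * dpt
I_y = 0.0
for i in range(10000):
    y = (i + 0.5) * dy - 5.0
    I_y += math.exp(-y * y / (2.0 * sigma_y**2)) * dy

closed_form = (2.0 * math.pi)**1.5 * sigma_y * mean_pt**2 / 4.0
A0 = N_limit / (I_pt * I_y)
print(I_pt * I_y, closed_form)   # numerical and closed-form normalizations agree
print(A0)                        # ~1.4e-7 GeV^-2 c^2, as quoted
```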
We have tested the model dependency of these upper limits and
find that with extreme ranges of production models, one can vary
the upper limits by approximately $\pm$50\%.
Using our antiproton measurements and these upper limits, we calculate
the 90\% confidence level upper limits on the
coalescence scale factor $\overline{B_{2}}$ for antideuterons. This
scale factor may be a function of where in momentum space the
measurement is made, thus we give the limit at midrapidity ($y=1.6$)
and $p_{t}=0$. Our measured invariant multiplicity for antiprotons is
$1.16 \times 10^{-2}~\rm{GeV^{-2}c^{2}}$ (from the combined 10\% central
1995 and 1996 data). The upper limit on the
invariant multiplicity for antideuterons is $1.41 \times
10^{-7}~\rm{GeV^{-2}c^{2}}$. The upper limit on the
scale factor is:
\begin{equation}
\overline{B_{2}} = { { \left[{{1} \over {2\pi p_{t}}} {{d^{2}N} \over
{dydp_{t}}}(\overline{d}) \right]} \over { \left[{{1} \over {2\pi
p_{t}}} {{d^{2}N} \over {dydp_{t}}}(\overline{p}) \right]^{2}} } \leq
1.0 \times 10^{-3}~\rm{GeV^{2}c^{-2}}
\end{equation}
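The numerical value follows directly from the two invariant multiplicities quoted above; a minimal arithmetic check:

```python
# Values quoted in the text (GeV^-2 c^2)
inv_mult_pbar = 1.16e-2   # antiproton invariant multiplicity, 10% central
inv_mult_dbar = 1.41e-7   # antideuteron 90% CL upper limit

# Coalescence scale factor limit, B2_bar = (d-bar) / (p-bar)^2
B2_bar = inv_mult_dbar / inv_mult_pbar**2
print(B2_bar)   # ~1.0e-3 GeV^2 c^-2, as quoted
```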
This upper limit is shown as an arrow in
Figure \ref{fig:fig_dbar_coal}, along with a comparison to coalescence scale
factors measured at Bevalac and SPS energies
\cite{fig16_ref1,fig16_ref2,fig16_ref3}.
This scale factor is significantly below the global value of $1.2
\times 10^{-2}~\rm{GeV^{2}c^{-2}}$ predicted by the ``simple''
coalescence model. However, since this prescription has failed to
describe systems where the collision volume is expected to be large
compared with the deuteron/antideuteron size \cite{e814_prc},
it is not surprising that it is in disagreement with the value obtained here.
If the source distribution of antinucleons has a similar spatial
extent as the nucleon source, then the scale factor for deuterons
$B_{2}$ is expected to be the same as for antideuterons
$\overline{B_{2}}$. Recently, E864 has presented
measurements of protons and deuterons around midrapidity
and low transverse momentum. The scale factor from the
analysis of E864 light ion data \cite{Nigel_thesis} is also shown in
Figure \ref{fig:fig_dbar_coal}.
\begin{equation}
B_{2} = (1.1 \pm 0.4) \times 10^{-3}~\rm{GeV^{2}c^{-2}}
\end{equation}
The uncertainties are dominated by systematic errors in the deuteron
and proton invariant multiplicities. This measured scale factor is at
the same level as the upper limit for the antideuteron scale factor.
We cannot determine whether $\overline{B_{2}}$ is significantly lower
than $B_{2}$. Thus, it is impossible to comment on whether the rate
of antideuteron production is smaller due to preferential surface
emission of antimatter.
If we consider the most probable value of the ratio
$\overline{Y}/\overline{p} = 3.5$ for 10\%
central collisions, the primordial antiproton multiplicity at
midrapidity and $p_{t}=0$ should be a factor of $\sim3.3$ lower than
measured in E864. In this picture, the 90\% confidence level upper
limit on the coalescence scale factor would be
\begin{equation}
\overline{B_{2}} \leq 1.1 \times 10^{-2}~\rm{GeV^{2}c^{-2}}
\end{equation}
This value is approximately at the level measured in p + A collisions,
in accord with the ``simple'' coalescence model, where the collision
volume is expected to be quite small. Thus, if the
$\overline{Y}$ production is correctly calculated, the limit set
on antideuteron production is not very significant in the context of
coalescence models.
There have been two previous measurements of antideuteron production in
heavy ion collisions. The first was from the E858 experiment which observed
two antideuterons in minimum bias Si + Au collisions at
14.6 A GeV/c~\cite{e858_ref}. They calculated a coalescence factor of
approximately $\overline{B_{2}} \leq 1.0 \times 10^{-3}~\rm{GeV^{2}c^{-2}}$.
While this value is consistent with our observation, it is difficult to
make any direct comparison since the E858 value is for minimum bias
collisions involving a much smaller projectile. The second measurement
is from experiment NA52 in Pb + Pb central collisions
at 160 A GeV/c~\cite{na52_ref}. They observe a coalescence factor of
approximately $\overline{B_{2}} \approx (5.0 \pm 3.0) \times 10^{-4}~\rm{GeV^{2}c^{-2}}$.
They also find the factor for deuterons $B_{2}$ is the same within
statistical and systematic errors. While our upper limit on antideuterons
is consistent with their value, our deuteron coalescence factor is
somewhat higher. This observation is not surprising due to larger source
dimensions in the higher energy collisions studied by NA52.
\section{Conclusions}
We have presented results from Experiment 864 for antiproton production and
antideuteron limits in Au+Pb collisions at 11.5 GeV/c per nucleon. We
have measured invariant multiplicities for antiprotons above midrapidity
and at low transverse momentum as a function of collision geometry.
These measurements are within systematic errors of
our previously reported results \cite{jlprl}, and, when compared with
the results from Experiment 878, may indicate a significant
contribution to the measured antiproton yield from the decay of
strange antibaryons.
We have also studied correlated production of antimatter using
events with more than one antiproton and a search for antideuterons.
For antideuterons we see no statistically significant signal.
We set upper limits on the production at approximately
$3 \times 10^{-7}$ per 10\% highest multiplicity Au+Pb interaction.
\section{Introduction}
The Density Matrix Renormalization Group (DMRG) is a powerful
numerical Real Space RG method introduced in 1992 by S.R. White
which can be applied to a large variety of Quantum Lattice
Hamiltonians defined in 1d, quasi 1d (ladders) and large
clusters \cite{W1}.
The DMRG was originally proposed in the domain of Condensed Matter
Physics, where it has already become a standard method
specially for 1d systems,
but its range of applicability has been extended also
to Statistical Mechanics, polymers,
Chemical Physics, etc... (for a review
see \cite{Dresden}) and one may expect that it will be applied
in the near future to other domains of Physics such as Quantum Field
Theory, Nuclear Physics, etc.
There are by now excellent reviews on the DMRG \cite{W2}
and other
related real space RG methods \cite{UAM}, \cite{Esco}
so it is not
the purpose of the present contribution to duplicate material
already present in the literature.
Instead we shall try to give an overview of the DMRG method
in order to explore its relationships with
Group Theory and its quantum deformation, Conformal
Field Theory (CFT) and the Matrix Product Method.
The relation between the DMRG and Quantum Groups was suggested
in \cite{q-group} by studying the RG method of
$q$-group invariant Hamiltonians.
We also review the variational approach
of the DMRG in terms of the so called
matrix product (MP) ansatzs introduced
by \"{O}stlund and Rommer \cite{OR} and suggest
a 2D versions of it which may lead to a 2D
formulation of the DMRG.
\section*{Real space RG methods: generalities}
Let us suppose we have a discrete system with $N$ sites and that
at each site there are two possible states, say
spin up and down for a spin system, or occupied and unoccupied
for a spinless fermion. The dimension of the Hilbert space
will grow as $2^N$, which makes the study of the large $N$
limit very hard unless some special trick is used.
The RG method provides a general systematic approach to handle
problems with a large number of degrees of freedom on the
basic assumption that only a small
number of states is needed in order to describe
the long distance physics. How to choose the {\bf{most
representative}} degrees of freedom out of a myriad
of states is the central issue of the RG method.
This can be done in several manners.
We shall introduce below some of them and establish their comparison.
The DMRG method was originally formulated as
a real space RG method, although it also admits
a momentum space formulation \cite{Xiang}.
We next introduce
the basic concepts common to any real space RG method
and later on we confine ourselves to the DMRG.
The real space RG consists essentially of three steps: i) blocking,
ii) truncation and iii) iteration. First one divides
the system into blocks, then one finds an effective
description of these blocks in terms of intra-block and
inter-block interactions and finally one iterates the
algorithm. One can distinguish among three types of blocking:
Kadanoff blocking, Wilsonian blocking and DMRG blocking.
\subsection*{Kadanoff Blocking}
We shall first consider the case of a linear chain
with $N$ sites. In the first step of the Kadanoff blocking
one divides the system into blocks of 2 sites, thus for a
$N=16$ chain one gets,
\begin{equation}
(\bullet \; \bullet) \; ( \bullet \; \bullet) \;
(\bullet \; \bullet) \; ( \bullet \; \bullet)\;
(\bullet \; \bullet) \; ( \bullet \; \bullet) \;
(\bullet \; \bullet) \; ( \bullet \; \bullet)
\label{1}
\end{equation}
If every site describes two states then
the block $(\bullet \; \bullet)$
describes 4 states. Eq.(\ref{1}) is nothing but
a change from a 1-site basis
to a 2-site basis and hence $\bullet \; \bullet$ is
entirely equivalent to $(\bullet \; \bullet)$.
The goal of eq.(\ref{1}) is to pave the way for
the first RG truncation. Indeed,
out of the 4 states we may already want
to keep a smaller number, say 3, 2 or even 1.
We can represent symbolically this operation as follows,
\begin{equation}
(\bullet \; \bullet) \rightarrow (\bullet \; \bullet)' \;\,\,
(\rm{truncation})
\label{2}
\end{equation}
From a formal point of view the blocking operation
is captured by putting parenthesis $( \dots )$ around the
sites subjected to the blocking while the truncation operation
is represented by $'$ acting on the corresponding block.
Combining eqs.~(\ref{1}) and (\ref{2}), the chain with $N$ sites
becomes, after
the first blocking and truncation,
\begin{equation}
(\bullet \; \bullet)' \; ( \bullet \; \bullet)' \;
(\bullet \; \bullet)' \; ( \bullet \; \bullet)'\;
(\bullet \; \bullet)' \; ( \bullet \; \bullet)' \;
(\bullet \; \bullet)' \; ( \bullet \; \bullet)'
\label{3}
\end{equation}
If only one state is kept in (\ref{3}) then
the RG process ends, since there is already a
single state to represent the ground state of the system.
A dimerized spin chain is a typical example of this
type of states, where $(\bullet \; \bullet)'$ is
given by the singlet formed by two spin 1/2 of the chain.
In general however one keeps more than one state per block
$(\bullet \; \bullet)'$ and so one can continue
the RG method choosing $(\bullet \; \bullet)'$ as
the new site $\bullet'$, i.e.
\begin{equation}
(\bullet \; \bullet)' \rightarrow \bullet'
\label{4}
\end{equation}
\noindent
The renormalized chain has therefore $N/2$ effective
sites $\bullet'$, which can again be blocked as in (\ref{1}),
\begin{equation}
((\bullet \; \bullet)' \; ( \bullet \; \bullet)')' \;
((\bullet \; \bullet)' \; ( \bullet \; \bullet)')'\;
((\bullet \; \bullet)' \; ( \bullet \; \bullet)')' \;
((\bullet \; \bullet)' \; ( \bullet \; \bullet)')'
\label{5}
\end{equation}
Performing
two more blockings and truncation operations we finally get
\begin{equation}
((((\bullet \; \bullet)' \; ( \bullet \; \bullet)')' \;
((\bullet \; \bullet)' \; ( \bullet \; \bullet)')')'\;
(((\bullet \; \bullet)' \; ( \bullet \; \bullet)')' \;
((\bullet \; \bullet)' \; ( \bullet \; \bullet)')')')'
\label{6}
\end{equation}
Eq.(\ref{6}) is the final step of the RG method since the
whole chain with $N$ sites has been reduced to a single
effective site whose dynamics can a priori be easily found
by solving the final renormalized effective Hamiltonian.
\subsection*{Wilsonian Blocking}
In his solution of the Kondo impurity problem Wilson \cite{wilson}
introduced a numerical RG method where a single block
is grown by adding momentum shells following the so
called onion scheme.
The real space version of this method is summarized
in the following eq.
\begin{equation}
(((((((((((((((\bullet \; \bullet)' \; \bullet)' \; \bullet)' \;
\bullet)' \; \bullet)' \; \bullet)' \; \bullet)'\;
\bullet)' \; \bullet)' \; \bullet)' \; \bullet)' \;
\bullet)' \; \bullet)' \; \bullet)' \; \bullet)'
\label{7}
\end{equation}
\noindent where we have used the same notations as for the
Kadanoff blocking. While the Kadanoff blocking
follows the pattern $B_{\ell} B_{\ell} \rightarrow
B'_{2 \ell}$, the Wilsonian scheme follows the pattern
$B_\ell \; \bullet \rightarrow B'_{\ell+1}$, where
$B_\ell$ denotes a block with $\ell$ sites.
At first sight the two schemes seem completely unrelated.
Notice however that the number of left and right parenthesis in
eqs. (\ref{6}) and (\ref{7}) is the same, namely $N-1=15$.
What is different is the order of the brackets.
The condition for the Kadanoff and Wilsonian blockings to be equivalent
can be formulated as follows,
\begin{equation}
((B_1 \; B_2)'\; B_3)' = ( B_1 \;( B_2 \; B_3)')'
\label{8}
\end{equation}
\noindent
where $B_i (i=1,2,3)$ denote
generic blocks containing one or more sites.
In particular for $N=4$ one can prove using (\ref{8})
\begin{equation}
((\bullet \; \bullet)' \; (\bullet \; \bullet)')' =
(((\bullet \; \bullet)' \; \bullet)' \; \bullet)'
\label{9}
\end{equation}
Eq.(\ref{8}) is reminiscent of the associativity of the
tensor product of representations in group theory and more
precisely in quantum group theory (see below). This
equation certainly holds if there is no truncation
of degrees of freedom, i.e. $(B_1 \; B_2)' = (B_1 \; B_2)$,
in which case it amounts to the equivalence between different
bases. In group theory the relation
between different bases is given by $6j$ symbols.
Quantum groups are $q$-deformations of classical
groups ($q=1$ in this notation) where some
of the commutator and addition rules are deformed
(for a review see \cite{q-book}).
The representation theory of quantum groups
for generic deformation parameter $q$
is analogous to that of classical groups.
However, when $q$ is a root of unity,
things change completely. First of all,
there is a finite number of regular
irreps, and
their tensor product
is also truncated while keeping the associativity
condition (\ref{8}).
The existence of an associative truncated tensor product
is a common feature of the DMRG, quantum groups and Conformal
Field Theory (CFT) (more on this point below).
\subsection*{DMRG Blocking}
There are two DMRG algorithms to study open
chains. The infinite system algorithm uses
the superblock $B_{\ell} \; \bullet \; \bullet B^R_\ell$
to grow the chain from both sides according
to the Wilsonian scheme \cite{W1}. The
block $B_\ell \bullet$ is then truncated to
a new block $B'_{\ell +1}$. In this manner
the size of the system grows indefinitely
until one reaches a fixed point beyond
which the numerical results reproduce themselves.
This method is very good in computing bulk
properties of the system like the ground state
energy density. In many cases however it is more convenient
to study finite size systems whose large
distance properties can later
on be obtained through finite-size scaling techniques.
This is notably the case of gapless systems.
The DMRG algorithm used in these cases is called
the finite system method and it is extremely accurate.
The first steps of this method use the infinite system
algorithm to grow both sides of the chain independently
until the left and right blocks are half the size of the chain.
The chain with $N$ (even) sites
is then obtained by joining a left block $B_{N/2}$
and a right block $B_{N/2}^R$ as follows (the superscript
$R$ in $ B_{N/2}^R $ indicates that it can be obtained from the reflection
of a left block),
\begin{equation}
((((((((\bullet \; \bullet)' \; \bullet)' \; \bullet)' \;
\bullet)' \; \bullet)' \; \bullet)' \; \bullet)'
(\bullet \; (\bullet \; (\bullet \; (\bullet \;
(\bullet \; (\bullet \; (\bullet \; \bullet)')')')')')')')'
\label{10}
\end{equation}
The superblock (\ref{10}) is used in the DMRG to
enlarge the left block from $B_{N/2}$ to $B_{N/2 +1}$
while the right block is reduced from $B^R_{N/2}$ to
$B^R_{N/2-1}$, in which case we get
\begin{equation}
(((((((((\bullet \; \bullet)' \; \bullet)' \; \bullet)' \;
\bullet)' \; \bullet)' \; \bullet)' \; \bullet)'
\bullet)' \; (\bullet \; (\bullet \; (\bullet \;
(\bullet \; (\bullet \; (\bullet \; \bullet)')')')')')')'
\label{11}
\end{equation}
If the associativity eq.(\ref{8}) would hold then
the blockings (\ref{10}) and (\ref{11}) would give
the same result for the GS of the whole chain
but of course this is not the case. The next step
is to again enlarge the left block at the expense
of the right one until one reaches the right hand side.
There one reverses the trend and grows the right blocks
at the expense of the left ones. After several sweeps
of this back-and-forth algorithm
the GS energy and GS wave function converge to fixed
values which are independent of the size
of left and right blocks.
At this point the splitting of the chain into left and right
blocks is independent of their size, so that
the associativity constraint (\ref{8})
is effectively fulfilled.
The analysis performed so far is rather formal but
helps to abstract the blocking
procedure which is common to all the real space RG methods.
As a by-product we have shown that
the blocking and the iteration procedures have to
be considered as combined strategies to achieve
the same goal which is to reduce the whole
system to a single effective site. From a formal
point of view blocking is like tensoring representations.
In this sense the RG steps can be seen as
``putting parentheses'' around the blocks. An exact RG method
would be one for which the final result is independent
of the way the parentheses are placed. This leads us to the associativity
constraint (\ref{8}), whose fulfillment is the actual goal
of any exact RG method.
\section*{The Standard RG Algorithm}
In all the real space RG methods
there is an algorithm to truncate the collection of two blocks
$B_1 B_2$ down to
a new effective block $ B'_{12}$ where
$B_1$ or $B_2$ may stand also for a single site.
The standard RG algorithm consists of the following steps:
1) diagonalization of the Hamiltonian $H_{B_1 B_2}$
for the combined
block $B_1 B_2$, 2) truncation to the lowest energy
states of $H_{B_1 B_2}$ and 3) change of basis to
the new states kept and renormalization of the old
Hamiltonian.
This method leads to many problems whose
origin was first pointed out in reference \cite{pinbox}
following a suggestion by Wilson. Studying
the very simple problem of a particle in a box
the authors of reference \cite{pinbox} interpreted
the bad performance of the standard RG
as being due to an incorrect treatment of the boundary
conditions applied on a block by its neighbours.
In other words the truncation $B_1 B_2 \rightarrow
(B_1 B_2)'$ has to take into account the presence
of say a third block $B_3$ in contact with the former
ones. The key idea is to consider a superblock
$B_1 B_2 B_3$ where the effect of $B_3$ into
the other two blocks can be properly
considered.
An alternative RG-solution to the particle-in-a-box problem,
which also takes into account the effect of boundary conditions
has been given in \cite{role}.
\section*{The DMRG Algorithm}
Let us choose a superblock made out of three
blocks $B_1 B_2 B_3$. The middle block
is taken to be a single site or two sites
$B_2 = \bullet$ or $\bullet \; \bullet$. Then one
constructs the Hamiltonian $H_{B_1 B_2 B_3}$
describing the dynamics of the superblock and finds out
a given state called the target state, which is usually
the ground state of the superblock,
which can be written as
\begin{equation}
|\psi \rangle = \sum_{i_1, i_2, i_3} \psi_{i_1 i_2 i_3}
\; |i_1, i_2, i_3 \rangle
\label{12}
\end{equation}
\noindent where $i_a$ runs from 1 up to $m_a$ for $a=1,2,3$.
The superblock can be regarded
either as $((B_1 B_2) B_3)$ or as $(B_1 (B_2 B_3))$.
Correspondingly the target wave function
can be written in two different manners,
\begin{eqnarray}
& \psi_{i_1 i_2 i_3} = \sum_{\alpha} \,
U_{i_1 i_2, \alpha} \,D^{(12)3}_\alpha \, V_{\alpha,i_3} & \label{13} \\
& \psi_{i_1 i_2 i_3} = \sum_{\beta} \,
U_{i_1, \beta} \,D^{1(23)}_\beta \, V_{\beta, i_2 i_3} & \nonumber
\end{eqnarray}
\noindent where $U$ and $V$ are matrices which ``diagonalize'' the wavefunction
and satisfy the orthogonality conditions,
\begin{eqnarray}
& \sum_{i_1 i_2} \; U^*_{i_1 i_2, \alpha} \; U_{i_1 i_2, \alpha'}
= \delta_{\alpha, \alpha'} & \label{14} \\
& \sum_{i_3} \; V^*_{\alpha, i_3} \; V_{\alpha', i_3}
= \delta_{\alpha, \alpha'} & \label{15}
\end{eqnarray}
We have used in
(\ref{13}) the singular value
decomposition (SVD) of a matrix \cite{W1}.
$D^{(12)3}$ and $D^{1(23)}$ are the singular values of
$\psi$ regarded as a $(m_1 m_2) \times m_3$ or as a $m_1 \times (m_2 m_3)$
matrix. Eqs.(\ref{13}) are the key to the DMRG method.
Let us imagine for a moment that $D_\alpha^{(12)3}$ and
$D^{1(23)}_\beta$ are zero for certain values of $\alpha$ and
$\beta$. In this case it is clear that we can truncate
the states of $B_1 B_2$ (resp. $B_2 B_3$) down to a smaller set of states
$\alpha$ (resp. $\beta$)
for which $D^{(12)3}_\alpha$ (resp. $D^{1(23)}_\beta$) is non zero
without losing any information needed to reconstruct
the target state $\psi$. Rather than performing the SVD
of $\psi$ it is more convenient to define the density
matrices for the subsystems $(12)$ and $(23)$ inside
the whole system $(123)$ \cite{W1},
\begin{eqnarray}
& \rho_{i_1 i_2, i_1' i_2'}^{(12)} =
\sum_{i_3} \; \psi^*_{i_1 i_2 i_3} \; \psi_{i_1' i_2' i_3} & \label{16} \\
& \rho_{i_2 i_3, i_2' i_3'}^{(23)} =
\sum_{i_1} \; \psi^*_{i_1 i_2 i_3} \; \psi_{i_1 i_2' i_3'} & \nonumber
\end{eqnarray}
Now using (\ref{13}), (\ref{15}) we get,
\begin{eqnarray}
& \rho_{i_1 i_2, i_1' i_2'}^{(12)} =
\sum_{\alpha} \; U^*_{i_1 i_2, \alpha} \; \left( D^{(12)3}_\alpha \right)^2
\; U_{i_1' i_2', \alpha} & \label{17} \\
& \rho^{(23)}_{i_2 i_3, i'_2 i'_3} = \sum_{\beta}
\; V^*_{\beta,i_2 i_3} \; \left( D^{1(23)}_{\beta} \right)^2
\; V_{\beta, i'_2 i'_3} & \nonumber
\end{eqnarray}
Eqs.(\ref{17}) mean that $w_\alpha^{(12)} =
(D^{(12)3}_\alpha)^2$ are the eigenvalues
of the density matrix $\rho^{(12)}$ while $U$ is the unitary matrix
which diagonalizes $\rho^{(12)}$ (similar properties hold
for the density matrix $\rho^{(23)}$.)
Let us call $m$ the number of states kept per block in a DMRG computation.
This number typically varies between 10 and 1000 depending on the computer
resources.
The DMRG algorithm consists in choosing the $m$ most
probable states $\alpha$, i.e. the states with the highest values of
$w_\alpha$ (let us sort them as
$w_1 \geq w_2 \geq w_3 \geq \dots \geq w_{m_1 m_2}$).
This guarantees
the best possible representation of
the target state $\psi$ for every given value of $m$.
Moreover the sum
$P(m)= \sum_{\alpha=1}^{m} w_\alpha$
of the probabilities of the $m$ states kept gives a reasonable measure
of the accuracy, the truncation error being $1-P(m)$ (recall that
tr$ \rho^{(12)}= \sum_\alpha w_\alpha =1$ and hence $P(m) \leq 1$).
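The role of the density-matrix eigenvalues can be illustrated with a small toy state (the $2\times 3$ amplitudes below are arbitrary, with the block indices combined into a single index of dimension 2 and the environment into an index of dimension 3):

```python
import math

# An arbitrary "target state" psi, then normalized
psi = [[0.6, 0.1, 0.3],
       [0.2, 0.5, 0.4]]
norm = math.sqrt(sum(x * x for row in psi for x in row))
psi = [[x / norm for x in row] for row in psi]

# Reduced density matrix rho^(block) = psi psi^T (real amplitudes), 2x2
rho = [[sum(psi[a][k] * psi[b][k] for k in range(3)) for b in range(2)]
       for a in range(2)]

# Eigenvalues of a 2x2 symmetric matrix in closed form; these equal the
# squared singular values D^2 of psi, i.e. the weights w_alpha
tr = rho[0][0] + rho[1][1]
det = rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
w = sorted([(tr + disc) / 2.0, (tr - disc) / 2.0], reverse=True)

print(w)        # w_1 >= w_2 >= 0
print(sum(w))   # ~1 up to rounding: tr rho = 1 for a normalized state
print(w[0])     # P(m=1): weight captured if only one state is kept
```

Keeping the states with the largest $w_\alpha$ is exactly the DMRG truncation; $1-P(m)$ is the weight discarded.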
In many of the 1d models studied with the DMRG it turns out that
the probability $w_\alpha$ is concentrated in a few states and
that it decays exponentially fast. This implies that with small
values of $m$ one can achieve a great accuracy in representing
the target state. This is certainly the case for systems with
a finite correlation length\cite{OR,Jorge,JM}.
For systems with an infinite correlation length
one has to study finite systems and adjust the number
of states kept $m$ to the correlation length due to
the finite size \cite{ABO}.
\section*{The DMRG versus Quantum Group Theory and CFT}
There are certain formal analogies between the DMRG and the
theory of quantum groups and Conformal Field
Theories (CFT) which we shall review below.
First of all
the DMRG truncation of states in $(B_1 B_2)'$ has strong
similarities with the truncated tensor product of irreps
of a $q$-group where $q$ is a root of unity \cite{duality}. Let us choose
for example the quantum group $SU(2)_q$ which is a
$q$-deformation of the rotation group $SU(2)$.
For generic values of $q$ the representation theory
of $SU(2)_q$ is similar to that of $SU(2)$, i.e.
every irrep corresponds to an integer or half integer
spin $j=0, 1/2, \dots$ and the tensor product
of irreps satisfies the standard Clebsch-Gordan
decomposition. However, if $q$ is a root of unity,
$q= e^{2 \pi i/(k+2)} $ then there is only a finite
number of regular irreps corresponding to the spins
$j=0, 1/2, \dots, k/2$. The tensor product
of these irreps is
a truncated version of the classical CG decomposition,
\begin{equation}
(V_{j_1} \otimes V_{j_2} )'=
\oplus_{j=|j_1-j_2|}^{{\rm min}(j_1+j_2, k-j_1-j_2)}
\; V_j
\label{18}
\end{equation}
\noindent $V_j$ denotes the vector space of dimension
$2j+1$ associated to the irrep with spin $j$.
It is interesting to observe that the truncated
tensor product (\ref{18}) satisfies the associativity
condition (\ref{8}), namely \cite{duality},
\begin{equation}
((V_1 \otimes V_2)' \otimes V_3)' = ( V_1 \otimes( V_2 \otimes V_3)')'
\label{19}
\end{equation}
This equation is a consequence of the co-associativity of the
comultiplication of the quantum group $SU(2)_q$. In more
physical terms, eq.(\ref{19}) follows from the non trivial
addition rule of angular momenta in $SU(2)_q$.
The regular irreps have positive $q$-dimension which is defined
as \cite{duality},
\begin{equation}
d_j \equiv [2j+1]_q \equiv \frac{q^{(2j+1)/2} -
q^{-(2j+1)/2}}{q^{1/2}- q^{-1/2}}
\label{20}
\end{equation}
The $q$-dimension of an irrep plays a role similar to
the eigenvalues $w_\alpha$ of the density matrix
in the sense that irreps with zero $q$-dimension
are thrown away in the tensor product just like
in the DMRG truncation.
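At a root of unity, eq.(\ref{20}) reduces to a ratio of sines, $d_j = \sin\!\left((2j+1)\pi/(k+2)\right)/\sin\!\left(\pi/(k+2)\right)$, which makes the vanishing of the non-regular $q$-dimensions explicit. A short sketch (the level $k=3$ is an arbitrary illustrative choice; spins are passed doubled so half-integers stay integers):

```python
import math

def q_dim(two_j, k):
    """q-dimension [2j+1]_q at q = exp(2*pi*i/(k+2)); 2j passed as integer."""
    return math.sin((two_j + 1) * math.pi / (k + 2)) / math.sin(math.pi / (k + 2))

k = 3
for two_j in range(k + 2):            # j = 0, 1/2, ..., (k+1)/2
    print(two_j / 2, round(q_dim(two_j, k), 6))
# The regular irreps j = 0, ..., k/2 have positive q-dimension, while
# j = (k+1)/2 has q-dimension zero and drops out of the tensor product.
```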
Based on this analogy we conjecture
that a $q$-group invariant Hamiltonian, like
the XXZ open chain with $q$ a root of unity,
when studied with
DMRG methods will yield a density matrix with vanishing
eigenvalues corresponding to non regular irreps.
The DMRG truncation of these states has to
agree with the $q$-group truncation of the
non regular states \cite{q-group}.
Quantum groups with $q$ a root of unity are
intimately related to rational CFT's (RCFT).
Indeed in a RCFT
there is a finite number of primary fields $\phi_a
( a= 1, \dots, M)$,
which are in one-to-one correspondence
with the regular irreps of the associated $q$-group \cite{duality}.
Hence from the previous relation between the DMRG
and $q$-groups we may expect a relationship between
RCFT's and the DMRG. More generally
in a CFT there are
null states in the Verma modules of the primary
fields, whose norm is zero. As shown by
Belavin, Polyakov and Zamolodchikov (BPZ) the
decoupling of null vectors
leads to a set of partial differential equations
for the conformal blocks of the theory, in terms
of which one can construct all the correlators
of the theory \cite{BPZ}. It is tempting to suggest that
the BPZ decoupling of null vectors is
the field theoretical version of the DMRG truncation.
On the other hand
the analogue of the tensor product decomposition
is given by the fusion rules of the primary fields,
\begin{equation}
\phi_a \otimes \phi_b = N_{a, b}^c \; \phi_c
\label{fusion}
\end{equation}
\noindent where $N_{a,b}^c$ is an integer which counts
how many times the primary field $\phi_c$ appears in
the Operator Product Expansion (OPE) of $\phi_a$ and $\phi_b$.
The associativity of the OPE, i.e.
\begin{equation}
((\phi_a \otimes \phi_b) \otimes \phi_c) =
(\phi_a \otimes (\phi_b \otimes \phi_c))
\label{associa}
\end{equation}
\noindent
implies a nontrivial equation for the fusion coefficients $N_{a,b}^c$,
namely
\begin{equation}
\sum_d \; N_{a,b}^d \; N_{d,c}^f = \;
\sum_d \; N_{a,d}^f \; N_{b,c}^d
\label{a-fusion}
\end{equation}
An example of an RCFT is given by the $SU(2)_k$ WZW model
at level $k$ \cite{su2}. The primary fields $\phi_j$ are labelled by the
spin $j=0,1/2,\dots, k/2$, while the fusion rules are given by
eq.~(\ref{18}) with the translation $V_j \rightarrow \phi_j$.
Indeed, as shown in reference \cite{duality}, there is a one-to-one
correspondence between the $SU(2)_k$ WZW model and the quantum
group $SU(2)_q$.
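The truncated fusion rules are simple enough to verify the associativity constraint (\ref{a-fusion}) by brute force. The following sketch (not part of the original lectures; the doubled-spin labels $a = 2j$ are only a bookkeeping convention) encodes $N_{a,b}^c$ for $SU(2)_k$ and checks eq.~(\ref{a-fusion}) for every choice of external labels:

```python
def fusion_N(a, b, c, k):
    """SU(2)_k fusion multiplicity N_{ab}^c in doubled-spin labels a = 2j = 0..k."""
    if (a + b + c) % 2:            # j_a + j_b + j_c must be an integer
        return 0
    return int(abs(a - b) <= c <= min(a + b, 2 * k - a - b))

k = 3
labels = range(k + 1)
for a in labels:
    for b in labels:
        for c in labels:
            for f in labels:
                lhs = sum(fusion_N(a, b, d, k) * fusion_N(d, c, f, k) for d in labels)
                rhs = sum(fusion_N(a, d, f, k) * fusion_N(b, c, d, k) for d in labels)
                assert lhs == rhs, (a, b, c, f)
print("associativity constraint (a-fusion) holds for SU(2)_3")
```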
Another aspect of the relation between CFT, Integrable Systems and the DMRG
concerns the explanation of
the exponential decay of the
eigenvalues of the density matrix.
An approach to study this connection
is through the
relation between the DMRG density matrix and the
Corner Transfer Matrix (CTM) of Baxter first pointed out by
Nishino \cite{nishino1} in his application
of the DMRG to classical statistical mechanical
models in 2D. As shown by Baxter \cite{Baxter}
in an integrable system the eigenvalues of the CTM
have a very simple structure, i.e. they go as $a^n $
with $n$ an integer. One can recognize here the
exponential decay of the eigenvalues of the density
matrix \cite{oku},\cite{peschel}.
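The practical consequence of such a geometric spectrum can be quantified with a toy calculation (the value of $a$ below is hypothetical, chosen only for illustration): if the density-matrix eigenvalues behave as $w_n \propto a^n$, the weight discarded when keeping $m$ states falls off as $a^m$, which is why modest values of $m$ already give very accurate DMRG results.

```python
a = 0.4   # hypothetical ratio |a| < 1 of consecutive eigenvalues

def truncation_error(m, nmax=200):
    """Discarded weight of the normalized spectrum w_n = (1 - a) * a**n."""
    return sum((1 - a) * a ** n for n in range(m, nmax))

for m in (4, 8, 16):
    print(m, truncation_error(m))   # decays geometrically, ~ a**m
```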
There are still many aspects to clarify in the relation
between the DMRG and CFT and more generally integrable
systems. This could be a fruitful subject in the near future.
\section*{The DMRG and the Matrix Product Ans\"atze}
The DMRG is a variational method which generates an ansatz
for the ground state (GS) and the excited states. This implies in particular
that the DMRG GS energy is an upper bound
on the exact GS energy. The variational ansatz
generated by the DMRG is of the matrix product (MP) type.
This fact was shown by \"{O}stlund and Rommer
in the thermodynamic limit
of the DMRG in the case of the spin-1 chain \cite{OR}. These authors
proposed that one could get very good results
for the GS energy and spin gap by using an MP ansatz
which corresponds to a small value of $m$
in the DMRG. The excitations could also be constructed
as Bloch waves on the MP state.
To understand why the DMRG gives rise to an MP state
we return to eq.~(\ref{13}). If $|i_1\rangle, |i_2\rangle$
and $|\alpha\rangle$ denote bases of the Hilbert spaces
associated to $B_1, B_2 $ and
$ (B_1 B_2)'$, then the relation between these
bases is
\begin{equation}
|\alpha\rangle = \sum_{i_1=1}^{m_1}
\sum_{i_2=1}^{m_2} \; U_{i_1 i_2, \alpha} \; |i_1\rangle \; |i_2\rangle,
\;\;\; (\alpha = 1, \dots, m)
\label{21}
\end{equation}
Since $B_2$ is usually a single lattice site $\bullet$, we shall
write eq.~(\ref{21}) in the following form,
\begin{equation}
|\alpha\rangle_{N} = \sum_{\beta, s} \; A^N_{\alpha, \beta}[s] \;
|\beta\rangle_{N-1} \; |s_N\rangle
\label{22}
\end{equation}
\noindent where $|s_N\rangle$ denotes the local state associated
to the site located at the $N^{{\rm th}}$ position of the chain,
while $|\alpha\rangle_N$ and $|\beta\rangle_{N-1}$ are the states kept
for the blocks of lengths $N$ and $N-1$ respectively.
$A^N_{\alpha, \beta}[s]$ is an $m \times m$ matrix for each
value of $s$.
Iterating (\ref{22}) until reaching the boundary of the chain
one gets,
\begin{equation}
|\alpha_N, \alpha_0\rangle_N = \left( A^{N}[s_N] \; A^{N-1}[s_{N-1}] \dots
A^{1}[s_1]\right)_{\alpha_N, \alpha_0} \; |s_1\rangle \dots |s_{N-1}\rangle \; |s_N\rangle
\label{23}
\end{equation}
\noindent where the matrix multiplication of the $A^n[s_n]$ matrices
is implicit. $|\alpha_N, \alpha_0\rangle$ is a collection of states
of an open chain with $N$ sites labelled by the pair
$(\alpha_N, \alpha_0)$. For a closed chain with periodic
boundary conditions the ansatz becomes
\begin{equation}
|\psi\rangle_N = {\rm Tr} (A^{N}[s_N] \; A^{N-1}[s_{N-1}] \dots
A^{1}[s_1]) \; |s_1\rangle \dots |s_{N-1}\rangle \; |s_N\rangle
\label{24}
\end{equation}
A further simplification of (\ref{24}) is to
assume that all the matrices $A^n[s_n]$
are independent of $n$, i.e. $A^n[s_n] = A[s_n]$ (for all $n$).
This assumption
can be justified
in the thermodynamic limit of the DMRG
where it reaches a fixed point \cite{OR}.
However, for finite systems, and especially
for open boundary conditions, there will be a nontrivial dependence
of $A^n[s_n]$ on $n$. In this sense the DMRG gives
a non-homogeneous MP ansatz.
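For a homogeneous ansatz the closed-chain state (\ref{24}) can be evaluated directly. The sketch below (with random, purely illustrative matrices $A[s]$) computes the amplitudes by the trace formula and checks that the norm of the state equals the trace of the $N$-th power of an $m^2 \times m^2$ transfer matrix — anticipating the object defined in eq.~(\ref{26}):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
m, ms, N = 3, 2, 6                  # bond dimension, site dimension, sites
A = rng.normal(size=(ms, m, m))     # homogeneous (site-independent) A[s]

def amplitude(config):
    """psi(s_1, ..., s_N) = Tr(A[s_N] ... A[s_1]), eq. (24)."""
    M = np.eye(m)
    for s in config:
        M = A[s] @ M
    return np.trace(M)

# norm by brute force over all ms**N configurations ...
norm2 = sum(amplitude(c) ** 2 for c in itertools.product(range(ms), repeat=N))
# ... equals Tr(E^N), with E = sum_s A[s] (x) A[s]* of size m^2 x m^2
E = sum(np.kron(A[s], A[s].conj()) for s in range(ms))
print(np.isclose(norm2, np.trace(np.linalg.matrix_power(E, N)).real))  # True
```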
In eq.~(\ref{22}) we may require that the states
$|\alpha\rangle$ form an orthonormal set,
given that both $|\beta\rangle$ and $|s\rangle$ are orthonormal
sets. This implies the following equation for $A^N[s]$:
\begin{equation}
\sum_{\beta,s} (A^N_{\alpha, \beta}[s])^* \;
A^N_{\alpha', \beta}[s] = \;\; \delta_{\alpha, \alpha'}
\label{25}
\end{equation}
\noindent which is nothing other than eq.~(\ref{14}).
This equation expresses the fact that $A^N$ relates orthonormal bases.
Recall, however, that it is not simply a change of basis,
because we are truncating states, i.e. $m < m_1 m_2$.
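In other words, eq.~(\ref{25}) states that $A^N$, viewed as an $m \times (m_1 m_2)$ matrix, is an isometry. A minimal numerical sketch (hypothetical dimensions; the isometry is generated by a QR decomposition rather than by an actual density-matrix diagonalization):

```python
import numpy as np

m, m1, ms = 4, 6, 2                  # kept states, block states, site dimension
rng = np.random.default_rng(1)

# an isometry with orthonormal rows plays the role of the truncated change
# of basis U in eq. (21); a QR decomposition produces one directly
Q, _ = np.linalg.qr(rng.normal(size=(m1 * ms, m)))
A = Q.conj().T.reshape(m, m1, ms)    # A^N_{alpha, beta}[s] -> A[alpha, beta, s]

# normalization condition, eq. (25): sum_{beta,s} A*[a,b,s] A[a',b,s] = delta
gram = np.einsum('abs,cbs->ac', A.conj(), A)
print(np.allclose(gram, np.eye(m)))  # True: the kept states are orthonormal
```

Note that $m < m_1 m_s$ here, so $A$ is a genuine truncation, not a change of basis.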
Given the MP ans\"atze (\ref{23}) and (\ref{24}) for open
and closed chains, respectively, we can use a standard variational
method to find the amplitudes $A[s]$ which minimize
the energy of the ansatz. In references \cite{Jorge,JM}
it was shown that when $N$ is large this minimization
procedure is similar to that of the DMRG, and that
in fact there is a hidden density matrix even though
the algorithm does not try to follow the DMRG method.
One way to see this is to define
the following transfer matrix,
\begin{equation}
T_{\alpha \alpha', \beta \beta'}^N =
\sum_s \; (A^{N}_{\alpha, \beta}[s])^* \; A_{\alpha', \beta'}^N[s]
\label{26}
\end{equation}
\noindent $T^N$ is an $m^2 \times m^2$ matrix which serves
to relate matrix elements of operators between states
of lengths $N$ and $N-1$, for example
\begin{equation}
_N \langle \alpha| {\cal O} | \alpha'\rangle_N =
\sum_{\beta, \beta'} \, T_{\alpha \alpha', \beta \beta'}^N
\; \; _{N-1}\langle \beta| {\cal O}| \beta'\rangle_{N-1}
\label{27}
\end{equation}
\noindent We are assuming in (\ref{27}) that
the operator ${\cal O}$ does not act on the $N^{\rm th}$ site.
The normalization condition (\ref{25}) implies that
$T$ has a right eigenvector with eigenvalue
1 given by $\delta_{\alpha, \alpha'}$, namely
\begin{equation}
\sum_{\beta \beta'} T_{\alpha \alpha', \beta \beta'}^N
\delta_{\beta, \beta'} =
\delta_{\alpha, \alpha'}
\label{28}
\end{equation}
It then follows that $T$ has a left eigenvector
with eigenvalue 1, i.e.
\begin{equation}
\sum_{\alpha \alpha'} \rho_{\alpha \alpha'}^N
T_{\alpha \alpha', \beta \beta'}^N =
\rho_{\beta, \beta'}^N
\label{29}
\end{equation}
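Both eigenvector statements are easy to check numerically. The sketch below (random isometric $A$ matrices, for illustration only) builds $T$ as in (\ref{26}) and verifies eq.~(\ref{28}) together with the existence of the eigenvalue 1 required by eq.~(\ref{29}):

```python
import numpy as np

m, ms = 3, 2
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(m * ms, m)))
A = Q.conj().T.reshape(m, m, ms)     # satisfies the normalization (25)

# T_{alpha alpha', beta beta'} = sum_s A*[alpha, beta, s] A[alpha', beta', s]
T = np.einsum('abs,cds->acbd', A.conj(), A).reshape(m * m, m * m)

delta = np.eye(m).reshape(-1)        # vec(delta_{beta, beta'})
print(np.allclose(T @ delta, delta))                # right eigenvector, eq. (28)
print(np.any(np.isclose(np.linalg.eigvals(T), 1)))  # eigenvalue 1 exists, eq. (29)
```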
In references \cite{Jorge,JM}
it was shown that $\rho_{\alpha, \alpha'}^N$ can be identified
with a density matrix and that one is really minimizing the
expectation value of the Hamiltonian
in the following mixed
state,
\begin{equation}
\rho^N = \sum_{\alpha \alpha'} \rho^{N}_{\alpha \alpha'}
|\alpha \rangle_N \; _N\langle \alpha'|
\label{30}
\end{equation}
From this point of view
the collection of states $|\alpha\rangle_N$ of the MP method
can be interpreted as the most probable ones that contribute
to the GS wave function of a system with $N + 1 + N =2N +1$ sites.
The important conclusion to be drawn from the previous considerations
is that the MP ansatz leads in a natural way to the DMRG algorithm.
This may be interesting regarding further generalizations of
the DMRG to higher dimensions.
\section*{Matrix Product Ans\"atze in 2D}
The DMRG algorithm can be generalized to
ladders (i.e. collections of a finite number of chains),
and large
clusters.
This has been done, obtaining remarkable
results which are difficult to obtain with other
algorithms \cite{wns}.
However, it has also been shown that the efficiency
of the DMRG diminishes with
the width of the system \cite{Liang}. The DMRG algorithm appears
to be essentially one dimensional in the sense that
the RG steps follow a linear pattern no matter
whether the system is 1D or higher dimensional.
Of course any higher dimensional system can be converted into
a 1D system by allowing non local interactions.
However, locality seems to be the key to the great
performance of the DMRG in 1D. Hence a truly
higher-dimensional version of the DMRG should
try to keep locality as a guideline.
That this is in principle possible is suggested
by reference \cite{otsuka} where a DMRG algorithm
is given for a Bethe lattice, whose dimensionality is
actually infinite.
Also recently, Niggemann et al. \cite{niggemann} have
constructed a two-dimensional tensor-product ansatz to
describe the ground state of a 2D quantum system.
Similarly, Nishino and Okunishi have proposed a
Density Matrix algorithm for 3D classical statistical
mechanics models \cite{nishino3D}.
Another approach is suggested by the equivalence of the
Matrix Product approach and the DMRG for 1D or quasi-1D
systems. We do not know at the moment what the
formulation of the DMRG in 2D is, but we do know that
in 2D there are MP states which were first constructed
by Affleck, Kennedy, Lieb and Tasaki (AKLT) \cite{AKLT}. These
states are valence bond solid states where one
connects local spins through local bonds.
When trying to generalize the 1D MP states
to 2D we find that there are
two possible types
of MP states which can be conveniently named
as vertex-MP and face-MP states, using standard
Statistical Mechanics terminology \cite{Baxter}.
\subsection*{Vertex-Matrix Product Ans\"atze}
A vertex model in 2D Statistical Mechanics (SM)
is a model defined on a square
lattice and such that the lattice variables $i,j, \dots$ live
on the links while the interaction takes place on the vertices
\cite{Baxter}. The Boltzmann weight thus depends on four variables,
$W_{i j k l}$, and the whole partition function is obtained
by multiplying the Boltzmann weights of all vertices and
then summing over the values
of the lattice variables. The 6-vertex and 8-vertex
models are the canonical examples of these types of models,
which have been shown to be integrable.
Motivated by these vertex models we shall define
a vertex-MP state
in terms of a set of amplitudes
\begin{equation}
A_{\alpha, \beta}^{\gamma, \delta}[s] ,\;\;
(\alpha, \beta, \dots = 1, \dots, m; \;\; s=1, \dots,m_s)
\label{2D1}
\end{equation}
\noindent
where the labels
$\alpha, \beta, \gamma, \delta$ are associated with the links of the
square lattice while $s$ labels the quantum state, e.g. spin,
associated to the vertex where the 4-links $\alpha, \beta, \gamma,\delta$
meet. $A_{\alpha, \beta}^{\gamma, \delta}[s]$ is a sort of
Boltzmann weight of a vertex model. The vertex-MP wave function
$\psi(s_1, s_2, \dots, s_N)$ can be obtained by multiplying
all the Boltzmann weights
$A_{\alpha_i, \beta_i}^{\gamma_i, \delta_i}[s_i]$
and contracting and summing over the link variables according to the same
pattern as in a vertex model in Statistical Mechanics \cite{Baxter}.
Hence the value of the wave function
$\psi(s_1, s_2, \dots, s_N)$ is given by the partition function
of a vertex model where the Boltzmann weights depend on the value
of the local states $s_i$.
This construction for the square lattice is equivalent to the so
called ``vertex-state representation'' of Niggemann et al. for
the hexagonal lattice \cite{niggeman}.
This construction resembles the one proposed by Laughlin
concerning the Fractional Quantum Hall effect (FQHE) \cite{hall}.
More explicitly, Laughlin proposed in \cite{hall} a
variational wave function
$\psi_m(z_1, \dots, z_N)$ for the ground state of the
$N$ electrons in the lowest Landau level of a
FQHE with filling factor $\nu=1/m$. The norm $|\psi_m|^2$
of the Laughlin wave function can be interpreted as the
Boltzmann weight of a classical one-component plasma
constituted by $N$ negative charges of magnitude $m$
in a uniform background of positive charges. The charge
neutrality of the plasma guarantees its stability.
In our case we also have an associated Statistical Mechanical
model given by the Boltzmann weights of a vertex model.
If we compute the norm of the wave function
$\psi(s_1, \dots, s_N)$, we can perform the summation
over the ``spin'' indices $s$, in which case the
norm $\langle\psi| \psi \rangle$ of the vertex-MP state
is given by
the partition function of another vertex model whose
Boltzmann weights are defined as,
\begin{equation}
R_{\alpha \alpha', \beta \beta'}^{\gamma \gamma', \delta \delta'}
= \sum_s \; A_{\alpha, \beta}^{\gamma, \delta}[s] \;
A_{\alpha', \beta'}^{\gamma', \delta'}[s]
\label{2D2}
\end{equation}
This $R$ matrix is the 2D version of the
$T$ matrix defined in (\ref{26}).
The computation of the norm of
$|\psi\rangle$ can
be in general a difficult task. However, if the
model defined by the weights (\ref{2D2}) turns
out to be integrable, then we could find the
exact norm in the thermodynamic limit.
The face-MP models can be defined in a similar manner
by a set of variational parameters as in (\ref{2D1}), where
the variables $\alpha, \dots$ are now associated with the
vertices of the squares, while the quantum variable $s$
is associated with the face whose vertices are
$\alpha, \beta, \gamma, \delta$.
This is similar to the face or Interaction Round a Face models (IRF) in
Statistical Mechanics \cite{Baxter}.
Hence in 2D there are two generic ways to produce MP ans\"atze,
which are in fact the straightforward generalizations of the
1D MP ans\"atze. These two generalizations
suggest using some well-known
models, such as the 6-vertex model, to test some of the ideas
presented above.
In summary, we have tried to show in this
contribution some interesting connections
among seemingly unrelated methods in
condensed matter and field theory.
Much remains to be done along this direction.
\subsection*{Acknowledgements}
We would like first of all
to thank J. Dukelsky, T. Nishino, S. R. White, D.J. Scalapino
and J.M. Rom\'an for their collaboration and many discussions on the
several aspects related to the work presented in these lectures.
G.S. would like to thank the organizers of the
workshop ``Recent Developments in Exact Renormalization Group''
held at Faro, A. Krasnitz, Y. Kubyshin, R. Neves,
R. Potting and P. S\'{a}
for their kind invitation to lecture in this meeting and for
their warm hospitality.
We acknowledge support from the DGES under contract PB96-0906.
\section*{Introduction}
The nature of dark matter in the Universe remains a challenging question.
Even if new measurements confirm that we live in a low
$\Omega_{\text{matter}}$ universe ($\Omega_{\text{matter}}\sim 0.3 - 0.4 $)
\cite{perlm98,riess98} a considerable amount of nonbaryonic
dark matter is needed.
WIMPs (weakly interacting massive particles) are among the most
discussed candidates
\cite{jung96}, being well motivated from early universe physics
\cite{kolb94} and supersymmetry \cite{hab85}.
WIMP detection experiments can decide whether WIMPs
dominate the halo of our Galaxy. For this reason, considerable
effort is made towards direct WIMP search experiments which look
for energy depositions from elastic WIMP--nucleus scattering
\cite{smith90}. Germanium experiments designed for the search for neutrinoless
double beta decay were among the first to set limits of this kind
\cite{ahlen87,caldwell88}.
The Heidelberg--Moscow experiment gave the most stringent upper limits on
spin--independent WIMP interactions \cite{beck94} until recently.
The present best limits on the WIMP--nucleon cross section
come from the DAMA NaI Experiment \cite{bern97}.
The Heidelberg--Moscow experiment operates five enriched
$^{76}$Ge detectors with an active mass of 10.96 kg in the Gran
Sasso Underground Laboratory. It is optimized for the search for the
neutrinoless double beta decay of $^{76}$Ge in the energy region of 2038
keV. For a detailed description of the experiment
and latest results see \cite{heidmo}.
In this paper we report on results from one of the enriched
Ge detectors, which took data in a period of 0.249 years
in a special configuration developed for low energy measurements.
A lower energy threshold and a smaller background counting rate
have been achieved with the same detector as used in 1994
\cite{beck94}, mainly due to the lower cosmogenic activities in the
Ge crystal and in the surrounding copper after four years without
activation.
\section*{Experimental setup}
The utilized detector is a coaxial, intrinsic p--type HPGe detector
with an active mass of 2.758 kg. The enrichment in $^{76}$Ge is
86\%. The sensitivity to spin--dependent interactions
becomes negligible, since $^{73}$Ge, the only stable Ge isotope
with nonzero spin, is deenriched to 0.12\% (7.8\% for natural Ge).
The detector has been in the Gran Sasso Underground Laboratory since
September 1991; a detailed description of its background can be found in
\cite{heidmo}.
The data acquisition system allows an event-by-event
sampling and pulse shape measurements.
The energy output of the preamplifier is divided and amplified with two
different shaping time constants, 2 $\mu$s and 4 $\mu$s. The fast 2
$\mu$s signal
serves as a stop signal for the 250 MHz flash analogue to digital
converter (FADC) which records the
pulse shape of each interaction.
The best energy resolution and thus lowest energy
threshold is obtained with the 4 $\mu$s shaped signal.
A third branch of the energy output is shaped with 3 $\mu$s and
amplified to record
events up to 8 MeV in order to identify radioactive impurities
contributing to the background. The spectra are measured with 13-bit ADCs,
which also release the
trigger for an event via a peak-detect signal.
Further triggers are vetoed until the
complete event information, including the pulse shape, has been recorded.
To record the pulse shape (for details see \cite{laura-nim})
the timing output of the preamplifier is
divided into four branches, each signal being integrated and
differentiated in filtering amplifiers (TFAs) with different time
constants. The TFAs are used since the charge current is
integrated within the preamplifier. The signals are amplified
to record low as well as high--energetic pulses.
\section*{Data analysis}
An energy threshold of 9 keV has been
reached. This rather high value is due to the large detector size and
a 50 cm distance between FET and detector. Both effects lead to a higher
system capacitance and thus to an enhancement of the baseline
noise.
We calibrate the detector with a standard $^{152}$Eu--$^{228}$Th source.
The energy resolution at
727 keV is (2.37 $\pm$ 0.01) keV. In order to determine the energy
resolution at 0 keV
the dependence of the full width at half maximum (FWHM) on the energy
is approx imated with an empirical function
$y=\sqrt{a + b\,x + c\,x^2}$ (y = resolution, x = energy) in a
$\chi ^2$ fit. The best fit ($\chi ^2$/DOF = 0.09) is
obtained for the parameters a = 3.8, b = 2.2$\times 10^{-3}$, c = 5$\times 10^{-7}$.
The zero energy
resolution is (2 $\pm$ 0.01) keV.
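For illustration, evaluating the fitted resolution function at the quoted best-fit parameters reproduces both numbers (a sketch of the arithmetic only):

```python
import math

a, b, c = 3.8, 2.2e-3, 5e-7          # best-fit parameters, chi^2/DOF = 0.09

def fwhm(x):
    """Empirical resolution y = sqrt(a + b*x + c*x**2), x and y in keV."""
    return math.sqrt(a + b * x + c * x * x)

print(round(fwhm(0.0), 2))    # 1.95, the quoted ~2 keV zero-energy resolution
print(round(fwhm(727.0), 2))  # 2.38, consistent with the measured 2.37 keV
```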
To determine the energy threshold, a reliable energy calibration at low
energies is required. The lowest-energy line observed in the
detector was at 74.97 keV (the K$_\alpha$ line of $^{208}$Pb).
This is due to the rather thick copper shield of the
crystal (2 mm), which absorbs all low-energy $\gamma$ lines.
Thus an extrapolation of the energy calibration to low energies is
needed, which induces an error of (1--2) keV.
Another possibility is to use a precision pulser in order to
determine the channel of the energy spectrum which corresponds to zero
voltage pulser output. Since the slope of the energy calibration is
independent of the intercept and can be determined with a calibration
source, this method yields an accurate value for the intercept
of the calibration.
The same method to determine the offset of the calibration is also used
by \cite{beck94} and \cite{reusser91}. The pulser calibration reduces
the extrapolated 9 keV threshold systematically by (1--2) keV. In order to
give conservative results, we use the 9 keV threshold for data analysis.
In Fig.~\ref{burst} the energy deposition as a function of time is
plotted. Accumulation of events (bursts) with energy depositions up to
30 keV can be seen. They are irregularly distributed in time with
counting rate excesses up to a factor of five.
These events can be generated
by microphonics (small changes in the detector capacitance due to
changes in temperature or mechanical vibrations) or by a raised
electronic noise on the baseline.
Although rare, they lead to an enhancement
of the count rate in the low energy region. A possibility to deal
with microphonics would be to exclude
the few days with an enhanced count rate from the sum spectrum, as
in \cite{ahlen87}. This would, however, lead to unwanted losses of
measuring time. Another time-filtering method is applied in the following.
The complete measuring time is divided into 30-minute intervals and
the probability for N events to occur in one time interval is
computed for energy depositions between 9--850 keV. In the histogram in
Fig.~\ref{poisson} the physical events are Poisson distributed. The mean
value of the distribution is (2.65 $\pm$ 0.06) events/30 min and
$\sigma$=(1.67$\pm$ 0.05). The cut is set at the value N = 2.65 +
3$\,\sigma \approx $ 8. With this cut less than 0.01\% of the
undisturbed 30 minutes intervals are rejected. The initial
exposure of the measurement was 0.7 kg$\,$yr,
after the time cut the exposure is 0.69 kg$\,$yr. In this
way, more than 98\% of the initial data are used. A similar method to reduce
the microphonic noise in the low energy region was also applied by \cite{beck94}.
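The arithmetic behind the cut can be sketched as follows (using the quoted mean and $\sigma$; the survival fraction of an ideal Poisson process is computed from its cumulative distribution):

```python
import math

mean, sigma = 2.65, 1.67             # measured events per 30-minute interval

cut = round(mean + 3 * sigma)        # 3-sigma cut: N = 2.65 + 3*sigma -> 8
# fraction of genuinely Poissonian intervals lost by rejecting N > cut
tail = 1.0 - sum(math.exp(-mean) * mean ** n / math.factorial(n)
                 for n in range(cut + 1))
print(cut, tail)
```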
Another way to reject microphonic events would be to analyze the
recorded pulse shapes of each event. In a former experiment we
showed \cite{laura-nim} that pulse shapes of nuclear recoil and
$\gamma$ interactions are indistinguishable within the timing
resolution of Ge detectors. Thus a reduction of $\gamma$--ray
background based on pulse shape discrimination (PSD) is not possible.
Consequently $\gamma$ sources can be employed to calibrate a PSD
method against microphonics, which was shown to reveal a different
pattern in the pulse shape. Since such a pulse shape analysing method
is still under development, we use only the Poisson--time--cut method
in this paper.
Figure \ref{sumspec} shows the sum spectrum after the time cut.
The background counting rate in the energy region between 9 keV
and 30 keV is 0.081 cts/(kg$\,$d$\,$keV) [between 15 keV and 40 keV:
0.042 cts/(kg$\,$d$\,$keV)]. This is about a factor of
two better than the background level reached by \cite{beck94} with
the same Ge detector. Table~\ref{table1} gives the number of counts
per 1 keV bin for the energy region between (9--50) keV. The dominating
background contribution in the low--energy region from the U/Th natural
decay chain can be identified in Fig.~\ref{sumspec} via the 352 keV and
609 keV lines (the continuous beta spectrum from $^{210}$Bi originates from this chain).
\section*{Dark Matter Limits}
The evaluation of dark matter limits on the WIMP--nucleon cross section
$\sigma_{\rm scalar}^{\rm W-N}$ follows the conservative
assumption that the whole experimental spectrum consists of
WIMP events.
Consequently, excess events from calculated
WIMP spectra above the experimental spectrum in any energy region
with a minimum width of the energy resolution of the detector
are forbidden (to a given confidence limit).
The parameters used in the calculation of expected WIMP spectra are
summarized in Table ~\ref{tab:parameter}. We use formulas given
in the extensive reviews \cite{rita,lewin} for a truncated
Maxwell velocity distribution in an isothermal WIMP--halo
model (truncation at the escape velocity, compare also \cite{freese}).
Since $^{76}$Ge is a spin zero nucleus, we give cross section limits for
WIMP--nucleon scalar interactions only. For these, we used the
Bessel form factor (see \cite{lewin} and references therein) for the
parametrization of coherence loss, adopting a skin thickness of 1 fm.
Another correction which has to be applied for a semiconductor
ionization detector is the ionization efficiency. There exist
analytic expressions \cite{smith90} for this efficiency, especially for
germanium detectors, as well as multiple experimental results measuring this
quantity (see \cite{laura-nim} and references therein). According to our
measurements \cite{laura-nim} we give a simple relation between visible
energies and recoil energies:
$E_{\text{vis}} = 0.14\,E_{\text{recoil}}^{1.19}$.
This relation has been checked for consistency with the relation
from \cite{smith90} in the relevant low energy region above our threshold.
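As a sketch of what this relation implies, inverting it shows that the 9 keV visible-energy threshold corresponds to a nuclear-recoil energy of roughly 33 keV:

```python
def visible(e_recoil):
    """Quoted ionization-efficiency relation, energies in keV."""
    return 0.14 * e_recoil ** 1.19

def recoil(e_vis):
    """Inverse of the relation above."""
    return (e_vis / 0.14) ** (1 / 1.19)

print(round(recoil(9.0), 1))   # ~33 keV recoil for the 9 keV visible threshold
```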
After calculating the WIMP spectrum for a given WIMP mass, the scalar
cross section is the only free parameter which is then used to fit the
expected to the measured spectrum (see Fig.~\ref{spectrum-wimp})
using a one--parameter
maximum--likelihood fit algorithm. According to the underlying
hypothesis (see above) we check during the fit for excess events above
the experimental spectrum (for a one--sided 90\% C.L.) using a sliding,
variable energy window. The minimum width of this energy window
is 5 keV, corresponding to 2.5 times the FWHM of the
detector (6$\sigma$ width).
The minimum of cross section values obtained via these multiple fits of
the expected to the measured spectrum gives the limit.
Figure ~\ref{spectrum-wimp} shows a comparison between the measured
spectrum and the calculated WIMP spectrum for a 100 GeV WIMP mass.
The solid curve represents the fitted WIMP spectrum using a minimum width of
5 keV for the energy window. The minimum is found in the energy region
between 15 keV and 20 keV.
The dashed line is the result of the fit
if the energy window width equals the full spectrum width. It is easy to see
that in this case the obtained limit would be much too conservative,
leading to a loss of the information one gets from the measured
spectrum.
\section*{Conclusions}
The new upper limit exclusion plot in the
$\sigma_{\text{scalar}}^{\text{W-N}}$ versus M$_{\text{WIMP}}$
plane is shown in Fig.~\ref{dm_limits}. Since we do not use any
background subtraction in our analysis, we consider our limit to be
conservative.
We are now sensitive to WIMP masses greater than 13 GeV and to
cross sections as low as 1.12$\times$10$^{-5}$ pb
(for $\rho=0.3$ GeV/cm$^{3}$).
At the same time we start to enter the region (evidence--contour)
allowed with 90\% C.L. if the preliminary analysis of
4549 kg days of data by the DAMA NaI Experiment \cite{DAMA}
are interpreted as an evidence for an annual modulation
effect due to a spin independent coupled WIMP.
Should the effect be confirmed with much higher statistics
(20 000 kg days are now being analyzed by the DAMA Collaboration
\cite{rita98}) it could become crucial to test the region
using a different detector technique and a different target material.
Also shown in the figure are recent limits from the CDMS Experiment
\cite{cdms98}, from the DAMA Experiment \cite{bern97},
as well as expectations for new dark matter experiments like
CDMS \cite{cdms98}, HDMS \cite{hdms97} and for our recently proposed
experiment GENIUS \cite{genius97}.
Not shown is the limit from the UKDM Experiment \cite{ukdm}
which lies somewhere between the two germanium limits.
After a measuring period of 0.69 kg$\,$yr with one of the enriched
germanium detectors of the Heidelberg--Moscow experiment,
the background level decreased to 0.0419
counts/(kg$\,$d$\,$keV) in the low energy region.
The WIMP--nucleon cross section limits for spin--independent
interactions are the most stringent limits obtained
so far by using essentially raw data without background subtraction.
An improvement in sensitivity could be reached after a longer measuring
period. Higher statistics would allow the identification of the various
radioactive background sources and would open the possibility of a
quantitative and model--independent background description via a Monte
Carlo simulation. Such a background model has already been established
for three of the enriched Ge detectors in the Heidelberg--Moscow
experiment
and has been successfully applied in the evaluation of the
2$\nu\beta\beta$ decay \cite{heidmo}. A subtraction of background
in the low energy region in the form of a phenomenological straight line
based on a quantitative background model for the full energy region
(9 keV -- 8 MeV) would lead to a further improvement
in sensitivity. Background
subtractions for dark matter evaluations of Ge experiments were
already applied by \cite{garcia}.
Another way to reject radioactive background originating from multiple
scattered photons would be an active shielding in the immediate
vicinity of the measuring crystal. This method is applied in our
new dark matter search experiment, HDMS (Heidelberg Dark Matter
Search) \cite{hdms97}. HDMS is situated in the Gran Sasso Underground
Laboratory and started operation this year.
\acknowledgments
The Heidelberg--Moscow experiment was supported by the
Bundesministerium f\"ur Forschung und Technologie der Bundesrepublik
Deutschland, the State Committee of Atomic Energy of Russia and the
Istituto Nazionale di Fisica Nucleare of Italy. L.B. was supported by
the Graduiertenkolleg of the University of Heidelberg.
\section{Introduction}
Let me begin by thanking the Local Organising Committee for kindly
inviting me to attend the Symposium and for the large amount of time and
effort that went into making my arrival in Protvino at all possible. I
should also like to take this opportunity to congratulate them for
succeeding, notwithstanding the obvious difficulties, in attaining a
balanced mix of informative and instructive talks from both
experimentalists and theorists alike.
\subsection{General Outline}
Given the generously broad title assigned to me, I have attempted to
at least touch upon those subjects not already covered by other
plenary or parallel speakers. From the schedule, it emerged that the
following areas were among those least represented in the theoretical
talks (although some have been partially covered in the experimental
presentations):
\begin{itemize} \itemsep0pt \parskip0pt
\item \underbar{global $g_1$ data analysis}---renormalisation scheme
dependence, positivity at LO and NLO, and hyperon $\beta$-decays;
\item \underbar{transverse spin}---twist-three, single-spin
asymmetries, transversity, and inequalities;
\item \underbar{orbital angular momentum}---evolution and gauge
dependence.
\end{itemize}
Thus, following a few introductory notes on factorisation formul\ae\
and global spin sum rules, I shall endeavour to give a flavour of the
present stage of development of the above topics. That said, the use
and importance of inequalities in transverse-spin variables will only
fully emerge as and when precise data become available. Moreover, the
problem of gauge invariance in orbital angular momentum is
particularly technical and perhaps of little phenomenological
significance as yet. Therefore, while recently arousing increasing
theoretical interest, these last two topics will not be covered here.
Finally, to avoid unnecessary repetition, I shall omit detailed
definitions wherever possible, which may be found in either earlier
talks or the original literature.
\subsection{Factorisation Formul\ae}
Schematically, the cross-section for the hadronic process $AB\to{}CX$ is
\begin{equation}
\label{eq:factform}
F_A(x_A) \; \otimes \;
F_B(x_B) \; \otimes \;
d\hat\sigma \; \otimes \;
D_C(z_C),
\end{equation}
where $F_i(x_i)$ are partonic densities, $d\hat\sigma$ is the partonic
hard-scattering cross-section, and $D_i(z_i)$ is a fragmentation
function. The symbols $\otimes$ represent convolutions in $x_i$ and
$z_i$, the partonic longitudinal momentum fractions; and there is an
implicit sum over parton flavours and types. Each term in the above
expression has an expansion in both $\alpha_s$ and
twist.\footnote{Twist may usefully be viewed as simply a convenient
labelling or ordering of the power-suppressed contributions in the
asymptotic limit.}
Cross-section (\ref{eq:factform}) simplifies considerably in certain
cases: \emph{e.g.}, when one or more of the partons is replaced by a photon
(or weak boson), or if the final state is unobserved and is therefore
to be summed over. It is also important to recall that spin does not
represent an obstacle to the factorisation procedure nor to
application of the above formula: the quantities relating to polarised
particles are merely replaced by their spin-weighted counterparts
(single-spin asymmetries are slightly more involved, requiring some
form of angular weighting). It is instructive to recall the following
aspects of the formula:
\begin{itemize} \itemsep0pt \parskip0pt
\item radiative corrections induce logarithmic scale dependence in all
factors (expressed via an $\alpha_s$ expansion);
\item factorisation is carried out ``twist-by-twist'';
\item it is already more complicated at twist 3, in that diagrams
\emph{na{\"\i}vely} higher-order in $\alpha_s$ can contribute even
at leading order;
\item twist-3 cross-sections are constructed with one and only one of
the terms calculated at twist 3; the rest are calculated at twist 2,
as usual.
\end{itemize}
The third point above is a common source of error: na{\"\i}vely, one
might expect twist-3 effects to be due only to explicitly
higher-dimension terms, \emph{e.g.}, the quark mass. However, it is now known
that the dominant twist-3 contributions come from diagrams with an
extra partonic leg,\cite{Twist3} associated with an \emph{apparent}
extra power of $\alpha_s$. Moreover, relations involving twist-2
contributions require that the factor of $\alpha_s$ be absorbed into
the correlation functions,\cite{Twist3} thus promoting such
contributions to truly leading order in $\alpha_s$. Hence, the
\emph{only} suppression asymptotically is the typical $M/p_T$
associated with twist 3, which means that one might reasonably
\emph{expect} such effects to be large: \emph{e.g.}, even for $p_T\sim10$\,GeV
(assuming the natural mass scale $M$ to be of the order of the nucleon
mass) the asymmetries should be of order 10\%.
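This back-of-envelope estimate is just the single power of $M/p_T$; a minimal arithmetic check (assuming, as stated above, that the natural mass scale $M$ is of order the nucleon mass):

```python
# Twist-3 contributions are suppressed by a single power of M/p_T.
# Assumption: natural mass scale M taken as the nucleon mass (~0.94 GeV).
M = 0.94    # GeV
pT = 10.0   # GeV

suppression = M / pT
print(f"M/p_T = {suppression:.3f}")  # roughly 10%, hence sizeable asymmetries
```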
A complication now emerging, with the realisation that large twist-3
single-spin asymmetries (SSA) may exist, is that there are many
possible sources. At next-to-leading twist (\emph{i.e.}, three), all terms but
one in the above factorisation product are taken at leading twist
(two) and just one term at the next contributing twist (three). Thus,
we are faced with the problem of isolating the true source among
several possibilities, which might all turn out to contribute.
\subsection{Global Sum Rules}
Another important and intuitive decomposition is that of the $z$-axis
projection of the total nucleon spin:
\begin{equation}
J_z^p
= \frac12
= \frac12 \Delta \Sigma + \Delta g + L_z^{q+g},
\end{equation}
together with the twin sum rule for the transverse projection:
\begin{equation}
J_T^p
= \frac12
= \frac12 \Delta_T \Sigma + \Delta_T g + L_T^{q+g}.
\end{equation}
I include the transverse-spin sum rule merely as a reminder of its
existence. There are extra subtleties here: for example, the
densities, $\Delta_T\Sigma$, have twist-3 contributions (absent for
longitudinal polarisation).
Difficulties in the definitions of partonic densities are caused by
both scheme and gauge dependence:
\begin{description} \itemsep0pt \parskip0pt
\item[(\emph{i})\hphantom{i}] renormalisation ambiguities mix
  $\Delta\Sigma$ and $\Delta{g}$ at NLO;
\item[(\emph{ii})] the separation into spin and orbital components is
  gauge dependent.
\end{description}
To some extent, the problem of gauge dependence is circumvented by the
natural axial-gauge choice in factorisation proofs and formul\ae.
However, the problem of identifying operators with meaningful physical
quantities is fraught with ambiguity. Much attention has recently been
paid to the orbital angular momentum case; \cite{GI-OAM} for lack of
space the reader is referred to the literature.
\section{Global \protect\autobf{$g_1$} data analysis}
\subsection{Positivity in Parton Densities}
The experimental asymmetry is expressed (at leading twist) as
\begin{equation}
A_1
\equiv
\frac{\sigma_{1/2}-\sigma_{3/2}}
{\sigma_{1/2}+\sigma_{3/2}}
=
\frac{g_1(x,Q^2)}{F_1(x,Q^2)}.
\end{equation}
Thus, $g_1$ is bounded by $F_1$: $|g_1|{\leq}F_1$. Now, at the
partonic level, $F_1$ and $g_1$ are defined in terms of sums and
differences of helicity densities:
\begin{equation}
f = f^\uparrow+f^\downarrow,
\qquad
\Delta f = f^\uparrow-f^\downarrow.
\end{equation}
Therefore, the positivity of $f^{\uparrow,\downarrow}$ \emph{would}
lead to a useful bound:
\begin{equation}
|\Delta f(x,Q^2)| \leq f(x,Q^2).
\end{equation}
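The logic of the bound can be spelt out in a few lines: for non-negative helicity densities $f^{\uparrow}$, $f^{\downarrow}$, the difference can never exceed the sum. A minimal sketch with toy numbers (not fitted densities):

```python
# LO positivity: f = f_up + f_down and Δf = f_up - f_down,
# with f_up, f_down >= 0 interpreted as probabilities.
def bound_holds(f_up, f_down):
    f, df = f_up + f_down, f_up - f_down
    return abs(df) <= f

# Toy helicity densities at a few x values (illustrative only):
samples = [(0.8, 0.2), (0.5, 0.5), (0.05, 0.30)]
assert all(bound_holds(u, d) for u, d in samples)

# A negative "density" (possible beyond LO) can violate the bound:
assert not bound_holds(-0.1, 0.05)
```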
However, beyond LO there is no guarantee of positivity; even quark
helicity flip is possible (via the ABJ axial anomaly).
In this respect, note that, \emph{in principle}, even the
\emph{un}-polarised densities could become negative (owing to virtual
corrections). Only in the na{\"\i}ve parton model (or LO) are the
densities positive definite (by definition). At NLO, the ambiguities
inherent in the choice of renormalisation scheme make negative
densities possible in particular schemes. To understand this, recall
that \emph{physical} quantities correspond to partonic densities
multiplied by coefficient functions (a power series in $\alpha_s$);
beyond LO, partonic densities themselves should not be thought of as
physically measurable quantities. In fact, positivity is partially
rescued by the fact that if higher-order corrections became large
enough to change the sign, perturbation theory would not be valid.
\subsection{Positivity Beyond LO}
Including the NLO corrections, the inequalities take on the following
form (moment-by-moment but suppressing $N$ and $Q^2$ for
clarity): \cite{Altarelli:1998nb,Forte:1998x1}
\begin{equation}
\frac{
\left|
\left( 1 + \frac{\alpha_s}{2\pi} \Delta C^d_\Sigma \right) \Delta\Sigma
+ \frac{\alpha_s}{2\pi} \Delta C^d_g \Delta g
\right|
}
{
\left( 1 + \frac{\alpha_s}{2\pi} C^d_\Sigma \right) \Sigma
+ \frac{\alpha_s}{2\pi} C^d_g g
}
\leq 1,
\end{equation}
using DIS as the natural \emph{defining} process for quark densities.
And
\begin{equation}
\frac{
\left|
\left( 1 + \frac{\alpha_s}{2\pi} \Delta C^h_g \right) \Delta g
+ \frac{\alpha_s}{2\pi} \Delta C^h_\Sigma \Delta \Sigma
\right|
}
{
\left( 1 + \frac{\alpha_s}{2\pi} C^h_g \right) g
+ \frac{\alpha_s}{2\pi} C^h_\Sigma \Sigma
}
\leq 1,
\end{equation}
using Higgs production as a possible \emph{defining} process for the
gluon density.\cite{Altarelli:1998nb,Forte:1998x1} This is actually a
\emph{gedanken} experiment, in which one imagines producing a Higgs
particle via a gluon-proton collision. The bounds so derived are shown
for two example moments in fig.~\ref{fig-far2}. Such bounds may be
useful to pin down the shape of $\Delta{g(x)}$, see
fig.~\ref{fig-far3}.
\begin{figure}[htb]
\epsfig{figure=fig-far2b.eps,width=55mm}\hfill
\epsfig{figure=fig-far2c.eps,width=55mm}\relax
\caption{\label{fig-far2}
The LO (dashed lines) and NLO (solid lines) positivity bounds on
$\Delta\Sigma(N)$ and $\Delta{g(N)}$ for $Q^2=1$\,GeV$^2$ and $N=2$,
5, from Altarelli \etal.\protect\cite{Altarelli:1998nb}}
\end{figure}
\begin{figure}[htb]
\epsfig{figure=fig-far3a.eps,width=55mm}\hfill
\epsfig{figure=fig-far3b.eps,width=55mm}
\caption{\label{fig-far3}
The maximal gluon density at $Q^2=1$\,GeV$^2$ obtained from LO
(dashed lines) and NLO (solid lines) positivity bounds, using
polarized quark densities from two fits of Altarelli
\etal.\protect\cite{Altarelli:1996nm} The corresponding best-fit
polarized gluon density is also shown (dot-dashed).}
\end{figure}
At present, $\Delta{g(x)}$ is essentially determined via scaling
violations alone, which fix only the low moments with any precision,
since $|\gamma_{qg}|\ll|\gamma_{qq}|$ for large $N$ (see
fig.~\ref{fig-far4}):
\begin{figure}[htb]
\centering
\epsfig{figure=fig-far4.eps,width=55mm}
\caption{\label{fig-far4}
The LO anomalous dimensions $\gamma_{qq}(N)$ and $\gamma_{qg}(N)$
(respectively, the top and bottom curves at small $N$) as a function
of $N$, from Forte \etal.\protect\cite{Forte:1998x1}}
\end{figure}
\begin{equation}
\frac{d}{dt} \, g_1^{\rm singlet}
=
\frac{\langle e^2\rangle}{2} \frac{\alpha_s}{2\pi}
\left[\rule[0pt]{0pt}{2ex}
\gamma_{qq} \Delta\Sigma + 2n_f \gamma_{qg}\Delta g
\right]
+ \mathrm{O}(\alpha_s^2).
\end{equation}
\subsection{More on Positivity}
Analysis of the evolution of individual spin components shows the
problem to be partially ``self-curing''; \cite{Bourrely:1998x1} at LO
the IR-singular terms (with the usual $+$ prescription) lead to
\begin{equation}
\frac{dq(x)}{dt} =
\frac{\alpha_s}{2\pi}
\left[
\int_x^1 dy \frac{q(y)}{y} P\left(\frac{x}{y}\right)
- q(x) \int_0^1 dz P(z)
\right].
\end{equation}
The second term cannot change the sign of $q(x)$ as it is diagonal in
$x$: as $q(x)$ approaches zero, so too does the very term driving it
toward the sign change. Full $q$-$g$ mixing leads to (for example)
\begin{eqnarray}
\frac{dq_+(x)}{dt}
&=&
\frac{\alpha_s}{2\pi}
\left[
P_{++}^{qq}\left(\frac{x}{y}\right) \otimes q_+(y)
+ P_{+-}^{qq}\left(\frac{x}{y}\right) \otimes q_-(y)
\right.
\nonumber\\
& &
\quad
\left.
+ P_{++}^{qg}\left(\frac{x}{y}\right) \otimes g_+(y)
+ P_{+-}^{qg}\left(\frac{x}{y}\right) \otimes g_-(y)
\right].
\end{eqnarray}
Again, the only singular terms in this and the three companion
equations are diagonal (in parton type too) and therefore cannot spoil
positivity.
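The diagonal-subtraction argument can be illustrated with a toy evolution equation (a sketch only, not the actual kernels): treat the off-diagonal convolution as a non-negative source $S$ and the diagonal term as $-kq$:

```python
# Toy version of dq/dt = (convolution term) - q(x) * ∫P:
# at q = 0 the derivative equals the source S >= 0, so q cannot
# evolve through zero. Simple Euler integration with toy numbers.
q, S, k, dt = 0.01, 0.05, 3.0, 0.001
for _ in range(10000):
    q += dt * (S - k * q)

assert q > 0              # positivity preserved
assert abs(q - S / k) < 1e-6   # q relaxes towards the fixed point S/k
```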
The result survives at NLO,\cite{Bourrely:1998x1} except for a
small violation in the $qg$ and $gq$ kernels at $x\sim0.7$, which, it
is conjectured, might be cured by an appropriate choice of $\gamma_5$
scheme. Thus, positivity may be a useful addition to the data fitters'
armoury. However, care is required to avoid those schemes in which it
could cause undesirable bias in fit results.
\subsection{Renormalisation Scheme Choice}
In order to analyse data, a certain amount of theoretical input is
necessary. Thus, there are several other issues (some of which are
apparently exquisitely theoretical) requiring careful examination
since they can in fact have a significant impact on the outcome of
global data fits involving parton evolution:
\begin{itemize} \itemsep0pt \parskip0pt
\item definition of polarised gluon and singlet-quark densities,
\item small-$x$ extrapolation,
\item choice of initial parametrisation.
\end{itemize}
A peculiarity of the $\overline{\rm MS}$ scheme, though mathematically
acceptable, is that some soft contributions are included in the Wilson
coefficient functions, rather than being absorbed into the parton densities.
Consequently, the first moment of $\Delta\Sigma$ is not conserved and
it is difficult to compare the DIS results on $\Delta\Sigma$ with
constituent quark models at low $Q^2$. To avoid such oddities, Ball
\etal.\cite{Ball:1996td} have introduced the so-called Adler-Bardeen
(AB) scheme, now a common choice. The AB scheme involves a minimal
modification of the $\overline{\rm MS}$ scheme; the polarised singlet quark
density is fixed to be scale independent at one loop:
\begin{equation}
a_0(Q^2) \; = \;
\Delta\Sigma \, - \, n_f \frac{\alpha_s(Q^2)}{2\pi} \, \Delta g(Q^2).
\end{equation}
Other factorisation schemes will not alter $\Delta{g(Q^2)}$ greatly
but may, in contrast, cause $\Delta\Sigma$ to vary considerably. As a
result, the values of $a_0(\infty)$ and $\Delta\Sigma$ will be very
different. Recall that $\Delta{g(Q^2)}$ grows as $1/\alpha_s(Q^2)$.
Of course, the difference between any two schemes lies in the
(unknown) higher-order terms. Thus, comparison of results between two
schemes (\emph{e.g.}, AB and $\overline{\rm MS}$) could also shed light on the importance
of the NNLO corrections.
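To get a feel for the size of the anomaly term in the AB-scheme combination above, here is a numeric sketch with purely illustrative input values (hypothetical, not fit results):

```python
import math

# AB-scheme invariant: a0 = ΔΣ - n_f (α_s / 2π) Δg
n_f = 3
alpha_s = 0.25        # illustrative value at a low scale
delta_sigma = 0.38    # illustrative singlet quark polarisation
delta_g = 1.5         # illustrative gluon polarisation

a0 = delta_sigma - n_f * alpha_s / (2 * math.pi) * delta_g
print(f"a0 = {a0:.2f}")  # the anomaly term lowers a0 well below ΔΣ
```

Since $\Delta{g}$ grows as $1/\alpha_s$, the product $\alpha_s\Delta g$, and hence the gap between $a_0$ and $\Delta\Sigma$, does not die away with scale.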
Zijlstra and van Neerven \cite{Zijlstra:1994x1} have pointed out that
the AB scheme described above is just one of a family of schemes
keeping $\Delta{q_{NS}}$ scale independent:
\begin{equation}
\pmatrix{\Delta\Sigma\cr \Delta g}_a =
\pmatrix{\Delta\Sigma\cr \Delta g}_{\overline{\rm MS}}
+ \frac{\alpha_s}{2\pi} \pmatrix{0 &z_{qg}(x;a)\cr
0 &0 } \otimes
\pmatrix{\Delta\Sigma\cr \Delta g}_{\overline{\rm MS}},
\end{equation}
where $z_{qg}(x;a)=n_f[(2x-1)(a-1)+2(1-x)]$. The AB scheme
corresponds to $a=2$; Leader \etal.\cite{Leader:1998x1} propose yet
another scheme they call the JET scheme, in which $a=1$. In this
scheme all hard effects are absorbed into the coefficient functions
and the gluon coefficient is as it appears in $pp{\to}JJ+X$.
The transformation between the $\overline{\rm MS}$ and JET schemes is then given
by the following (suppressing the $Q^2$ dependence):
\begin{eqnarray}
\Delta \Sigma^{(n)}_{\rm JET}
&=&
\Delta \Sigma^{(n)}_{\overline{\rm MS}} +
\frac{n_f\alpha_s}{2\pi} \frac{2}{n(n+1)} \Delta g^{(n)}_{\overline{\rm MS}},
\\
\Delta g^{(n)}_{\rm JET}
&=&
\Delta g^{(n)}_{\overline{\rm MS}}.
\end{eqnarray}
For example, such a transformation indicates that the polarised
strange sea, $\Delta{s}$, will be different in the two schemes. Of
course, AB and JET are the same for $n=1$. The analogous
transformation of the coefficient functions and anomalous dimensions
from the $\overline{\rm MS}$ to the AB scheme is given by replacing the factor
$2/n(n+1)$ with $1/n$. Thus, the ABJ anomaly, far from being an
obstacle, may provide a route to parton definitions of a physically
intuitive and meaningful form.
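The first-moment agreement of the two schemes follows directly from the moment-space factors quoted above, $2/n(n+1)$ for JET and $1/n$ for AB; a one-line check:

```python
# Moment-space mixing factors for the ΔΣ transformation from MS-bar:
def jet_factor(n):
    return 2 / (n * (n + 1))

def ab_factor(n):
    return 1 / n

# AB and JET coincide for the first moment only:
assert jet_factor(1) == ab_factor(1) == 1.0
assert jet_factor(2) != ab_factor(2)   # 1/3 versus 1/2
```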
\subsection{Small-\protect\autobf{$x$} Extrapolation}
The main problem with regard to parametrisation is the extrapolation
$x{\to}0$. As shown by De R\'ujula \cite{DeRujula:1974x1} and later
studied by Ball and Forte,\cite{BFR} PQCD evolution leads to the
following \emph{un}-polarised small-$x$ asymptotic behaviour:
\begin{eqnarray}
g &\sim& x^{-1}
\sigma^{-1/2}e^{2\gamma\sigma-\delta\zeta}
\left(1 +
\sum_{i=1}^n \epsilon^i\rho^{i+1}\alpha_s^i \right)
\\
\Sigma &\sim& x^{-1}
\rho^{-1}\sigma^{-1/2}e^{2\gamma\sigma-\delta\zeta}
\left(1 +
\sum_{i=1}^n \epsilon_f^i\rho^{i+1}\alpha_s^i \right),
\end{eqnarray}
where $\xi=\log{x_0/x}$,
$\zeta=\log{\left(\alpha_s(Q_0^2)/\alpha_s(Q^2)\right)}$,
$\sigma=\sqrt{\xi\zeta}$, $\rho=\sqrt{\xi/\zeta}$, and the
$\epsilon^i$ terms indicate $i$-th order corrections. In the
\emph{un}-polarised case, the leading singularity is carried by
gluons, which drive the singlet quark evolution. However, all
polarised singlet anomalous dimensions are singular and therefore
gluons and quarks ``mix''. Moreover, the asymptotic predictions hold
only for \emph{non}-singular input densities: a singular starting
point is preserved. It follows that the structure functions $xF_1$ and
$F_2$ rise at small $x$ more and more steeply as $Q^2$ increases,
though, for all finite $n$, never as steeply as a power of $x$.
All other parton densities $f$ ($f=q_{NS}$, $\Delta q_{NS}$,
$\Delta\Sigma$, $\Delta g$) behave as
\begin{equation}
f \sim
\sigma^{-1/2}e^{2\gamma_f\sigma-\delta_f\zeta}
\left( 1
+ \mbox{$\sum_{i=1}^n$} \epsilon_f^i\rho^{2i+1}\alpha_s^i
\right).
\end{equation}
These last are less singular than the unpolarised singlet densities by
a power of $x$, while the higher-order corrections are more important
at small $x$, since the exponent $i+1$ is replaced by $2i+1$: the
leading small-$N$ contributions to the anomalous dimensions at order
$\alpha_s^{i+1}$ are $\left(\alpha_s/(N-1)\right)^i$ in the
unpolarised singlet case, but $N\left(\alpha_s/N^2\right)^i$ for the
non-singlet and polarised densities. Altarelli \etal.\ obtain better
fits using a logarithmic form (rather than a power). Although this is
reminiscent of evolution effects and is compatible with Regge theory
too, no conclusions can be drawn from such results. As a final
comment, fits generally give good overall agreement with PQCD
evolution.
\subsection{Input Sea Symmetry Assumptions}
To fit data, assumptions for the sea polarisation are usually
necessary; a common choice is flavour symmetry:
$\Delta\bar{s}=\Delta\bar{u}=\Delta\bar{d}$. To test such a
hypothesis, Leader \etal.\cite{Leader:1998x1} note that if one allows
$\Delta\bar{s}=\Delta\bar{u}=\lambda\Delta\bar{d}$, then the data (via
$\beta$-decay couplings) fix $\Delta{q_{3,8}}$, $\Delta\Sigma$ and
$\Delta{G}$. Thus,
\begin{equation}
\Delta\bar{s} = \frac{1}{6} ( \Delta\Sigma - \Delta q_8 ),
\end{equation}
and therefore $\Delta\bar{s}$ clearly does not vary with $\lambda$.
On the other hand,
\begin{eqnarray}
\Delta u_v
&=&
\frac12 [ \hphantom-
\Delta q_3
+ \Delta q_8
- 4(\lambda-1)\Delta\bar{s}],
\\
\Delta d_v
&=&
\frac12 [
- \Delta q_3
+ \Delta q_8
- 4(\lambda-1)\Delta\bar{s}],
\end{eqnarray}
so, \emph{valence densities are sensitive to sea assumptions}.
However, the dependence on $\lambda$ can only arise via scaling
violation and is hence weak, as seen in the analysis (indeed, any
residual dependence is likely an artifact). Results for $\Delta\bar{s}$ should not change
significantly as the input value of $\lambda$ varies, thus testing the
analysis stability.
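The sensitivity of the valence first moments to the sea assumption can be made concrete; a numeric sketch with illustrative (hypothetical) inputs for $\Delta q_3$, $\Delta q_8$, and $\Delta\bar{s}$:

```python
# Valence combinations under the sea assumption Δs̄ = Δū = λ Δd̄.
# Illustrative inputs (hypothetical, not fit values):
dq3 = 1.26     # a3 = F + D
dq8 = 0.58     # a8 = 3F - D
dsbar = -0.05  # polarised strange sea

def valence(lam):
    du_v = 0.5 * ( dq3 + dq8 - 4 * (lam - 1) * dsbar)
    dd_v = 0.5 * (-dq3 + dq8 - 4 * (lam - 1) * dsbar)
    return du_v, dd_v

# SU(3)-symmetric sea versus a broken-symmetry choice:
print(tuple(round(v, 2) for v in valence(1.0)))  # (0.92, -0.34)
print(tuple(round(v, 2) for v in valence(0.5)))  # (0.87, -0.39)
```

Each valence moment shifts by $-2(\lambda-1)\Delta\bar{s}$, while $\Delta\bar{s}$ itself is untouched.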
\subsection{Input Non-Singlet Shape Assumptions}
In order to reduce the number of free parameters in the fitting
procedure, a further assumption sometimes adopted is
\begin{equation}
\Delta q_3(x,Q^2) \propto \Delta q_8(x,Q^2).
\end{equation}
While compatible with evolution (both are non-singlet densities), it
is by no means justified as a starting point: allowing the two
densities, $\Delta q_3(x,Q^2)$ and $\Delta q_8(x,Q^2)$, to vary
independently, significant differences are found,\cite{Leader:1998x1}
see fig.~\ref{fig-lss4} (recall the $u(x)-d(x)$ difference). Thus,
such an assumption will certainly distort parameter values and errors
obtained. Note too that the data do indeed constrain valence
densities well.
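The point that two non-singlet densities need not be proportional is easy to illustrate: with two toy input shapes (hypothetical, chosen only for illustration), the ratio varies strongly in $x$:

```python
# Toy non-singlet shapes (hypothetical, for illustration only):
def dq3(x):
    return x**0.5 * (1 - x)**3

def dq8(x):
    return x**0.8 * (1 - x)**5

ratios = [dq3(x) / dq8(x) for x in (0.1, 0.3, 0.6)]
print([round(r, 2) for r in ratios])  # [2.46, 2.93, 7.29]

# The ratio is far from constant, so imposing Δq3 ∝ Δq8 biases a fit:
assert max(ratios) / min(ratios) > 2
```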
\begin{figure}[htb]
\centering
\epsfig{figure=fig-lss4.eps,width=55mm}
\caption{\label{fig-lss4}
The ratio $\Delta{q_3(x,Q^2)}/\Delta{q_8(x,Q^2)}$, from Leader
\etal.\protect\cite{Leader:1998x1}}
\end{figure}
\subsection{Hyperon Data Input}
Together with the Bjorken sum-rule input, $a_3=F+D$, the
``hypercharge'' equivalent, $a_8=3F-D$, is also needed. The measured
baryon-octet $\beta$-decays can provide the extra information:
assuming SU$(3)_f$ symmetry, all hyperon semi-leptonic decays (HSD)
may be described moderately well in terms of the Cabibbo mixing angle
and precisely the two parameters required, $F$ and $D$.
The precision of the HSD data is better than presently needed for DIS
analyses. However, since SU(3) violation is typically of order 10\%,
one worries that the extracted values of the two parameters could
suffer the same order of shift.
There exist SU(3)-breaking analyses
returning $F/D\simeq0.5$ (\emph{cf.}\ the standard value: 0.58), but the poor
$\chi^2$ of all such fits casts doubt on their validity. Pure SU(3)
fits to the hyperon semi-leptonic decays are also often used. These
typically return $F/D\simeq0.575$, but again the fits are very poor:
$\chi^2\simeq2/$DoF. On the other hand, SU(3)-breaking fits with only
one new parameter give much better agreement:
$\chi^2\simeq1/$DoF.\cite{PGR-HSD} These fits return $F/D\simeq0.57$,
which is also stable with respect to the SU(3) breaking approach
adopted.
\section{Tests of Perturbation Theory}
One of the many ways to test PQCD is to compare $\alpha_s$ as
extracted in different processes. A particularly suggestive method
used of late is to show the order-by-order agreement in, \emph{e.g.}, the
Bjorken sum-rule (see fig.~\ref{fig:EKplot}). While data are
unambiguous, modulo the usual experimental uncertainties, such a plot
is misleading from a theoretical viewpoint: as commented from the
floor at this symposium, the na{\"\i}ve interpretation would
\emph{not} be convergence of the perturbation series to the correct
value, but rather an imminent crossing and possible premature divergence.
Although PQCD perturbation series are generally held to be asymptotic,
this is clearly not what is being displayed here.
The problem lies in the use of a fixed-order $\alpha_s$: it is simply
incorrect to use a fixed-order extraction of $\alpha_s$ in
variable-order predictions. In the majority of cases the perturbative
expansion displays monotonic behaviour (at least for the few known
terms), just as the Bjorken series. Hence, as the order of
perturbation theory used for extraction increases, the value of
$\alpha_s$ obtained decreases. Thus, taking the world mean $\alpha_s$
to be (on average) second order, a first-order extraction would
provide a relatively larger value and third and fourth orders,
progressively smaller. Correct order-by-order comparison would then
lead to the shifts indicated in fig.~\ref{fig:EKplot} and therefore
milder convergence.
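The mechanics can be seen in a toy model: take an observable with a positive NLO coefficient (an assumption mirroring the monotonic Bjorken series) and extract the coupling from the same measured value at two different truncations:

```python
# Toy observable O = α (1 + c α) with positive correction coefficient c.
O_meas = 0.10   # "measured" value (illustrative)
c = 1.0         # toy NLO coefficient (assumption)

alpha_LO = O_meas                                         # LO: O = α
alpha_NLO = (-1 + (1 + 4 * c * O_meas) ** 0.5) / (2 * c)  # α + cα² = O

# A higher-order extraction returns a smaller coupling:
assert alpha_NLO < alpha_LO
print(round(alpha_LO, 4), round(alpha_NLO, 4))  # 0.1 0.0916
```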
\begin{figure}[htb]
\centering
\begin{picture}(200,120)(0,-10)
\LinAxis ( 0, 0)(200, 0)(6, 1, 3, 0, 0.5)
\Line ( 0,100)(200,100)
\LinAxis ( 0, 0)( 0,100)(5, 1,-3, 0, 0.5)
\LinAxis (200, 0)(200,100)(5, 1, 3, 0, 0.5)
\Text ( 33,-10)[]{QPM}
\Text ( 67,-10)[]{$\alpha_s^1$}
\Text (100,-10)[]{$\alpha_s^2$}
\Text (133,-10)[]{$\alpha_s^3$}
\Text (167,-10)[]{$\alpha_s^4$}
\Text (-10, 0)[]{0.8}
\Text (-10, 20)[]{0.9}
\Text (-10, 40)[]{1.0}
\Text (-10, 60)[]{1.1}
\Text (-10, 80)[]{1.2}
\Text (-10,100)[]{1.3}
\SetScale {0.2}
\SetWidth {2.5}
\SetScaledOffset(0,-800)
\DashLine( 1,1260)(999,1267){20}
\Vertex (167, 960){10}
\Line (167, 876)(167,1044)
\Vertex (333,1073){10}
\Line (333, 978)(333,1168)
\Vertex (500,1122){10}
\Line (500,1021)(500,1223)
\Vertex (667,1154){10}
\Line (667,1047)(667,1260)
\Vertex (833,1178){10}
\Line (833,1064)(833,1288)
\LongArrow(303,1073)(303,1103)
\LongArrow(637,1154)(637,1124)
\LongArrow(803,1178)(803,1128)
\end{picture}
\caption{\label{fig:EKplot}
The order-by-order comparison of data and theory for the Bjorken
sum-rule: the data points are corrected for QCD and the dashed line
is $g_A/g_V$. The arrows indicate the directions and relative
magnitudes of the shifts indicated in the text.}
\end{figure}
\section{Transverse Polarisation}
Transverse spin has many facets; I now turn to the status of PQCD
approaches to the long-standing puzzle of single-spin asymmetries and
also mention some recent developments in transversity. Inequalities
are important here too, as constraints on model builders' input
densities for predictive purposes \cite{Trans-Ineq} but again, being
technical in nature, I shall not discuss them further.
\subsection{Single-Spin Asymmetries}
A most interesting aspect of transverse spin is the large amount of
SSA data: measured effects reach the level of 40--50\% in a wide range
of processes.\cite{Bravar:1998x1} Ever since Kane
\etal.,\cite{Kane:1978nd} it has been realised that at twist 2 in LO
massless PQCD such effects are zero. At NLO, the effects due to
imaginary parts of loop diagrams are found to be at most of order 1\%.
However, since the pioneering work of Efremov and
Teryaev,\cite{Efremov:1985ip} it is now well understood that twist-3
effects naturally lead to such asymmetries. To calculate the SSA in
prompt-photon production Qiu and Sterman \cite{Qiu:1991pp} have
exploited their idea of taking the necessary imaginary part from
\emph{soft propagator poles} arising in extra partonic legs inherent
to twist-3 contributions. Since then Efremov
\etal.\cite{Efremov:1995dg} have performed calculations for the pion
asymmetry, as too have Qiu and Sterman,\cite{Qiu:1998ia} and Ji
\cite{Ji:1992x2} has examined the purely gluonic contributions. The
results are all very encouraging.
\subsection{Factorisation in Higher-Twist Amplitudes}
A difficulty in such calculations is the large number (several tens)
of PQCD diagrams encountered. Recently I have shown
\cite{Ratcliffe:1998pq} that, in the pole limit of interest, the
contributions simplify owing to a factorisation of the soft insertion
from the rest of the amplitude, see fig.~\ref{fig:qfactor}.
\begin{figure}[htb]
\centering
\begin{picture}(280,100)(0,120)
\multiput(0,110)(0,0){1}{
\Text ( 0, 62)[]{Pole}
\Text ( 0, 50)[]{Part}
\Line ( 90,100)( 90, 75)
\Gluon ( 20, 75)( 20, 35){-2}{6} \Gluon( 90, 75)( 90, 35){2}{6}
\Line ( 20, 35)( 20, 10) \Line ( 90, 35)( 90, 10)
\Line ( 20, 35)( 90, 35)
\DashLine( 75,100)( 75,10){7}
\Line ( 60, 72)( 63, 75) \Line ( 60, 78)( 63, 75)
\Text ( 60, 65)[]{$k$}
\Line (280,100)(280, 75)
\Gluon (210, 75)(210, 35){-2}{6} \Gluon(280, 75)(280, 35){2}{6}
\Line (210, 35)(210, 10) \Line (280, 35)(280, 10)
\Line (255, 75)(280, 75)
\Line (210, 35)(235, 35) \Line (255, 35)(280, 35)
\DashLine(245,100)(245,10){7}
}
\Line ( 20,210)( 20,185) \Gluon( 50,210)( 50,185){2}{4}
\Line ( 20,185)( 90,185)
\Vertex( 35,185){2}
\Text (115,165)[l]{$\displaystyle=
\qquad-i\pi\frac{k.\xi}{k.p}\quad\times$}
\Text ( 54,202)[]{$\quad x_g$}
\Line (210,210)(210,185)
\Line (210,185)(235,185)
\end{picture}
\caption{\label{fig:qfactor}
Representation of the amplitude factorisation in the case of a soft
external gluon line $x_g{\to}0$. The solid circle indicates the
propagator from which the imaginary piece is extracted, and $\xi$ is
the polarisation vector of the gluon entering the factorised
vertex.}
\end{figure}
The remaining $2{\to}2$ helicity amplitudes are known; thus,
calculation of any such process reduces to simple products of known
helicity amplitudes with the above ``insertion'' factors (including
modified colour factors). The factorisation described also leads to
more transparent expressions clarifying why large SSA's may be
expected: there is no reason for suppression (kinematic mismatch
\emph{etc.}), beyond their higher-twist nature.
\subsection{Unravelling Higher Twist in Single Spin Asymmetries}
A complication that has emerged is that the possible mechanisms for
producing such asymmetries are numerous (even when restricted to the
purely PQCD processes described, not to mention the problem of
intrinsic $k_T$ \cite{Murgia:1998x1}). Thus, it is now important to
analyse the possible processes in which SSA's are allowed and to
identify those with differing origins, in the hope of eliminating some
candidates and finally arriving at the true source. A step in this
direction has been taken by Boros \etal.,\cite{Boros} see
table~\ref{tab:Boros}, and an interesting discussion has also been
presented at this symposium by Murgia.\cite{Murgia:1998x1}
\begin{table}[htb] \scriptsize
\centering
\vskip-4pt
\caption{\label{tab:Boros}
Predictions for other SSA's under different hypotheses for the
origins of the pion asymmetry, from Boros
\etal.\protect\cite{Boros}}
\vskip4pt
\begin{tabular}{|c|c|c|c|c|}
\hline
& \multicolumn{4}{c|}{if $A_N$ observed in $p(\uparrow)+p\to\pi+X$
originates from \rule[-1ex]{0pt}{4ex}...}
\\
\cline{2-5}
\parbox[c][10ex][c]{32ex}{\centering process}
& \parbox[c][10ex][c]{15ex}{\centering quark distribution function}
& \parbox[c][10ex][c]{15ex}{\centering elementary scattering process}
& \parbox[c][10ex][c]{15ex}{\centering quark fragmentation function }
& \parbox[c][10ex][c]{15ex}{\centering orbital motion and surface effect}
\\
\hline
\parbox[c][9ex][c]{32ex}{\centering $l + p(\uparrow) \to l +
\left(\begin{array}{c}\pi^\pm\\ K^+ \end{array}\right) + X$}
& \parbox[c][9ex][c]{15ex}{\centering $A_N=0$ \\ wrt jet axis}
& \parbox[c][9ex][c]{15ex}{\centering $A_N=0$ \\ wrt jet axis}
& \parbox[c][9ex][c]{15ex}{\centering $A_N\neq0$ \\ wrt jet axis}
& \parbox[c][9ex][c]{15ex}{\centering $A_N=0$ \\ wrt jet axis}
\\
\cline{2-5}
\parbox[c][9ex][c]{32ex}{\centering current fragmentation region\\
for large $Q^2$ and large $x_B$}
& \parbox[c][9ex][c]{15ex}{\centering $A_N\neq0$ \\ wrt $\gamma^\star$ axis}
& \parbox[c][9ex][c]{15ex}{\centering $A_N=0$ \\ wrt $\gamma^\star$ axis}
& \parbox[c][9ex][c]{15ex}{\centering $A_N\neq0$ \\ wrt $\gamma^\star$ axis}
& \parbox[c][9ex][c]{15ex}{\centering $A_N=0$ \\ wrt $\gamma^\star$ axis}
\\
\hline
\parbox[c][13ex][c]{32ex}{\centering $l + p(\uparrow) \to l +
\left(\begin{array}{c} \pi^\pm\\ K^+ \end{array}\right) + X $ \\[1mm]
target fragmentation region\\
for large $Q^2$ and large $x_B$}
& \parbox[c][13ex][c]{15ex}{\centering $A_N\neq0$}
& \parbox[c][13ex][c]{15ex}{\centering $A_N=0$}
& \parbox[c][13ex][c]{15ex}{\centering $A_N\neq0$}
& \parbox[c][13ex][c]{15ex}{\centering $A_N=0$}
\\
\hline
\parbox[c][12ex][c]{32ex}{\centering $p + p(\uparrow) \to
\left(\begin{array}{c} l\bar{l}\\ W^\pm \end{array}\right) + X$\\[1mm]
$p(\uparrow)$ fragmentation region}
& \parbox[c][12ex][c]{15ex}{\centering $A_N\neq0$}
& \parbox[c][12ex][c]{15ex}{\centering $A_N\approx0$}
& \parbox[c][12ex][c]{15ex}{\centering $A_N=0$}
& \parbox[c][12ex][c]{15ex}{\centering $A_N\neq0$}
\\
\hline
\end{tabular}
\end{table}
\subsection{Transversity}
Despite the advantage of being twist-2 and therefore unsuppressed,
transversity cannot contribute to inclusive DIS as it requires quark
helicity flip. However, Jaffe \etal.\cite{Jaffe:1998hf} have recently
proposed its measurement via twist-two quark-interference
fragmentation functions, see fig.~\ref{fig-jjt1}.
\begin{figure}[htb]
\centering
\epsfig{figure=fig-jjt1.eps,width=55mm}
\caption{\label{fig-jjt1}
Diagram for double-pion production via $\sigma$, $\rho$ in DIS.}
\end{figure}
This is reminiscent of the so-called handedness property but appears
more straightforward to interpret.
Final-state interactions between, \emph{e.g.}, $\pi^+\pi^-$, $K\overline{K}$, or $\pi{}K$, produced
in the current fragmentation region in DIS on a transversely polarised
nucleon may probe transversity. The point is that the pions may be
produced through the intermediate resonant states
$\sigma[(\pi\pi)^{I=0}_{l=0}]$ and
$\rho[(\pi\pi)^{I=1}_{l=1}]$. Two new interference
fragmentation functions can then be defined: $\hat{q}_I$,
$\delta\hat{q}_I$; the subscript $I$ stands for interference. The
final asymmetry, requiring no model input, has the following form:
\cite{Jaffe:1998hf}
\begin{eqnarray}
{\cal A}_{\bot\top}
\equiv
\frac{d\sigma_\bot-d\sigma_\top}{d\sigma_\bot+d\sigma_\top}
&=&
-\frac{\pi}{4} \frac{\sqrt6(1-y)}{[1+(1-y)^2]}
\cos\phi \sin\delta_0 \sin\delta_1 \sin\left(\delta_0-\delta_1\right)
\nonumber\\
&&
\hspace*{1ex}
\times \;
\frac{\sum_a e_a^2 \delta q^a(x)\, \delta \hat{q}_I^a(z)}
{\sum_a e^2_a q_a(x)
\left[ \sin^2\delta_0 \hat{q}_0^a(z)
+ \sin^2\delta_1 \hat{q}_1^a(z) \right]},
\end{eqnarray}
where $\hat{q}_0$ and $\hat{q}_1$ are spin-averaged fragmentation
functions for the intermediate $\sigma$ and $\rho$ states, respectively.
Note that only the target need be polarised; the asymmetry is obtained
either by flipping the nucleon spin or via the azimuthal asymmetry.
This approach can also be extended to generate a (double) helicity
asymmetry, which could probe valence-quark spin
densities.\cite{Jaffe:1998pv}
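To gauge the size of the kinematic and phase-shift prefactor of this asymmetry, here is a numeric sketch; the $\pi\pi$ phase-shift values used are purely illustrative (hypothetical), not taken from data:

```python
import math

# Magnitude of the prefactor at cos φ = 1:
# (π/4) √6 (1-y)/[1+(1-y)²] sin δ0 sin δ1 sin(δ0 - δ1)
def prefactor(y, delta0, delta1):
    kin = (1 - y) / (1 + (1 - y) ** 2)
    return (math.pi / 4) * math.sqrt(6) * kin \
        * math.sin(delta0) * math.sin(delta1) * math.sin(delta0 - delta1)

# Illustrative kinematics and phases (hypothetical):
val = prefactor(0.5, math.pi / 2, math.pi / 4)
print(f"{val:.2f}")  # 0.38
```

Even with modest phases, the prefactor is not intrinsically small; the overall size of the asymmetry is then governed by the ratio of the transversity-weighted to the spin-averaged densities.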
\section{Orbital Angular Momentum}
It has long been well understood that angular-momentum conservation in
parton splitting processes implies non-trivial $Q^2$ evolution of
partonic orbital angular momentum (OAM),\cite{Ratcliffe:1987dp} in
line with that of the gluon spin. Ji \etal.\cite{Ji:1996x1} have shown
that this leads to asymptotic sharing of total angular momentum
identical to that of the linear momentum fraction. Recently, Teryaev
\cite{Teryaev:1998x2} has rederived the PQCD evolution equations for
OAM in a semi-classical approach in terms of the spin-averaged and
spin-dependent kernels.
Generation of OAM, balancing the gluon spin, accompanies $q{\to}qg$
splitting. The net effect is obtained by subtracting the probabilities
of a gluon with negative and positive helicities. The same combination
(modulo sign) appears in the $gq$ spin-dependent kernel, with momentum
fraction $1{-}x$:
\begin{equation}
P^{LS}_{qq}(x) + P^{LS}_{gq}(1-x) = -\Delta P_{gq}(1-x).
\end{equation}
The trick is that the ratio of the quark and gluon OAM can be found
via classical reasoning. Suppose that before splitting the quark momentum
has only a $z$ component and that the final parton momenta lie in the $x$--$z$
plane. By momentum conservation, the $x$ components of $q$ and $g$
momenta are equal (up to a sign) and the $z$ components of OAM are
thus
\begin{equation}
L_z^q= P_x r^q_y, \qquad L_z^g= -P_x r^g_y,
\end{equation}
where the spatial non-locality of quark and gluon production,
$r^{q,g}$, has been introduced. OAM $x$ components are also generated:
\begin{equation}
L_x^q= -P^q_z r^q_y, \qquad L_x^g= -P^g_z r^g_y.
\end{equation}
Conservation of the $x$ component of OAM, $L_x^q=-L_x^g$, leads to
\begin{equation}
\frac{r^q_y}{r^g_y} = -\frac{P_z^g}{P_z^q},
\end{equation}
and substitution into the $L_z$ eq.\ finally gives the partition:
\begin{equation}
\frac{L_z^q}{L_z^g} = \frac{P_z^g}{P_z^q} = \frac{1-x}{x},
\end{equation}
precisely as Ji \etal.\cite{Ji:1996x1} found by explicit calculation.
Notice also that the whole problem of defining the relevant operators
has been neatly circumvented.
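The classical argument above can be checked numerically; this is a minimal sketch in which $P$, $P_x$ and the gluon non-locality $r^g_y$ are arbitrary illustrative inputs, and the partition $(1-x)/x$ emerges independently of them.

```python
# Sketch of the classical OAM-partition argument (illustrative numbers).
def oam_partition(x, P=1.0, Px=0.3, r_g_y=1.0):
    """Given momentum fraction x, transverse kick Px and an arbitrary
    gluon non-locality r_g_y, solve L_x conservation for r_q_y and
    return the ratio L_z^q / L_z^g."""
    Pz_q, Pz_g = x * P, (1.0 - x) * P
    # L_x^q = -L_x^g  =>  -Pz_q * r_q_y = Pz_g * r_g_y
    r_q_y = -(Pz_g / Pz_q) * r_g_y
    Lz_q = Px * r_q_y
    Lz_g = -Px * r_g_y
    return Lz_q / Lz_g

# The ratio reproduces (1-x)/x for any choice of the arbitrary inputs.
```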
\section{Conclusions}
In conclusion, all theoretical aspects of spin physics continue
to benefit from active interest and study. Moreover, the reward for the
effort put into this sector is an ever-deeper understanding of
hadronic structure (of which spin may indeed represent one of the few real keys),
while the various phenomenological puzzles are steadily coming under
control.
Areas where there is more to be learnt are transverse spin (including
transversity) and orbital angular momentum. The former, being linked
to hadronic mass scales, may provide important clues to the nature of
chiral-symmetry breaking, while there is also still much to explain of
the known phenomenology, and transversity may yet have a r\^ole in
single-spin asymmetries. OAM is as intriguing as it is elusive
experimentally; its contribution to the proton spin is yet to be measured
and, were it to be found large, one would like to understand the
implications for the standard SU(6) picture of the nucleon.
\section*{References}
\section{\@startsection {section}{1}{\z@}{-3.5ex plus -1ex minus
-.2ex}{2.3ex plus .2ex}{\large\bf}}
\def\subsection{\@startsection{subsection}{2}{\z@}{-3.25ex plus -1ex minus
-.2ex}{1.5ex plus .2ex}{\normalsize\bf}}
\makeatother
\makeatletter
\def\theequation{\arabic{section}.\arabic{equation}}
\newcommand{\sect}[1]{\setcounter{equation}{0}\section{#1}}
\@addtoreset{equation}{section}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\makeatother
\def\baselinestretch{1.20}
\newcommand{\eqn}[1]{(\ref{#1})}
\newcommand{\ft}[2]{{\textstyle\frac{#1}{#2}}}
\newsavebox{\uuunit}
\sbox{\uuunit}
{\setlength{\unitlength}{0.825em}
\begin{picture}(0.6,0.7)
\thinlines
\put(0,0){\line(1,0){0.5}}
\put(0.15,0){\line(0,1){0.7}}
\put(0.35,0){\line(0,1){0.8}}
\multiput(0.3,0.8)(-0.04,-0.02){12}{\rule{0.5pt}{0.5pt}}
\end {picture}}
\newcommand {\unity}{\mathord{\!\usebox{\uuunit}}}
\newcommand {\Rbar} {{\mbox{\rm$\mbox{I}\!\mbox{R}$}}}
\newcommand{K\"ahler}{K\"ahler}
\def{\overline \imath}{{\overline \imath}}
\def{\overline \jmath}{{\overline \jmath}}
\def{\rm Im ~}{{\rm Im ~}}
\def{\rm Re ~}{{\rm Re ~}}
\def\relax{\rm I\kern-.18em P}{\relax{\rm I\kern-.18em P}}
\def{\rm arccosh ~}{{\rm arccosh ~}}
\def\bf E_{7(7)}{\bf E_{7(7)}}
\begin{document}
\font\cmss=cmss10 \font\cmsss=cmss10 at 7pt
\def\twomat#1#2#3#4{\left(\matrix{#1 & #2 \cr #3 & #4}\right)}
\def\inbar{\vrule height1.5ex width.4pt depth0pt}
\def\relax\,\hbox{$\inbar\kern-.3em{\rm C}$}{\relax\,\hbox{$\vrule height1.5ex width.4pt depth0pt\kern-.3em{\rm C}$}}
\def\relax\,\hbox{$\inbar\kern-.3em{\rm G}$}{\relax\,\hbox{$\vrule height1.5ex width.4pt depth0pt\kern-.3em{\rm G}$}}
\def\relax{\rm I\kern-.18em B}{\relax{\rm I\kern-.18em B}}
\def\relax{\rm I\kern-.18em D}{\relax{\rm I\kern-.18em D}}
\def\relax{\rm I\kern-.18em L}{\relax{\rm I\kern-.18em L}}
\def\relax{\rm I\kern-.18em F}{\relax{\rm I\kern-.18em F}}
\def\relax{\rm I\kern-.18em H}{\relax{\rm I\kern-.18em H}}
\def\relax{\rm I\kern-.17em I}{\relax{\rm I\kern-.17em I}}
\def\relax{\rm I\kern-.18em N}{\relax{\rm I\kern-.18em N}}
\def\relax{\rm I\kern-.18em P}{\relax{\rm I\kern-.18em P}}
\def\relax\,\hbox{$\inbar\kern-.3em{\rm Q}$}{\relax\,\hbox{$\vrule height1.5ex width.4pt depth0pt\kern-.3em{\rm Q}$}}
\def\relax\,\hbox{$\inbar\kern-.3em{\rm 0}$}{\relax\,\hbox{$\vrule height1.5ex width.4pt depth0pt\kern-.3em{\rm 0}$}}
\def\relax{\rm I\kern-.18em K}{\relax{\rm I\kern-.18em K}}
\def\relax\,\hbox{$\inbar\kern-.3em{\rm G}$}{\relax\,\hbox{$\vrule height1.5ex width.4pt depth0pt\kern-.3em{\rm G}$}}
\font\cmss=cmss10 \font\cmsss=cmss10 at 7pt
\def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}}
\def\ZZ{\relax\ifmmode\mathchoice
{\hbox{\cmss Z\kern-.4em Z}}{\hbox{\cmss Z\kern-.4em Z}}
{\lower.9pt\hbox{\cmsss Z\kern-.4em Z}}
{\lower1.2pt\hbox{\cmsss Z\kern-.4em Z}}\else{\cmss Z\kern-.4em
Z}\fi}
\def\relax{\rm 1\kern-.35em 1}{\relax{\rm 1\kern-.35em 1}}
\def{\rm d}\hskip -1pt{{\rm d}\hskip -1pt}
\def{\rm Re}\hskip 1pt{{\rm Re}\hskip 1pt}
\def{\rm Tr}\hskip 1pt{{\rm Tr}\hskip 1pt}
\def{\rm i}{{\rm i}}
\def{\rm diag}{{\rm diag}}
\def\sch#1#2{\{#1;#2\}}
\def\relax{\rm 1\kern-.35em 1}{\relax{\rm 1\kern-.35em 1}}
\font\cmss=cmss10 \font\cmsss=cmss10 at 7pt
\def\a{\alpha} \def\b{\beta} \def\d{\delta}
\def\e{\epsilon} \def\c{\gamma}
\def\G{\Gamma} \def\l{\lambda}
\def\L{\Lambda} \def\s{\sigma}
\def\cA{{\cal A}} \def\cB{{\cal B}}
\def\cC{{\cal C}} \def\cD{{\cal D}}
\def\cF{{\cal F}} \def\cG{{\cal G}}
\def\cH{{\cal H}} \def\cI{{\cal I}}
\def\cJ{{\cal J}} \def\cK{{\cal K}}
\def\cL{{\cal L}} \def\cM{{\cal M}}
\def\cN{{\cal N}} \def\cO{{\cal O}}
\def\cP{{\cal P}} \def\cQ{{\cal Q}}
\def\cR{{\cal R}} \def\cV{{\cal V}}\def\cW{{\cal W}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\let\la=\label \let\ci=\cite \let\re=\ref
\def\crcr\noalign{\vskip {8.3333pt}}{\crcr\noalign{\vskip {8.3333pt}}}
\def\widetilde{\widetilde}
\def\overline{\overline}
\def\us#1{\underline{#1}}
\def\relax{{\rm I\kern-.18em E}}{\relax{{\rm I\kern-.18em E}}}
\def{\cal E}{{\cal E}}
\def{\cR^{(3)}}{{{\cal R}} \def\cV{{\cal V}}\def\cW{{\cal W}^{(3)}}}
\def\relax{{\rm I}\kern-.18em \Gamma}{\relax{{\rm I}\kern-.18em \Gamma}}
\def\IA{\IA}
\def{\rm i}{{\rm i}}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\def\nonumber{\nonumber}
\begin{titlepage}
\setcounter{page}{0}
\begin{flushright}
SISSA REF 130/98/EP \\
SWAT/211
\end{flushright}
\vskip 26pt
\begin{center}
{\Large \bf N=8 BPS black holes preserving 1/8 supersymmetry}
\vskip 20pt
{\large M. Bertolini$^a$, P. Fr\`e$^b$ and M. Trigiante$^c$}
\vskip 20pt
{\it $^a$International School for Advanced Studies ISAS-SISSA and INFN \\
Sezione di Trieste, Via Beirut 2-4, 34013 Trieste, Italy}
\vskip 5pt
{\it $^b$Dipartimento di Fisica Teorica, Universit\`a di Torino and INFN \\
Sezione di Torino, Via P. Giuria 1, 10125 Torino, Italy}
\vskip 5pt
{\it $^c$Department of Physics, University of Wales Swansea, Singleton Park \\
Swansea SA2 8PP, United Kingdom}
\end{center}
\begin{abstract}
In the context of $N=8$ supergravity we consider BPS black--holes
that preserve $1/8$ supersymmetry. It was shown in a previous paper
that, modulo $U$--duality transformations of $E_{7(7)}$, the most
general solution of this type can be reduced to a black--hole of the
STU model. In this paper we analyze this solution in detail,
considering in particular its embedding in one of the possible
Special K\"ahler manifolds compatible with the consistent truncations
to $N=2$ supergravity, namely the moduli space of the $T^6/{\ZZ ^3}$
orbifold: $SU(3,3)/SU(3)\times U(3)$.
This construction requires a crucial use of the Solvable Lie
Algebra formalism. Once the group-theoretical analysis is done,
starting from a static, spherically symmetric ansatz, we find an exact
solution for all the scalars (both dilaton- and axion-like) and for the
gauge fields, together with their already known charge-dependent
fixed values, which yield a $U$--duality invariant entropy.
We also give a complete translation dictionary between the Solvable
Lie Algebra and the Special K\"ahler formalisms, in order to make
comparison with other papers on similar issues more immediate.
Although the explicit solution is given in a simplified case, where
the equations turn out to be more manageable, it encodes all the
features of the more general one: it has non-vanishing entropy and the
scalar fields have a non-trivial radial dependence.
\end{abstract}
\vskip 30pt
\begin{flushleft}
{\footnotesize
e-mail: teobert@sissa.it, fre@to.infn.it, m.trigiante@swansea.ac.uk}
\end{flushleft}
\vspace{2mm} \vfill \hrule width 3.cm
\vskip 0.2cm
{\footnotesize
Supported in part by EEC under TMR contracts ERBFMRX--CT96--0045 and
ERBFMRX--CT96--0012}
\vskip 20pt
\end{titlepage}
\section{Introduction}
\label{introgen}
In the last three years there has been a renewed interest in the black--hole
solutions of $D=4$ supergravity theories and, more generally, in black $p$--brane
solutions of supergravity theories in higher dimensions. Among these solutions,
of particular interest in the study of superstring dualities are those preserving
a fraction of the original supersymmetries, which have been identified with the
BPS saturated perturbative and non--perturbative states of superstring theory.
This interpretation \cite{duffrep,kstellec} has found strong support with the
advent of D-branes \cite{pol}, which allow the direct construction of the BPS
states. Indeed, although these are solutions of the classical low--energy
supergravity theory, their masses, which saturate the Bogomol'nyi bound (BPS
saturated solutions), are protected from quantum corrections when enough
supersymmetry is preserved. This property promotes them to solutions of the
full quantum theory, and they thus represent an important tool for probing the
non--perturbative regime of superstring theories.
\par
This paper investigates the most general BPS saturated black-hole solution of $D=4$ supergravity
preserving 1/8 of the $N=8$ supersymmetry, completing a programme started in \cite{mp1,mp2}.
The basic result of \cite{mp1} was to show that the most general $1/8$ black-hole solution of
$N=8$ supergravity is a $STU$ model solution, namely a solution
where only 6 scalar fields (3
dilaton-like and 3 axion-like) and 8 charges (4 electric and 4 magnetic)
are switched on. This
solution is the most general modulo $U$--duality transformations.
As is well known \cite{huto2,huto1}, the quantum $U$--duality group is the
discrete version of the isometry group $U$ of the scalar coset manifold $U/H$
of $N=8$ supergravity. Once a solution is found, acting on it with an
$H=SU(8)$ transformation generates the general charged black--hole, and
acting further with a $U=E_{7(7)}$ transformation generates the most general
solution, namely that with fully general asymptotic values of the scalar fields.
In the context of $N=8$ supergravity one of the results of
\cite{mp1} was the identification of the minimal content
of dynamical fields and
charges that a $1/8$ black--hole solution should have,
in order for its entropy to
be non vanishing (regular solution).
Nevertheless in that paper only a particular dilatonic
solution was worked out explicitly.
This solution had zero entropy since not only the dilatons but also the
axions are
part of the minimal set of fields necessary to describe a regular
$1/8$ black--hole. In \cite{kal1} a
very special solution of this kind was found, namely the double-extreme one,
in which all scalar fields are taken to be constant and equal to the fixed values they anyhow must
get at the horizon \cite{fer}. In the present paper we will consider a more general solution,
namely a dynamical solution (i.e. {\em not} double-extreme) and with regular horizon (i.e. with
non-vanishing entropy). The solution, corresponding to a specific configuration of scalar fields
and charges, is obtained by performing $U$ and $H$ transformations in such a way that the
quantized and central charges are put into the normal frame.
Other regular solutions have been considered in various other papers, such as \cite{stu+,stu-}.
The aim of the present paper is, however, to consider the BPS generating solution, that is,
the one depending on the least number of charges, from which the {\it whole} $U$--duality
orbit may be reconstructed through the action of the $U$--duality group.
A resum\'e of the essential properties of
the generating BPS solutions in arbitrary dimensions $4\leq D\leq 9$ can be found in \cite{hull}.
In the context of toroidally compactified type II supergravity, the only regular black-hole
solutions are the 1/8 supersymmetry preserving ones while 1/2 and 1/4 black-holes, whose general
form has been completely classified in \cite{mp2}, have zero horizon area.
As has been
extensively explained in \cite{mp1}, and as will be summarized in the following,
a $1/8$ supersymmetry
preserving $N=8$ solution can be seen as a solution within
a consistent truncation $N=8 \, \to \, N=2$ of the supergravity theory.
In this truncation one needs specific choices of both the Hyperk\"ahler
and the Special K\"ahler manifold, describing the hyper
and vector multiplets, respectively.
Following
the same lines of \cite{mp1,mp2} we will consider one of the possible non-trivial $N=2$ embeddings
of the $STU$ model solution. This will be carried on with the essential aid of the Solvable Lie
Algebra (SLA from now on) approach to supergravity theories, which is
particularly useful to define a general method for the systematic
study of BPS saturated black-hole solutions of supergravity.
For a review on the solvable Lie algebra method see \cite{mario}.
We give the details on our use of the Solvable Lie algebra in
Appendix A.
\par
The BPS saturated states are characterized by the property
that they preserve a fraction of the
original supersymmetries. This means that there is a suitable projection operator
$\relax{\rm I\kern-.18em P}^2_{BPS} =\relax{\rm I\kern-.18em P}_{BPS}$ acting on the supersymmetry charge $Q_{SUSY}$, such that:
\begin{equation}
\left(\relax{\rm I\kern-.18em P}_{BPS} \,Q_{SUSY} \right) \, \vert \, \mbox{BPS }\rangle = 0\, .
\label{bstato}
\end{equation}
Since the supersymmetry transformation rules are linear in the first
derivatives of the fields, eq.(\ref{bstato}) is actually a {\it system of first
order differential equations} that must be combined with the second
order field equations of supergravity. The solutions common to both
systems of equations are the classical BPS saturated states.
In terms of the gravitino and dilatino physical fields $\psi_{A\mu}$,
$\chi_{ABC}$, $A,B,C=1,\ldots,8$, equation (\ref{bstato})
is equivalent to
\begin{equation}
\delta_\epsilon \psi_{A\mu}=\delta_\epsilon \chi_{ABC}=0
\label{kse}
\end{equation}
whose solution is given in terms of the Killing spinor $\epsilon_A(x)$
subject to the supersymmetry preserving condition
\begin{eqnarray}
\gamma^0 \,\epsilon_{A} & =& \mbox{i}\, \relax\,\hbox{$\inbar\kern-.3em{\rm C}$}_{AB}
\, \epsilon^{B} \quad ; \quad A,B=1,\dots ,n_{max} \nonumber \\
\epsilon_{A} & =& 0; \quad A=n_{max}+1,\dots ,8 \nonumber
\end{eqnarray}
where $n_{max}$ is twice the number of unbroken supersymmetries. Eq.(\ref{kse}) has the
essential
feature of breaking the original $SU(8)$ automorphism group of the supersymmetry algebra to the
subgroup $\hat H=Usp( \,n_{max})\times SU(8-\,n_{max})\times U(1)$.
Eqs.(\ref{kse}) will then provide different conditions on scalar
fields transforming in different representations of $\hat H$.
In other words the scalar manifold $E_{7(7)}/SU(8)$ of the original $N=8$ theory will decompose
into submanifolds spanned by scalar fields on which the Killing spinor
equations impose different kinds of conditions. This decomposition, as was shown in \cite{mp1},
cannot be described as a decomposition of the isometry group $E_{7(7)}$ into the isometry groups
of the submanifolds, but may be described by using the SLA formalism, i.e. expressing
$E_{7(7)}/SU(8)$
and its submanifolds as group manifolds generated by suitable solvable Lie algebras. In this
description the scalars are, as it will be explained more in detail in the sequel, parameters
of the generating solvable algebra, and, according to the decomposition of $SU(8)$ into $\hat H$,
the solvable algebra $Solv_7$ generating $E_{7(7)}$ will decompose into the direct sum of the
solvable algebras generating the submanifolds whose scalar fields transform in representations
of $\hat H$.
In the case at hand, namely a $1/8$ supersymmetry preserving solution, we have $n_{max} = 2$
and
$Solv_7$ must be decomposed according to the decomposition of the isotropy subgroup:
$SU(8) \longrightarrow SU(2)\times U(6)$. We showed in \cite{mp1} that the corresponding
decomposition of the solvable Lie algebra is the following one:
\begin{equation}
Solv_7 = Solv_3 \, \oplus \, Solv_4
\label{7in3p4}
\end{equation}
where the rank three Lie algebra $Solv_3$ defined above describes the
$30$--dimensional scalar sector of $N=6$ supergravity, while the rank four
solvable Lie algebra $Solv_4$ contains the remaining forty scalars
belonging to $N=6$ spin $3/2$ multiplets. Both manifolds $ \exp \left[ Solv_3 \right]$ and
$ \exp \left[ Solv_4 \right]$ also have an $N=2$ interpretation, since we have:
\begin{eqnarray}
\exp \left[ Solv_3 \right] & =& \mbox{homogeneous special K\"ahler}
\nonumber \\
\exp \left[ Solv_4 \right] & =& \mbox{homogeneous quaternionic}
\label{pincpal}
\end{eqnarray}
so that the first manifold can describe the interaction of $15$ vector multiplets, while the
second can describe the interaction of $10$ hypermultiplets. Indeed if we decompose the $N=8$
graviton multiplet in $N=2$ representations we find:
\begin{equation}
\mbox{N=8} \, \mbox{\bf spin 2} \,\stackrel{N=2}{\longrightarrow}\,
\mbox{\bf spin 2} + 6 \times \mbox{\bf spin 3/2} + 15 \times \mbox
{\bf vect. mult.}
+ 10 \times \mbox{\bf hypermult.}
\label{n8n2decompo}
\end{equation}
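The decomposition above is fixed by a simple count of degrees of freedom; the following is a bare arithmetic check of the numbers quoted in the text (two real scalars per vector multiplet, four per hypermultiplet).

```python
# Arithmetic check of the N=8 -> N=2 multiplet decomposition quoted above.
n_vec, n_hyp = 15, 10          # vector multiplets and hypermultiplets

dim_solv3 = 2 * n_vec          # one complex scalar per vector multiplet
dim_solv4 = 4 * n_hyp          # four real scalars per hypermultiplet

# Solv_3 carries the 30 scalars of the special Kahler sector,
# Solv_4 the remaining 40; together they fill E7(7)/SU(8).
assert dim_solv3 == 30
assert dim_solv4 == 40
assert dim_solv3 + dim_solv4 == 133 - 63   # dim E7(7) - dim SU(8) = 70
```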
In order to end up with a consistent $N=2$ truncation one has to consider $K \subset Solv_3$
and $Q \subset Solv_4$ such that $[K,Q]=0$. The simplest case is to take $K=Solv_3$ and
$Q=0$, while the first non-trivial one corresponds to taking a one-dimensional quaternionic
manifold for $Q$ and the corresponding compatible Special K\"ahler manifold for $K$, which
was shown in \cite{mp1} to be $SU(3,3)/ SU(3) \times U(3)$. This is the case we will consider.
In \cite{mp1}, via a group-theoretical investigation of the structure of eq.~(\ref{kse}) and
of the above decomposition, the answer was found to the question of {\em how many scalar
fields are essentially dynamical, namely cannot be set to constants up to U--duality
transformations}.
Introducing the decomposition \eqn{7in3p4}, it was found that the $40$ scalars
belonging to $Solv_4$ are constants, independent of the radial variable $r$. Only the $30$ scalars
$64$ of the scalar fields are actually constant while $6$ are dynamical. Moreover $48$
charges are annihilated leaving $8$ non-zero charges transforming in the representation
$(2,2,2)$ of $[Sl(2,\relax{\rm I\kern-.18em R})]^3$. Up to $U$--duality transformations the most general $N=8$ black-hole
is actually an $N=2$ black--hole corresponding to a very specific choice of the special
K\"ahler manifold, namely $ \exp[ Solv_3 ]$ as in eq. (\ref{pincpal}). More precisely, the main
result of \cite{mp1} is that the most general $1/8$ black--hole solution of $N=8$ supergravity
is related to the group $[SL(2,\relax{\rm I\kern-.18em R})]^3$, namely the general solution is actually
determined by the $STU$ model studied in \cite{kal1} and based on the solvable subalgebra:
\begin{equation}
Solv \left( \frac{SL(2,\relax{\rm I\kern-.18em R})^3}{U(1)^3} \right) \, \subset \,
Solv \left( \frac{SU(3,3)}{SU(3) \times U(3)} \right)
\label{rilevanti}
\end{equation}
\par
The real parts of the 3 complex scalar fields parametrizing $[SL(2,\relax{\rm I\kern-.18em R})]^3$ correspond to the
three Cartan generators of $Solv_3$ and have the physical interpretation of radii of the torus
compactification from $D=10$ to $D=4$. The imaginary parts of these complex fields are
generalised theta angles.
\par
The paper is organized as follows: in section 2 we give the general structure
of the 1/8 SUSY preserving solution in the SLA context as an $STU$ model solution
embedded in $SU(3,3)/SU(3)\times U(3)$. In section 3 we write down in an algebraic
way the Killing spinor equations using the SLA formalism and show how they match
those obtained via the more familiar Special K\"ahler formalism. In section 4 we
discuss the structure and the main properties of the most general solution, while in section
5, in order to provide a concrete and more manageable example, we give the explicit solution in
the simplified case $S=T=U$. Although simpler, this solution encodes all the non-trivial features
of the most general one. Section 6 contains some concluding remarks.
\section{Embedding in the $N=8$ theory and solvable Lie algebras}
As previously emphasized, the most general $1/8$ black--hole solution
of $N=8$ supergravity
is, up to $U$--duality transformations, a solution of an $STU$ model suitably
embedded in the original $N=8$ theory. Therefore, in dealing with the $STU$ model we would like
to keep track of this embedding. To this end, we shall use, as anticipated, the mathematical tool
of SLAs, which in general provides a suitable and simple description of the embedding of a
supergravity theory in a larger one. The SLA formalism is very useful, first, in order to give a
geometrical and fairly direct characterization of the different dynamical scalar fields belonging
to the solution. Secondly, it enables one to write down the somewhat heavy first order differential
system of equations for all the fields and to compute all the geometrical quantities appearing in
the effective supergravity theory in a clear and direct way.
Instead of considering the $STU$ model embedded in the whole $N=8$ theory with scalar manifold
${\cal M}=E_{7(7)}/SU(8)$, it suffices to focus on its $N=2$ truncation with scalar manifold
${\cal M}_{T_6/Z_3}=[SU(3,3)/SU(3)\times U(3)]\times {\cal M}_{Quat}$ which describes the
classical
limit of type $IIA$ Supergravity compactified on $T_6/Z_3$, ${\cal M}_{Quat}$ being the quaternionic
manifold $SO(4,1)/SO(4)$ describing $1$ hyperscalar. Within this latter simpler
model we are going to construct the $N=2$ $STU$ model as a consistent truncation.
The embedding of the $STU$ scalar manifold ${\cal M}_{STU}=(SL(2,\relax{\rm I\kern-.18em R})/U(1))^3$
inside ${\cal M}_{T_6/Z_3}$ and the latter within ${\cal M}$ is described in detail in terms of SLA
in \cite{mp1}. In this paper
it was shown that, up to $H=SU(8)$ transformations, the $N=8$ central charge,
which is an $8\times 8$ antisymmetric complex matrix, can always be brought to
its {\it normal} form, in which it is skew-diagonal with complex eigenvalues $Z,Z_i$, $i=1,2,3$
($|Z|>|Z_i|$). In order to do this one needs to make a suitable
$48$--parameter $SU(8)$ transformation on the central charge.
This transformation may be seen as
the result of a $48$--parameter $E_{7(7)}$ duality
transformation on the $56$ dimensional charge vector and on the
$70$ scalars which, in the expression of the central charge,
sets to zero $48$ scalars (24 vector scalars and 24 hyperscalars
from the $N=2$ point of view) and $48$ charges.
Taking into account that there are 16 scalars parametrizing
the submanifold $SO(4,4)/SO(4)\times SO(4)$, $SO(4,4)$ being the centralizer
of the normal form, on which the eigenvalues of the central charge do not depend at all, the central
charge, in its normal form, will depend only on the $6$
scalars and $8$ charges defining an $STU$ model.
The isometry group of ${\cal M}_{STU}$ is $[SL(2,\relax{\rm I\kern-.18em R})]^3$, which is the
{\it normalizer} of the normal form, i.e.
the residual $U$--duality which can still act non trivially
on the $6$ scalars and $8$
charges while keeping the central
charge skew diagonalized. As we shall see, the $6$ scalars of the
$STU$ model consist of $3$ axions $a_i$ and $3$ dilatons $p_i$,
whose exponential
$\exp{p_i}$ will be denoted by $-b_i$.\par
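The reduction to the normal frame described above rests on a counting argument; the following bare arithmetic check reproduces the numbers in the text ($48$ scalars gauge-fixed by $SU(8)$, $16$ spectators in $SO(4,4)/SO(4)\times SO(4)$, $8$ surviving charges).

```python
# Counting behind the normal-frame reduction described in the text.
def dim_so(n):
    # dim SO(p,q) = dim SO(p+q) = n(n-1)/2
    return n * (n - 1) // 2

removed_by_su8 = 48                       # scalars set to zero by the 48-parameter rotation
spectators = dim_so(8) - 2 * dim_so(4)    # SO(4,4)/SO(4)xSO(4): 28 - 12 = 16
stu_scalars = 70 - removed_by_su8 - spectators

assert spectators == 16
assert stu_scalars == 6                   # 3 dilatons p_i + 3 axions a_i
assert 56 - 48 == 2 * 2 * 2               # 8 charges filling the (2,2,2) of SL(2,R)^3
```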
In the framework of the $STU$ model, the central charge eigenvalues $Z(a_i,b_i,p^\Lambda,q_\Lambda)$
and $Z_i(a_i,b_i,p^\Lambda,q_\Lambda)$ are, respectively,
the local realization on moduli space of
the $N=2$ supersymmetry algebra central charge and of the $3$
{\it matter} central charges associated with the $3$ matter vector fields.
The BPS condition for a $1/8$ black--hole is that the ADM mass should equal
the modulus of the central charge:
\begin{equation}
M_{ADM}=|Z(a_i,b_i,p^\Lambda,q_\Lambda)|.
\end{equation}
At the horizon the field dependent central charge $|Z|$ flows to its
minimum value:
\begin{eqnarray}
|Z|_{min}(p^\Lambda,q_\Lambda)&=&
|Z(a_i^{fix},b_i^{fix},p^\Lambda,q_\Lambda)|\nonumber\\
0 & = & \frac{\partial}{\partial a_i }|Z|_{a=b=fixed} \, = \,
\frac{\partial}{\partial b_i }|Z|_{a=b=fixed}
\end{eqnarray}
which is obtained by extremizing it with
respect to the $6$ moduli $a_i,b_i$. At the horizon the
other eigenvalues $Z_i$ vanish. The
value $|Z|_{min}$ is related to the Bekenstein Hawking entropy of the
solution and it is expressed in terms of the quartic
invariant of the $56$--representation of $E_{7(7)}$,
which in principle depends on all the $8$
charges of the $STU$ model.
Nevertheless there is a residual
$[U(1)]^3\subset [SL(2,\relax{\rm I\kern-.18em R})]^3$ acting on the $N=8$ central charge matrix
in its normal form. These three gauge parameters
can be used to reduce the number
of charges appearing in the quartic invariant (entropy) from 8 to 5.
We shall see how these 3 conditions may be implemented
on the 8 charges at the level of the first order BPS equations
in order to obtain the $5$ parameter
generating solution for the most general $1/8$ black--holes in $N=8$ supergravity. This generating
solution coincides with the solution generating the orbit of $1/2$ BPS black--holes in the
truncated $N=2$ model describing type $IIA$ supergravity compactified on $T_6/Z_3$. Therefore,
in the framework of this latter simpler model, we shall work out the $STU$ model and construct
the set of second and first order differential equations defining our solution. In \cite{bis}
the type IIB counterpart of the same model was considered. There, however, the effective $N=2$
supergravity theory was simpler, because there were 10 hypermultiplets (which are constant in the
solution) and no vector multiplets, the only vector in the game being the graviphoton.
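The extremization of $|Z|$ over the moduli described above can be illustrated on a one-modulus toy model (a hypothetical dilatonic analogue, not the actual $N=8$ central charge): take $|Z|^2 = \frac{1}{2}(q^2 e^{-\phi} + p^2 e^{\phi})$, whose minimum by AM--GM sits at $e^{\phi}=|q/p|$ with $|Z|^2_{min}=|pq|$, a charge-only quantity playing the role of the duality-invariant entropy.

```python
import math

def Z2(phi, p, q):
    # Toy one-modulus |Z|^2; an illustrative analogue of the attractor
    # mechanism, NOT the N=8 expression used in the paper.
    return 0.5 * (q * q * math.exp(-phi) + p * p * math.exp(phi))

def extremize(p, q, lo=-10.0, hi=10.0, steps=20000):
    # Brute-force stand-in for the fixed-point conditions d|Z|/d(moduli) = 0.
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(grid, key=lambda phi: Z2(phi, p, q))

phi_fix = extremize(2.0, 3.0)
# Fixed point e^phi = |q/p| and |Z|^2_min = |p q|, independent of the
# asymptotic value of the modulus: the toy version of the attractor.
```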
\subsection{The $STU$ model in the $SU(3,3)/SU(3)\times U(3)$ theory and solvable Lie algebras}
As it was shown in \cite{mp1} the hyperscalars do not contribute to the dynamics of our BPS
black--hole, therefore, in what follows, all hyperscalars will be set to zero and we shall forget
about the quaternionic factor ${\cal M}_{Quat}$ in ${\cal M}_{T_6/Z_3}$. The latter will then be
the scalar manifold of an $N=2$ supergravity describing $9$ vector multiplets coupled with the
graviton multiplet. The $18$ real scalars span the manifold
${\cal M}_{T_6/Z_3}=SU(3,3)/SU(3)\times U(3)$, while
the $10$ electric and $10$ magnetic charges associated with the $10$ vector fields transform under
duality in the ${\bf 20}$ (three times antisymmetric)
of $SU(3,3)$. As anticipated, in order to show how the $STU$ scalar manifold ${\cal M}_{STU}$ is
embedded in ${\cal M}_{T_6/Z_3}$ we shall use the SLA description.
A great variety of scalar manifolds in extended supergravities in different dimensions
turn out to be non--compact Riemannian manifolds ${\cal M}$ admitting a solvable Lie algebra description, i.e.
they can be expressed as Lie group manifolds generated by a solvable Lie algebra $Solv$:
\begin{equation}
{\cal M}\,=\, \exp{(Solv)}
\label{solvrep}
\end{equation}
For instance non--compact homogeneous manifolds of the form $G/H$ ($H$ maximal compact
subgroup
of $G$) always admit a solvable Lie algebra representation and $Solv$ is defined by the so called
{\it Iwasawa decomposition}.
A solvable algebra $Solv$ is defined as an algebra whose derived series
terminates, i.e.\ for which the $k^{\rm th}$ derived algebra vanishes for some finite $k$:
\begin{eqnarray}
{\cal D}^{(k)}(Solv)\, &=&\, 0\nonumber\\
{\cal D}^{(n)}(A)\, &=&\,[{\cal D}^{(n-1)}(A),{\cal D}^{(n-1)}(A)]\nonumber\\
{\cal D}^{(1)}(A)\, &=&\,[A,A]
\end{eqnarray}
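The solvability criterion above can be checked explicitly on the two-generator algebra of the $SL(2,\relax{\rm I\kern-.18em R})/SO(2)$ example below; this crude sketch replaces the span of brackets by the list of pairwise brackets, which suffices here because ${\cal D}^{(1)}$ is at most one-dimensional.

```python
from itertools import combinations

# Derived series on a 2x2 matrix realization of {h, g}, [h, g] = 2g.
def bracket(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] - B[i][k] * A[k][j]
                           for k in range(n)) for j in range(n))
                 for i in range(n))

def derived(basis):
    # Sketch: pairwise brackets of a basis (no span taken; enough for
    # a two-element basis whose D^(1) is one-dimensional).
    return [bracket(X, Y) for X, Y in combinations(basis, 2)]

h = ((1, 0), (0, -1))
g = ((0, 1), (0, 0))
d1 = derived([h, g])   # [h, g] = 2 g, so D^(1) is spanned by g alone
d2 = derived(d1)       # no pairs left: D^(2) = 0, the algebra is solvable
```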
In the solvable representation of a manifold (\ref{solvrep}) the local
coordinates of the manifold are the parameters of the generating Lie algebra,
therefore adopting this parametrization of scalar manifolds in supergravity
implies the definition of a one-to-one correspondence between the scalar fields and the
generators of $Solv$ \cite{RR,solv}.\par
Special K\"ahler manifolds and Quaternionic manifolds admitting such a description have been
classified in the $70$'s by Alekseevskii \cite{alek}. The simplest example of solvable Lie algebra
parametrization is the case of the two dimensional manifold ${\cal M}=SL(2,\relax{\rm I\kern-.18em R})/SO(2)$ which
may be described as the exponential of the following solvable Lie algebra:
\begin{eqnarray}
SL(2,\relax{\rm I\kern-.18em R})/SO(2)\, &=&\,\exp{(Solv)}\nonumber\\
Solv\, &=&\,\{\sigma_3,\sigma_+ \}\nonumber\\
\left[\sigma_3,\sigma_+\right]\, &=&\,2\sigma_+\nonumber\\
\sigma_3\, =\,\left(\matrix{1 & 0\cr 0 & -1}\right)\,\,&;&\,\,\sigma_+\, =\,
\left(\matrix{0 & 1\cr 0 & 0}\right)
\label{key}
\end{eqnarray}
From (\ref{key}) we can see a general feature of $Solv$, i.e. it may always be expressed as the
direct sum of semisimple
(the non--compact Cartan generators of the isometry group)
and nilpotent generators, which in a suitable basis are represented
respectively by diagonal and upper triangular matrices. This property, as we
shall see, is one of the advantages of the solvable Lie algebra description
since it allows to express the coset representative of an homogeneous manifold
as a solvable group element which is the product of a diagonal matrix and the
exponential of a nilpotent matrix, which is a polynomial in the parameters.
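Making the $SL(2,\relax{\rm I\kern-.18em R})/SO(2)$ example concrete: the commutator of the two generators can be verified directly, and since $\sigma_+^2=0$ the nilpotent exponential truncates, so the coset representative $L=\exp(p\,\sigma_3/2)\exp(a\,\sigma_+)$ is indeed a diagonal matrix times a unipotent one, polynomial in the axion $a$ (an illustrative sketch of the general statement).

```python
import math

# Key algebra F = {sigma3, sigma+}: check [s3, s+] = 2 s+ and build the
# solvable coset representative L = exp(p s3/2) exp(a s+).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s3 = [[1, 0], [0, -1]]
sp = [[0, 1], [0, 0]]
comm = [[matmul(s3, sp)[i][j] - matmul(sp, s3)[i][j] for j in range(2)]
        for i in range(2)]                     # equals 2 * sigma+

def coset_rep(p, a):
    diag = [[math.exp(p / 2), 0.0], [0.0, math.exp(-p / 2)]]
    unip = [[1.0, a], [0.0, 1.0]]              # exp(a s+) = 1 + a s+ (s+^2 = 0)
    return matmul(diag, unip)                  # upper triangular, det = 1
```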
The simple solvable algebra represented in (\ref{key}) is called {\it key}
algebra and will be denoted by F. The scalar manifold of the $STU$ model is
a special K\"ahler manifold generated by a solvable Lie algebra which is the sum of $3$ commuting
key algebras:
\begin{eqnarray}
{\cal M}_{STU}\,&=&\, \left(\frac{SL(2,\relax{\rm I\kern-.18em R})}{SO(2)}\right)^3\,=\,
\exp{(Solv_{STU})}\nonumber\\
Solv_{STU}\,&=&\, F_1\oplus F_2\oplus F_3\nonumber\\
F_i\,=\,\{h_i,g_i\}\qquad&;&\qquad \left[h_i,g_i\right]=2g_i\nonumber\\
\left[F_i,F_j\right]\,&=&\,0
\end{eqnarray}
the parameters of the Cartan generators $h_i$ are the dilatons of the theory,
while the parameters of the nilpotent generators $g_i$ are the axions.
The three $SO(2)$ isotropy groups of the manifold are generated by the three
compact generators $\widetilde{g}_i=g_i-g^\dagger_i$. \par
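The structure $Solv_{STU}=F_1\oplus F_2\oplus F_3$ can be realised concretely as three $2\times 2$ blocks inside $6\times 6$ matrices; this is an illustrative block-diagonal representation chosen for the check, not the ${\bf 20}$ of $SU(3,3)$ used later in the paper.

```python
# Solv_STU = F1 + F2 + F3 as three commuting 2x2 blocks of 6x6 matrices.
def embed(M, slot):
    """Place a 2x2 matrix M in diagonal block `slot` (0, 1 or 2)."""
    out = [[0] * 6 for _ in range(6)]
    for i in range(2):
        for j in range(2):
            out[2 * slot + i][2 * slot + j] = M[i][j]
    return out

def bracket(A, B):
    return [[sum(A[i][k] * B[k][j] - B[i][k] * A[k][j] for k in range(6))
             for j in range(6)] for i in range(6)]

h = [[1, 0], [0, -1]]
g = [[0, 1], [0, 0]]
F = [(embed(h, s), embed(g, s)) for s in range(3)]   # (h_i, g_i), i = 1,2,3
zero = [[0] * 6 for _ in range(6)]
# [h_i, g_i] = 2 g_i within each key algebra; distinct blocks commute,
# so [F_i, F_j] = 0 for i != j.
```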
${\cal M}_{T_6/Z_3}$ is an $18$--dimensional Special K\"ahler manifold generated by a solvable
algebra whose structure is slightly more involved:
\begin{eqnarray}
{\cal M}_{T_6/Z_3}\, &=&\,\frac{SU(3,3)}{SU(3)\times U(3)}\,=\, \exp{(Solv)}\nonumber\\
Solv\, &=&\,Solv_{STU} \oplus\, {\bf X}\, \oplus\,
{\bf Y}\, \oplus\, {\bf Z}\nonumber\\
\label{stuembed}
\end{eqnarray}
The $4$--dimensional subspaces ${\bf X},{\bf Y},{\bf Z}$ consist of nilpotent
generators, while the only semisimple generators are the $3$ Cartan generators
contained in $Solv_{STU}$, which define the rank of the manifold.
The algebraic structure of $Solv$ together with the
details of the construction of the $SU(3,3)$ generators in the representation
${\bf 20}$ can be found in Appendix A.
Eq. (\ref{stuembed}) defines the embedding of ${\cal M}_{STU}$ inside
${\cal M}_{T_6/Z_3}$, i.e. it tells which scalar fields have to be set to zero in
order to truncate the theory to the $STU$ model. As far as the embedding of the isotropy
group $SO(2)^3$ of ${\cal M}_{STU}$ inside the ${\cal M}_{T_6/Z_3}$
isotropy group $SU(3)_1\times SU(3)_2\times U(1)$ is concerned, the $3$ generators of
the former ($\{\widetilde{g}_1,\widetilde{g}_2,\widetilde{g}_3\}$ )
are related to the Cartan generators of the latter in the following way:
\begin{eqnarray}
\widetilde{g}_1\, &=&\, \frac{1}{2}\left(\lambda +\frac{1}{2}\left(H_{c_1}-H_{d_1}+
H_{c_1+c_2}-H_{d_1+d_2}\right)\right)\nonumber\\
\widetilde{g}_2\, &=&\, \frac{1}{2}\left(\lambda +\frac{1}{2}\left(H_{c_1}-H_{d_1}
-2(H_{c_1+c_2}-H_{d_1+d_2})\right)\right)\nonumber\\
\widetilde{g}_3\, &=&\, \frac{1}{2}\left(\lambda +\frac{1}{2}
\left(-2(H_{c_1}-H_{d_1})+(H_{c_1+c_2}-H_{d_1+d_2})\right)\right)
\label{relcart}
\end{eqnarray}
where $\{c_i\}, \{d_i\}$, $i=1,2$ are the simple roots of $SU(3)_1$ and
$SU(3)_2$ respectively, while $\lambda$ is the generator of $U(1)$.
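The combinations in (\ref{relcart}) can be checked to close on the expected subgroups: the sum of the three compact generators is proportional to $\lambda$ alone, while differences of them contain no $\lambda$ at all. A small SymPy sketch, where $A$ and $B$ are shorthands (introduced here only for the check) for $H_{c_1}-H_{d_1}$ and $H_{c_1+c_2}-H_{d_1+d_2}$:

```python
import sympy as sp

# lam = U(1) generator; A, B abbreviate the Cartan combinations
# H_{c1}-H_{d1} and H_{c1+c2}-H_{d1+d2} of eq. (relcart)
lam, A, B = sp.symbols('lam A B')

g1 = sp.Rational(1, 2)*(lam + sp.Rational(1, 2)*(A + B))
g2 = sp.Rational(1, 2)*(lam + sp.Rational(1, 2)*(A - 2*B))
g3 = sp.Rational(1, 2)*(lam + sp.Rational(1, 2)*(-2*A + B))

# The sum is proportional to lam alone (the automorphism SO(2)),
# while the differences generating H_matter are lam-free
print(sp.simplify(g1 + g2 + g3))  # 3*lam/2
print(sp.simplify(g1 - g2))       # 3*B/4
print(sp.simplify(g1 - g3))       # 3*A/4
```

This matches the splitting of $H$ into $H_{aut}$ (generated by $\lambda$, up to normalization) and $H_{matter}$ used in section 3.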
In order to perform the truncation to the $STU$ model,
one also needs to know which of the $10$
vector fields have to be set to zero in order to be left with the $4$ $STU$ vector fields. This
information is given by the decomposition of the ${\bf 20}$
of $SU(3,3)$, in which the vector of magnetic and electric charges transforms, with respect to
$[SL(2,\relax{\rm I\kern-.18em R})]^3$:
\begin{equation}
{\bf 20}\,\stackrel{SL(2,\relax{\rm I\kern-.18em R})^3}{\rightarrow}\,{\bf (2,2,2)} \oplus 2\times \left[{\bf (2,1,1)}\oplus
{\bf (1,2,1)}\oplus {\bf (1,1,2)}\right]
\label{chargedec}
\end{equation}
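A minimal dimension count confirms that the branching (\ref{chargedec}) is consistent and yields the $10$ positive weights used below:

```python
# Dimension bookkeeping for 20 -> (2,2,2) + 2 x [(2,1,1)+(1,2,1)+(1,1,2)]
dim = 2*2*2 + 2*(2*1*1 + 1*2*1 + 1*1*2)
print(dim)       # 20
print(dim // 2)  # 10 positive weights
```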
Skew--diagonalizing the $5$ Cartan generators of $SU(3)_1\times SU(3)_2\times U(1)$ on the ${\bf
20}$
we obtain the $10$ positive weights of the
representation as $5$--component vectors $\vec{v}^{\Lambda^\prime}$
($\Lambda^\prime=0,\dots,9$):
\begin{eqnarray}
\{C(n)\}\,&=&\, \{\frac{H_{c_1}}{2},\frac{H_{c_1+c_2}}{2},\frac{H_{d_1}}{2},\frac{H_{d_1+d_2}}{2},
{\lambda}\}\nonumber\\
C(n)\cdot \vert v^{\Lambda^\prime}_x \rangle \,&=&\, v_{(n)}^{\Lambda^\prime} \vert
v^{\Lambda^\prime}_y \rangle \nonumber\\
C(n)\cdot \vert v^{\Lambda^\prime}_y \rangle \,&=&\, -v_{(n)}^{\Lambda^\prime} \vert
v^{\Lambda^\prime}_x \rangle
\end{eqnarray}
Using the relation (\ref{relcart}) we compute the value of the weights $v^{\Lambda^\prime}$
on the three generators $\widetilde{g}_i$ and find out which are the
$4$ positive weights $\vec{v}^\Lambda$ ($\Lambda=0,\dots,3$)
of the ${\bf (2,2,2)}$ in (\ref{chargedec}). The weights
$\vec{v}^{\Lambda^\prime}$ and their eigenvectors $\vert v^{\Lambda^\prime}_{x,y} \rangle$ are listed
in Appendix A.\par
In this way we have obtained an algebraic recipe for performing the truncation to the $STU$ model:
one sets to zero all the scalars parametrizing the $12$ generators ${\bf X}\, \oplus\,{\bf Y}\,
\oplus\, {\bf Z}$ in (\ref{stuembed}) and the $6$ vector fields corresponding to the weights
$v^{\Lambda^\prime}$, $\Lambda^\prime=4,\dots,9$. Restricting the action of the $[SL(2,\relax{\rm I\kern-.18em R})]^3$
generators ($h_i,g_i,\widetilde{g}_i$) inside $SU(3,3)$ to the $8$ eigenvectors
$\vert v^{\Lambda}_{x,y}\rangle $($\Lambda=0,\dots,3$) the embedding of $[SL(2,\relax{\rm I\kern-.18em R})]^3$ in
$Sp(8)$ is automatically obtained
\footnote{In the $Sp(8)$ representation of the U--duality group
$[SL(2,\relax{\rm I\kern-.18em R})]^3$ which we shall use, the non--compact Cartan generators
$h_i$ are diagonal. Such a representation will be
denoted by $Sp(8)_D$, where the subscript ``D'' stands for ``Dynkin''. This notation has been
introduced in \cite{mp1} to distinguish the representation $Sp(8)_D$ from $Sp(8)_Y$
(``Y'' standing for ``Young'') where on the contrary the Cartan generators of the compact isotropy
group (in our case $\widetilde{g}_i$) are diagonal.
The two representations are related by an
orthogonal transformation.}.
\section{First order differential equations: the algebraic approach}
Now that the $STU$ model has been constructed out of the original
$SU(3,3)/SU(3)\times U(3)$ model, we may address the problem of writing down
the BPS first order equations. To this end we shall use the geometrical intrinsic approach defined
in \cite{mp1} and eventually compare it with the Special K\"ahler geometry formalism. \par
The system of first order differential equations in the background fields is
obtained from the Killing spinor conditions (\ref{kse}). The gravitino and gaugino
supersymmetry transformations are:
\begin{eqnarray}
\delta_{\epsilon} \psi_{A\vert \mu}\,&=&\,\nabla_{\mu}\epsilon_A-\frac{1}{4}T^-_{\rho \sigma}
\gamma^{\rho \sigma}\gamma_{\mu}\epsilon_{AB}\epsilon^B\nonumber\\
\delta_{\epsilon}\lambda^{i\vert A}\,&=&\,{\rm i}\nabla_{\mu}z^i\gamma_{\mu}\epsilon_A+
G^{-\vert i}_{\rho \sigma}\gamma^{\rho \sigma}\epsilon^{AB}\epsilon_B
\end{eqnarray}
where $i=1,2,3$ labels the three matter vector fields and $A,B=1,2$ are
the $SU(2)$ R-symmetry indices.
Following the procedure defined in \cite{mp1,mp2,pietro},
in order to obtain a system of first order differential equations out of the
Killing spinor conditions (\ref{kse}) we make the following ans\"atze
for the vector fields:
\begin{eqnarray}
\label{strenghtsans}
F^{-\vert \Lambda}\,&=&\, \frac{t^\Lambda}{4\pi}E^-\nonumber\\
t^\Lambda(r)\,&=&\, 2\pi(p^\Lambda+{\rm i}\ell ^\Lambda (r))\nonumber\\
F^{\Lambda}\,&=&\,2{\rm Re}F^{-\vert \Lambda}\,\,;\,\,
\widetilde{F}^{\Lambda}\,=\,-2{\rm Im}F^{-\vert \Lambda}\nonumber\\
F^{\Lambda}\,&=&\,\frac{p^{\Lambda}}{2r^3}\epsilon_{abc}
x^a dx^b\wedge dx^c-\frac{\ell^{\Lambda}(r)}{r^3}e^{2\cal {U}}dt
\wedge \vec{x}\cdot d\vec{x}\nonumber\\
\widetilde{F}^{\Lambda}\,&=&\,-\frac{\ell^{\Lambda}(r)}{2r^3}\epsilon_{abc}
x^a dx^b\wedge dx^c-\frac{p^{\Lambda}}{r^3}e^{2\cal {U}}dt\wedge \vec{x}\cdot d\vec{x}
\end{eqnarray}
where
\begin{eqnarray}
E^-\,&=&\,\frac{1}{2r^3}\epsilon_{abc}
x^a dx^b\wedge dx^c+\frac{{\rm i}e^{2\cal {U}}}{r^3}dt\wedge \vec{x}\cdot d\vec{x}\,=\,\nonumber\\
&&E^-_{bc}dx^b\wedge dx^c+2E^-_{0a}dt\wedge dx^a \nonumber\\
4\pi\,&=&\,\int_{S^2_\infty}E^-_{ab} dx^a\wedge dx^b \nonumber\\
\end{eqnarray}
Integrating on a two--sphere $S^2_r$ of radius $r$ we obtain
\begin{eqnarray}
4\pi p^\Lambda\,&=&\,\int_{S^2_r}F^{\Lambda}\, = \,
\int_{S^2_\infty}F^{\Lambda}\,=\,2{\rm Re}t^\Lambda\nonumber\\
4\pi \ell^\Lambda (r)\,&=&\,-\int_{S^2_r}\widetilde{F}^{\Lambda}\,=\,2
{\rm Im}t^\Lambda
\label{cudierre}
\end{eqnarray}
The difference between the two results is evident. In the first case
the integrand is a closed two--form and hence the choice of the
2--cycle representative is immaterial. In the second case
the integrand is not closed and hence the result depends on the radius
of the integration sphere.
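The normalization $4\pi p^\Lambda=\int_{S^2}F^\Lambda$ for the magnetic part of the ansatz can be checked by pulling back the two--form $\frac{1}{2r^3}\epsilon_{abc}x^a dx^b\wedge dx^c$ to a sphere of radius $R$; a SymPy sketch (an independent consistency check, not the paper's computation):

```python
import sympy as sp

th, ph, R = sp.symbols('theta phi R', positive=True)

# Sphere of radius R
x = sp.Matrix([R*sp.sin(th)*sp.cos(ph),
               R*sp.sin(th)*sp.sin(ph),
               R*sp.cos(th)])
xt = x.diff(th)
xp = x.diff(ph)

# Pullback of (1/2r^3) eps_abc x^a dx^b ^ dx^c to the sphere:
# the coefficient of d(theta)^d(phi) is (1/r^3) x . (x_theta x x_phi)
coeff = sp.simplify(x.dot(xt.cross(xp)) / R**3)

integral = sp.integrate(sp.integrate(coeff, (ph, 0, 2*sp.pi)),
                        (th, 0, sp.pi))
print(integral)  # 4*pi
```

The result is $4\pi$ independently of $R$, as it must be for a closed two--form.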
\par
As far as the metric $g_{\mu \nu}$, the scalars $z^i$ and the Killing spinors $\epsilon_A (r)$
are concerned, the ans\"atze we adopt are the following:
\begin{eqnarray}
ds^2\,&=&\,e^{2{\cal U}\left(r\right)}dt^2-e^{-2{\cal U}\left(r\right)}d\vec{x}^2 ~~~~~
\left(r^2=\vec{x}^2\right)\nonumber\\
z^i\,&\equiv &\, z^i(r)\nonumber\\
\epsilon_A (r)\,&=&\,e^{f(r)}\xi_A~~~~~~~~~\xi_A=\mbox{constant}\nonumber\\
\gamma_0 \xi_A\,&=&\,\pm {\rm i}\epsilon_{AB}\xi^B
\end{eqnarray}
As usual we represent the scalars of the $STU$ model in terms of three complex
fields $\{z^i\}\equiv \{S,T,U\}$, parametrizing each of the three factors
$SL(2,\relax{\rm I\kern-.18em R})/SO(2)$ in ${\cal M}_{STU}$.
After some algebra, one obtains the following set of first order equations:
\begin{eqnarray}
\frac{dz^i}{dr}\, &=&\, \mp \left(\frac{e^{U(r)}}{4\pi r^2}\right)
g^{ij^\star}\overline{f}_{j^\star}^\Lambda ({\cal N}-\overline{{\cal N}})_{\Lambda\Sigma}t^\Sigma\,=\,\nonumber\\
&&\mp \left(\frac{e^{U(r)}}{4\pi r^2}\right) g^{ij^\star}\nabla_{j^\star}\overline{Z}(z,\overline{z},{p},{q})\nonumber\\
\frac{dU}{dr}\, &=&\,\mp \left(\frac{e^{U(r)}}{r^2}\right)(M_\Sigma {p}^\Sigma-
L^\Lambda {q}_\Lambda)\,=\, \mp \left(\frac{e^{U(r)}}{r^2}\right)Z(z,\overline{z},{p},{q})
\label{eqs122}
\end{eqnarray}
where ${\cal N}_{\Lambda\Sigma}(z,\overline{z})$ is the usual symmetric matrix entering the
action for the vector fields in (\ref{action}). The vector $(L^\Lambda(z,\overline{z}),M_\Sigma(z,\overline{z}))$ is the
covariantly holomorphic section on the symplectic bundle defined on the Special K\"ahler manifold
${\cal M}_{STU}$. Finally $Z(z,\overline{z},{p},{q})$ is the local realization on ${\cal M}_{STU}$ of
the central charge of the $N=2 $ superalgebra, while $Z^i(z,\overline{z},{p},{q})=g^{ij^\star}\nabla_
{j^\star}\overline{Z}(z,\overline{z},{p},{q})$ are the central charges associated with the matter vectors,
the so--called matter central charges. In writing eqs. (\ref{eqs122}) the following two properties
have been used:
\begin{eqnarray}
0\,&=&\, \overline{h}_{j^\star\vert\Sigma}
t^{\star \Sigma}-
\overline{f}_{j^\star}^\Lambda{\cal N}_{\Lambda\Sigma}t^{\star \Sigma}\nonumber\\
0\,&=&\, M_\Sigma t^{\star \Sigma}-L^\Lambda {\cal N}_{\Lambda\Sigma}
t^{\star \Sigma}
\end{eqnarray}
The electric charges $\ell^\Lambda (r)$ defined
in (\ref{cudierre}) are {\it moduli dependent} charges
which are functions of the radial direction through
the moduli $a_i,b_i$. On the other hand, the
{\it moduli independent} electric charges
$q_\Lambda$ in eqs. (\ref{eqs122}) are those that together with $p^\Lambda$
fulfill the Dirac quantization condition, and are expressed in terms
of $t^\Lambda (r)$ as follows:
\begin{equation}
q_\Lambda\,=\, \frac{1}{2\pi}{\rm Re}({\cal N}(z(r),\overline{z}(r))t (r))_\Lambda
\label{ncudierre}
\end{equation}
Equation (\ref{ncudierre}) may be inverted in order
to find the moduli dependence of
$\ell^\Lambda (r)$. The fact that $q_\Lambda$ does not depend
on $r$ is a consequence of one of Maxwell's equations:
\begin{equation}
\partial_a \left(\sqrt{-g}\widetilde{G}^{a0\vert \Lambda}(r)\right)\,=\,0\Rightarrow
\partial_r {\rm Re}({\cal N}(z(r),\overline{z}(r))t(r))_\Lambda\,=\,0
\label{prione}
\end{equation}
In order to compute the explicit form of eqs. (\ref{eqs122})
in a geometrically intrinsic way \cite{mp1} we need to decompose the
$4$ vector fields into the graviphoton $F_{\mu\nu}^0$ and the matter vector
fields $F_{\mu\nu}^i$, the latter transforming in the same representation as the scalars $z^i$ with
respect to the isotropy group $H=[SO(2)]^3$. This decomposition is immediately
performed by computing the positive weights $\vec{v}^\Lambda$ of the ${\bf (2,2,2)}$ on the three
generators $\{\widetilde{g}_i\}$ of $H$ combined in such a way
as to factorize in $H$ the automorphism group $H_{aut}=SO(2)$ of
the supersymmetry algebra generated by
$\lambda=\widetilde{g}_1+\widetilde{g}_2+\widetilde{g}_3$ from the remaining
$H_{matter}=[SO(2)]^2=\{\widetilde{g}_1-\widetilde{g}_2,\widetilde{g}_1-\widetilde{g}_3\}$ generators acting non--trivially
only on the matter fields.
The real and imaginary components of the graviphoton central charge $Z$ will be associated with
the weight, say $\vec{v}^0$,
having vanishing value on the generators of $H_{matter}$. The remaining
weights will define a representation ${\bf (2,1,1)}\oplus {\bf (1,2,1)}\oplus {\bf (1,1,2)}$ of $H$
in which the real and imaginary parts of the central charges $Z^i$ associated with $F^i_{\mu\nu}$
transform and will be denoted by $\vec{v}^i$, $i=1,2,3$.
This representation is the same as the one in which the $6$ real scalar components of
$z^i=a_i+{\rm i}b_i$ transform with respect to $H$. It is useful to define on the tangent space of
${\cal M}_{STU}$ curved indices $\alpha$
and rigid indices $\hat{\alpha}$, both running from $1$ to $6$. Using the solvable parametrization
of ${\cal M}_{STU}$, which defines real coordinates $\phi^\alpha$, the generators of
$Solv_{STU}=\{T^\alpha\}$
carry curved indices since they are parametrized by the coordinates, but do
not transform in a representation of the isotropy group. The non--compact generators
$\relax{\rm I\kern-.18em K}=Solv_{STU}+Solv_{STU}^\dagger$ of $[SL(2,\relax{\rm I\kern-.18em R})]^3$, on the other hand, transform
in the ${\bf (2,1,1)}\oplus {\bf (1,2,1)}\oplus {\bf (1,1,2)}$ of $H$ and we can choose an
orthonormal basis (with respect to the trace) for $\relax{\rm I\kern-.18em K}$ consisting of the generators
$\relax{\rm I\kern-.18em K}^{\hat{\alpha}}=T^\alpha +T^{\alpha \dagger}$. These generators now carry the rigid index and
are in one--to--one correspondence with the real scalar fields $\phi^\alpha$.
There is a one--to--one correspondence between the non--compact matrices $\relax{\rm I\kern-.18em K}^{\hat{\alpha}}$ and
the eigenvectors $\vert v^i_{x,y}\rangle$ ($i=1,2,3$) which are orthonormal basis (in different
spaces) of the same representation of $H$:
\begin{eqnarray}
\underbrace{\{\relax{\rm I\kern-.18em K}^1,\relax{\rm I\kern-.18em K}^2,\relax{\rm I\kern-.18em K}^3,\relax{\rm I\kern-.18em K}^4,\relax{\rm I\kern-.18em K}^5,\relax{\rm I\kern-.18em K}^6\}}_{\{ \relax{\rm I\kern-.18em K}^{\hat{\alpha}}\}}\,
&\leftrightarrow\, &\underbrace{\{\vert v^1_{x}\rangle,\vert v^2_{y}\rangle,\vert v^3_{y}\rangle,
\vert v^1_{y}\rangle,\vert v^2_{x}\rangle,\vert v^3_{x}\rangle \}}_{\{\vert v^{\hat{\alpha}}
\rangle \}}
\end{eqnarray}
The relation between the real parameters $\phi^\alpha$ of the solvable Lie algebra and the real and imaginary parts
of the complex fields $z^i$ is:
\begin{eqnarray}
\{\phi^\alpha\}\, &\equiv& \{-2a_1,-2a_2,-2a_3,\log {(-b_1)},\log {(-b_2)},\log {(-b_3)}\}
\end{eqnarray}
Using the $Sp(8)_D$ representation of $Solv_{STU}$, we construct the coset representative $
\relax{\rm I\kern-.18em L}(\phi^\alpha)$ of ${\cal M}_{STU}$ and the vielbein $\relax{\rm I\kern-.18em P}_\alpha^{\hat{\alpha}}$ as follows:
\begin{eqnarray}
\relax{\rm I\kern-.18em L}(a_i,b_i)\,&=&\, \exp\left(T_\alpha \phi^\alpha\right)\,=\,\nonumber\\
&&\left(1-2a_1 g_1\right)\cdot \left(1-2a_2 g_2\right)\cdot \left(1-2a_3 g_3\right)\cdot
\exp{\left(\sum_i\log{(-b_i)}h_i\right)}\nonumber\\
\relax{\rm I\kern-.18em P}^{\hat{\alpha}}\,&=&\,\frac{1}{2\sqrt{2}}{\rm Tr}\left(\relax{\rm I\kern-.18em K}^{\hat{\alpha}}\relax{\rm I\kern-.18em L}^{-1}d\relax{\rm I\kern-.18em L}
\right)\,=\,\{-\frac{da_1}{2b_1},-\frac{da_2}{2b_2},-\frac{da_3}{2b_3},\frac{db_1}{2b_1},
\frac{db_2}{2b_2},\frac{db_3}{2b_3}\}\nonumber\\
\end{eqnarray}
The scalar kinetic term in the $N=2$ lagrangian (\ref{action}) is expressed in terms of the vielbein $\relax{\rm I\kern-.18em P}$ in the form $\sum_{\hat{\alpha}}(\relax{\rm I\kern-.18em P}_{\hat{\alpha}})^2$.
The following relations hold between quantities computed in the solvable approach and in the Special K\"ahler
formalism:
\begin{eqnarray}
\left(\relax{\rm I\kern-.18em P}^{\alpha}_{\hat{\alpha}}\langle v^{\hat{\alpha}}\vert \relax{\rm I\kern-.18em L}^t \relax\,\hbox{$\inbar\kern-.3em{\rm C}$} {\bf M}\right)\,&=&\,
\sqrt{2}\left(\matrix{{\rm Re}(g^{ij^\star}(\overline{h}_{j^\star\vert \Lambda})),-{\rm Re}(g^{ij^\star}
(\overline{f}_{j^\star}^\Sigma))\cr{\rm Im}(g^{ij^\star}(\overline{h}_{j
^\star\vert \Lambda})),-{\rm Im}(g^{ij^\star}(\overline{f}_{j^\star}^\Sigma)
) }\right)\nonumber\\
\left(\matrix{\langle v^{0}_y\vert \relax{\rm I\kern-.18em L}^t \relax\,\hbox{$\inbar\kern-.3em{\rm C}$} {\bf M}\cr \langle v^{0}_x\vert \relax{\rm I\kern-.18em L}^t \relax\,\hbox{$\inbar\kern-.3em{\rm C}$}
{\bf M}}\right)\,&=&\,\sqrt{2}\left(\matrix{{\rm Re}(M_{ \Lambda}),-{\rm Re}(L^\Sigma)\cr{\rm Im}
(M_{ \Lambda}),-{\rm Im}(L^\Sigma)} \right)
\label{secm2}
\end{eqnarray}
where in the first equation both sides are $6\times 8$ matrices
whose rows are labeled by
$\alpha$. The first three values of $\alpha$
correspond to the axions $a_i$, the last three to the dilatons $\log (-b_i)$.
The columns are to be contracted with the vector consisting of the $8$
electric and magnetic charges $\vert \vec{Q}\rangle_{sc} =2\pi (p^\Lambda,q_\Sigma)$ in the {\it
special coordinate} symplectic gauge of ${\cal M}_{STU}$.
In eqs. (\ref{secm2}) $\relax\,\hbox{$\inbar\kern-.3em{\rm C}$}$ is the symplectic invariant matrix, while ${\bf M}$ is the symplectic
matrix relating the charge vectors in the $Sp(8)_D$ representation and in the {\it special
coordinate} symplectic gauge:
\begin{eqnarray}
\vert \vec{Q}\rangle_{Sp(8)_D}\, &=&\, {\bf M}\cdot \vert \vec{Q}\rangle_{sc}\nonumber\\
{\bf M}\, &=&\, \left(\matrix{0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\cr
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\cr
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\cr
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0\cr
0 & 0 & -1 & 0 & 0 & 0 & 0 & 0\cr
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\cr
0 & 0 & 0 & -1 & 0 & 0 & 0 & 0\cr
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\cr }\right)\in Sp(8,\relax{\rm I\kern-.18em R})
\label{santiddio}
\end{eqnarray}
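One can verify directly that ${\bf M}$ is indeed symplectic. Assuming the standard block form $\left(\matrix{0 & \relax{\rm 1\kern-.35em 1}\cr -\relax{\rm 1\kern-.35em 1} & 0}\right)$ for the symplectic invariant matrix (an assumption of this check; the paper's conventions are fixed in Appendix A), a NumPy sketch reads:

```python
import numpy as np

# The matrix M of eq. (santiddio), rows copied verbatim
M = np.array([
    [0, 0, 0,  0, 0,  0, 1, 0],
    [1, 0, 0,  0, 0,  0, 0, 0],
    [0, 0, 0,  0, 0,  0, 0, 1],
    [0, 0, 0,  0, 0, -1, 0, 0],
    [0, 0, -1, 0, 0,  0, 0, 0],
    [0, 0, 0,  0, 1,  0, 0, 0],
    [0, 0, 0, -1, 0,  0, 0, 0],
    [0, 1, 0,  0, 0,  0, 0, 0]])

# Assumed block form of the symplectic invariant matrix
I4 = np.eye(4, dtype=int)
Z4 = np.zeros((4, 4), dtype=int)
C = np.block([[Z4, I4], [-I4, Z4]])

# Symplectic condition M^T C M = C
print(np.array_equal(M.T @ C @ M, C))  # True
```

Since ${\bf M}$ is a signed permutation preserving the symplectic pairing, $\det {\bf M}=1$ as well.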
Using eqs. (\ref{secm2}) it is now possible to write the first order equations
in a geometrically intrinsic way:
\begin{eqnarray}
\frac{d\phi^\alpha}{dr}\,&=&\,
\left(\mp\frac{e^{U}}{r^2}\right)\frac{1}{2\sqrt{2}\pi}\relax{\rm I\kern-.18em P}^\alpha_{\hat{\alpha}}\langle
v^{\hat{\alpha}}\vert \relax{\rm I\kern-.18em L}^t\relax\,\hbox{$\inbar\kern-.3em{\rm C}$} {\bf M}\vert \vec{Q} \rangle_{sc}\nonumber \\
\frac{dU}{dr}\,&=&\,\left(\mp\frac{e^{U}}{r^2}\right) \frac{1}{2\sqrt{2}\pi}\langle
v^0_y\vert \relax{\rm I\kern-.18em L}^t\relax\,\hbox{$\inbar\kern-.3em{\rm C}$} {\bf M}\vert \vec{Q} \rangle_{sc}\nonumber \\
0\,&=&\,\langle v^0_x\vert \relax{\rm I\kern-.18em L}^t\relax\,\hbox{$\inbar\kern-.3em{\rm C}$} {\bf M}\vert t \rangle_{sc}
\label{1ordeqs}
\end{eqnarray}
The full explicit form of eq.s (\ref{1ordeqs}) can be found in Appendix B where, using eq.
(\ref{ncudierre}), everything is expressed in terms of the quantized moduli-independent charges
$(q_{\Lambda},p^{\Sigma})$.
The fixed values of the scalars at the horizon are obtained by setting the
right hand side of the above equations to zero, and the result is consistent with the literature
\cite{kal1}:
\begin{eqnarray}
\label{scalfixn}
(a_1+{\rm i}b_1)_{fix}\,&=&\,\frac{p^\Lambda q_\Lambda -2 p^1q_1-{\rm i}
\sqrt{f(p,q)}}{2p^2p^3 - 2p^0 q_1}\nonumber\\
(a_2+{\rm i}b_2)_{fix}\,&=&\,\frac{p^\Lambda q_\Lambda -2 p^2q_2-{\rm i}
\sqrt{f(p,q)}}{2p^1p^3 - 2p^0 q_2}\nonumber\\
(a_3+{\rm i}b_3)_{fix}\,&=&\,\frac{p^\Lambda q_\Lambda -2 p^3q_3-{\rm i}
\sqrt{f(p,q)}}{2p^1p^2 - 2p^0 q_3}
\end{eqnarray}
where $f(p,q)$ is the $E_{7(7)}$ quartic invariant $I_4(p,q)$ expressed as a function of all the
$8$ charges (and whose square root is proportional to the entropy of the solution):
\begin{equation}
f(p,q)\,=\,-(p^0q_0-p^1q_1+p^2q_2+p^3q_3)^2+4(p^2p^3-p^0q_1)(p^1q_0+q_2q_3)
\end{equation}
The last of eqs. (\ref{1ordeqs}) expresses the reality condition for $Z(\phi, p,q)$ and amounts
to fixing one of the three $SO(2)$ gauge symmetries of $H$, therefore giving a condition on the $8$
charges.
Without spoiling the generality (up to $U$--duality transformations) of the black--hole
solution, it is still possible to fix the remaining $[SO(2)]^2$ gauge freedom in $H$
by imposing two conditions on the phases of the $Z^i(\phi, p,q)$.
For instance we could require two of the $Z^i(\phi, p,q)$ to be imaginary.
This would imply two more conditions on the charges, leading to a
generating solution depending only on $5$ parameters, as expected \cite{bala}.
Hence we can conclude with the following:
\begin{stat}
Since the radial evolution of the axion fields $a_i$ is related to the real
part of the corresponding central charge $Z^i(\phi, p,q)$ (see (\ref{eqs122})),
up to $U$--duality transformations, the
{\bf 5 parameter generating solution}
will have {\bf 3 dilatons} and {\bf 1 axion}
evolving from their fixed value at the horizon to the
boundary value at infinity, and 2 constant axions whose
value is the corresponding fixed
one at the horizon ({\it double fixed}).
\end{stat}
\section{The solution: preliminaries and comments on the most general one}
In order to find the solution of the $STU$ model we also need the equations of motion, which must
be satisfied together with the first order ones. We keep using the Special K\"ahler formalism in
order to make the comparison with previous papers more immediate. Let us first compute the field
equations for the scalar fields $z_i$, which can be obtained from an $N=2$ pure supergravity action
coupled to 3 vector multiplets. From the action \cite{N=2}:
\begin{eqnarray}
\label{action}
S\,&=&\, \int d^4x\sqrt{-g}\,{\cal L}\nonumber\\
{\cal L}\,&=&\,R[g]+h_{ij^\star}\partial_\mu z^i \partial^\mu
\overline{z}^{j^\star}
+\left({\rm Im}{\cal N}_{\Lambda\Sigma}F^{\Lambda}_{\cdot\cdot}
F^{\Sigma\vert\cdot\cdot}+{\rm Re}
{\cal N}_{\Lambda\Sigma}F^{\Lambda}_{\cdot\cdot}\widetilde{F}^{\Sigma\vert\cdot\cdot}\right)\nonumber\\
g_{\mu\nu}\,&=&\,{\rm diag}(e^{2\cal {U}},-e^{-2\cal {U}},-e^{-2\cal {U}},-e^{-2\cal {U}})
\end{eqnarray}
where $h_{ij^\star}(z,\overline{z})$ denotes the realization of the metric on the scalar manifold in a local coordinate chart. \\
\underline{Maxwell's equations :}\par
The field equations for the vector fields and the Bianchi identities
read:
\begin{eqnarray}
\partial_\mu \left(\sqrt{-g}\widetilde{G}^{\mu\nu}\right)\,&=&\,0\nonumber\\
\partial_\mu \left(\sqrt{-g}\widetilde{F}^{\mu\nu}\right)\,&=&\,0
\end{eqnarray}
Using the ans\"atze (\ref{strenghtsans}) the second equation is automatically fulfilled, while the first
equation,
as anticipated in section 3, requires the quantized electric charges
$q_\Lambda$ defined by eq. (\ref{ncudierre}) to be $r$-independent
(eq. (\ref{prione})).\\
\underline{Scalar equations :}\par
Varying with respect to $z^i$ one gets:
\begin{eqnarray}
&&-\frac{1}{\sqrt{-g}}\partial_\mu\left(\sqrt{-g}g^{\mu\nu}h_{ij^\star}
\partial_\nu \overline{z}^{j^\star}
\right)+\partial_i (h_{kj^\star})\partial_\mu z^k
\partial_\nu\overline{z}^{j^\star} g^{\mu\nu}+\nonumber\\
&& (\partial_i{\rm Im}{\cal N}_{\Lambda\Sigma})F^{\Lambda}_{\cdot\cdot}
F^{\Sigma\vert\cdot\cdot}+
(\partial_i{\rm Re}{\cal N}_{\Lambda\Sigma})F^{\Lambda}_{\cdot\cdot}
\widetilde{F}^{\Sigma\vert\cdot
\cdot}\,=\,0
\end{eqnarray}
which, once projected onto the real and imaginary parts of both sides, read:
\begin{eqnarray}
\frac{e^{2\cal {U}}}{4b_i^2}\left(a_i^{\prime\prime}+2\frac{a_i^{\prime}}{r}-2\frac{a_i^{\prime}
b_i^{\prime}}{b_i}\right)\,&=&\,-\frac{1}{2}\left((\partial_{a_i}{\rm Im}{\cal N}_{\Lambda\Sigma})
F^{\Lambda}_{\cdot\cdot}F^{\Sigma\vert\cdot\cdot}+(\partial_{
a_i}{\rm Re}{\cal
N}_{\Lambda\Sigma})F^{\Lambda}_{\cdot\cdot}\widetilde{F}^{\Sigma\vert\cdot\cdot}\right)
\nonumber\\
\frac{e^{2\cal {U}}}{4b_i^2}\left(b_i^{\prime\prime}+2\frac{b_i^{\prime }}{r}+
\frac{(a_i^{\prime 2}-b_i^{\prime2})}{b_i}\right)\,&=&\,-\frac{1}{2}\left((\partial_{b_i}{\rm
Im}{\cal
N}_{\Lambda\Sigma})F^{\Lambda}_{\cdot\cdot}F^{\Sigma\vert\cdot\cdot}+(\partial_{b_i}{\rm Re}
{\cal N}_{\Lambda\Sigma})F^{\Lambda}_{\cdot\cdot}\widetilde{F}^{\Sigma\vert\cdot\cdot}\right)
\label{scaleq}
\end{eqnarray}
\underline{Einstein equations :}\par
Varying the action (\ref{action}) with respect to the metric we obtain the
following equations:
\begin{eqnarray}
R_{MN}\,&=&\, -h_{ij^\star}\partial_M z^i\partial_N\overline{z}^{ j^\star}+S_{MN}\nonumber\\
S_{MN}\,&=&\,-2{\rm Im}{\cal N}_{\Lambda\Sigma}\left(F^\Lambda_{M\cdot}F^{\Sigma\vert\cdot}_{N}-
\frac{1}{4}g_{MN}F^\Lambda_{\cdot\cdot}F^{\Sigma\vert\cdot\cdot}\right)+\nonumber\\
&&-2{\rm Re}{\cal N}_{\Lambda\Sigma}\left(F^\Lambda_{M\cdot}\widetilde{F}^{\Sigma\vert\cdot}_{N}-
\frac{1}{4}g_{MN}F^\Lambda_{\cdot\cdot}\widetilde{F}^{\Sigma\vert\cdot\cdot}\right)
\label{eineq}
\end{eqnarray}
Projecting on the components $(M,N)=({\underline{0}},{\underline{0}})$ and
$(M,N)=({\underline{a}},{\underline{b}})$, respectively, these equations can be written in the
following way:
\begin{eqnarray}
{\cal U}^{\prime\prime}+\frac{2}{r}{\cal U}^\prime\,&=&\,-2e^{-2{\cal U}}S_{{\underline{0}}
{\underline{0}}}\nonumber\\
({\cal U}^\prime)^2+\sum_i\frac{1}{4b_i^2}\left((b_i^\prime)^2+(a_i^\prime)^2\right)\,&=&\,
-2e^{-2{\cal U}}S_{{\underline{0}}{\underline{0}}}
\label{2eqeinformern}
\end{eqnarray}
where:
\begin{equation}
S_{{\underline{0}}{\underline{0}}}\,=\, -\frac{2e^{4{\cal U}}}{(8\pi)^2 r^4}
{\rm Im}{\cal N}_{\Lambda\Sigma}(p^\Lambda p^\Sigma+\ell (r)^\Lambda \ell (r)^\Sigma)
\end{equation}
In order to solve these equations one needs to make the right hand side explicit in
terms of the scalar fields $a_i$, $b_i$ and the quantized charges $(p^{\Lambda},q_{\Sigma})$. In order to do
that, one has to consider the ansatz for the field strengths (\ref{strenghtsans}), substituting for
the moduli--dependent charges $\ell^{\Lambda}(r)$ appearing in the previous equations their
expression
in terms of the quantized charges obtained by inverting
eq. (\ref{ncudierre}):
\begin{eqnarray}
\hskip -3pt \ell^{\Lambda} (r) &=& {\rm Im}{\cal N}^{-1\vert \Lambda\Sigma}\left(
q_{\Sigma}-{\rm Re}{\cal N}_{\Sigma\Omega}p^\Omega\right)
\label{qrgen}
\end{eqnarray}
Using now the expression for the matrix ${\cal N}$ in eq. (\ref{Ngen}) of Appendix A, one can
find the explicit expression of the scalar fields equations of motion written in terms of the
quantized $r$-independent charges.
In Appendix B we report the full explicit expression of the equations of motion for both the scalars
and the metric. Let us stress that in order to find the 5 parameter
generating solution of the $STU$ model it is not sufficient to substitute for each
charge, in the scalar fixed values of eq. (\ref{scalfixn}), a corresponding harmonic function
($q_i \rightarrow H_i=1+q_i/r$).
As already explained, the generating solution should depend on 5 parameters and 4 harmonic
functions, as in \cite{stu+}. In particular, as explained above, 2 of the 6 scalar fields
parametrizing the $STU$ model, namely 2 axion fields, should be taken to be constant.
Therefore, in order to find the generating solution one has to solve the two systems of eq.s
(\ref{mammamia}) (first order) and (\ref{porcodue}) (second order), explicitly imposing as an external
input the constancy of 2 of the 3 axion fields. As is evident from
the above quoted systems of eq.s,
it is quite difficult to give a non--double extreme solution of the combined system
that is both explicit and manageable.
It is our aim, however, to work it out
in a forthcoming paper \cite{bft}.
\section{The solution: a simplified case, namely $S=T=U$}
In order to find a fully explicit solution that we can deal with, let us consider the particular case
$S=T=U$. Although simpler, this solution encodes all the non--trivial aspects of the most general one:
it is regular, i.e. it has non--zero entropy, and the scalars do evolve, i.e. it is an extreme but
{\em not} double extreme solution. First of all let us notice that eq.s (\ref{mammamia}) remain
invariant if the same set of permutations is performed on the triplet of subscripts $(1,2,3)$
in both the fields and the charges. Therefore the solution $S=T=U$ implies the identifications
$q_1=q_2=q_3\equiv q$ and $p^1=p^2=p^3\equiv p$ on the charges, and
therefore it will correspond
to a solution depending on (apparently only) $4$ charges $(p^0,p,q_0,q)$
instead of $8$.
Moreover, according to this identification, we now expect to find a solution which depends on
(apparently) only 3 independent charges and 2 harmonic functions.
Notice that this is not simply an axion--dilaton black--hole: such
a solution would have a vanishing entropy, differently from our case.
We have just one complex field in our solution
because the three complex fields are taken to be equal in value.
The equations (\ref{mammamia}) simplify in the following way:
\begin{eqnarray}
\label{sfirst}
\frac{da}{dr}\,&=&\,\pm \left(\frac{e^{{\cal U}(r)}}{r^2}\right)\frac{1}{\sqrt{-2b}}
({bq} - 2\,{ab}\,p + \left( {a^2}\,b + {b^3} \right) \,{p^0})\nonumber\\
\frac{db}{dr}\,&=&\,\pm \left(\frac{e^{{\cal U}(r)}}{r^2}\right)\frac{1}{\sqrt{-2b}}
(3\,{aq} - \left( 3\,{a^2} + {b^2} \right) \,p + \left( {a^3} + a\,{b^2} \right) \,{p^0} +
{q_0})\nonumber\\
\frac{d\cal {U}}{dr}\,&=&\, \pm \left(\frac{e^{{\cal U}(r)}}{r^2}\right)\left(\frac{1}{2\sqrt{2}
(- b)^{3/2}}\right) (3\,{aq} - \left( 3\,{a^2} - 3\,{b^2} \right) \,p +
\left( {a^3} - 3\,a\,{b^2} \right) \,{p^0} + {q_0})\nonumber\\
0\,&=&\, 3\,{bq} - 6\,{ab}\,p + \left( 3\,{a^2}\,b - {b^3} \right) \,{p^0}
\end{eqnarray}
where $a\equiv a_i\,,\,b\equiv b_i\;(i=1,2,3)$.
In this case the fixed values for the scalars $a,b$ are:
\begin{eqnarray}
\label{scalfixs3}
a_{fix}\,&=&\, {\frac{p\,q + {p^0}\,{q_0}}
{2\,{p^2} - 2\,{p^0}\,q}}\nonumber\\
b_{fix}\,&=&\,-\,\frac{\sqrt{f(p,q,p^0,q_0)}}{2(p^2-p^0q)}\nonumber\\
\mbox{where}\;f(p,q,p^0,q_0)\,&=&\,3\,{p^2}\,{q^2} + 4\,{p^3}\,{q_0} -
6\,p\,{p^0}\,q\,{q_0} -
{p^0}\,\left( 4\,{q^3} +
{p^0}\,{{{q_0}}^2} \right)
\end{eqnarray}
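As a cross--check, restricting the generic quartic invariant $f(p,q)$ given after eq. (\ref{scalfixn}) to the $S=T=U$ charges $p^1=p^2=p^3\equiv p$, $q_1=q_2=q_3\equiv q$ reproduces $f(p,q,p^0,q_0)$ above; in SymPy:

```python
import sympy as sp

p0, p1, p2, p3 = sp.symbols('p^0 p^1 p^2 p^3')
q0, q1, q2, q3, p, q = sp.symbols('q_0 q_1 q_2 q_3 p q')

# generic quartic invariant f(p,q) (eq. after (scalfixn))
f_gen = -(p0*q0 - p1*q1 + p2*q2 + p3*q3)**2 \
        + 4*(p2*p3 - p0*q1)*(p1*q0 + q2*q3)

# restriction to the S=T=U charge identifications
f_stu = f_gen.subs({p1: p, p2: p, p3: p, q1: q, q2: q, q3: q})

# f(p,q,p^0,q_0) as given in (scalfixs3)
f_red = 3*p**2*q**2 + 4*p**3*q0 - 6*p*p0*q*q0 - p0*(4*q**3 + p0*q0**2)

print(sp.expand(f_stu - f_red))  # 0
```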
Computing the central charge at the fixed point $Z_{fix}(p,q,p^0,q_0)=
Z(a_{fix},b_{fix},p,q,p^0,q_0)$ one finds:
\begin{eqnarray}
Z_{fix}(p,q,p^0,q_0)\,&=&\,\vert Z_{fix}\vert e^{{\rm i}\theta}\nonumber\\
\vert Z_{fix}(p,q,p^0,q_0)\vert\,&=&\, f(p,q,p^0,q_0)^{1/4}\nonumber\\
\sin\theta\,&=&\,\frac{p^0f(p,q,p^0,q_0)^{1/2}}{2(p^2-qp^0)^{3/2}}\nonumber\\
\cos\theta\,&=&\,{\frac{-2\,{p^3} + 3\,p\,{p^0}\,q +
{{{p^0}}^2}\,{q_0}}{2\,{{\left( {p^2} - {p^0}\,q \right) }^{{3/2}}}}}
\label{components}
\end{eqnarray}
The value of the $U$--duality group quartic invariant (whose square root is
proportional to the entropy) is:
\begin{eqnarray}
I_4(p,q,p^0,q_0)\,&=&\,\vert Z_{fix}(p,q,p^0,q_0)\vert^4\,=\,f(p,q,p^0,q_0)
\end{eqnarray}
We see from eqs. (\ref{components}) that in order for $Z_{fix}$ to be real and the entropy to be
non--vanishing the only possibility is $p^0=0$, corresponding to $\theta=\pi$. It is in fact necessary
that $\sin\theta=0$ while keeping $f\not= 0$. We are therefore left with 3 independent charges
($q,p,q_0$), as anticipated.
\subsection{Solution of the $1^{st}$ order equations}
Setting $p^0=0$ the fixed values of the scalars and the quartic invariant become:
\begin{eqnarray}
\label{fixeds3}
a_{fix}\,&=&\, \frac{q}{2p}\nonumber\\
b_{fix}\,&=&\,-\,\frac{\sqrt{3q^2+4q_0 p}}{2p}\nonumber\\
I_4\,&=&\, (3q^2p^2+4q_0 p^3)
\end{eqnarray}
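The $p^0=0$ expressions can be checked against the general $S=T=U$ formulas (\ref{scalfixs3}); a SymPy sketch (assuming $p>0$, so that the square root can be compared through its square):

```python
import sympy as sp

p, q, q0 = sp.symbols('p q q_0', positive=True)

# general S=T=U expressions of eq. (scalfixs3) evaluated at p^0 = 0
p0 = sp.Integer(0)
f = 3*p**2*q**2 + 4*p**3*q0 - 6*p*p0*q*q0 - p0*(4*q**3 + p0*q0**2)
a_fix = (p*q + p0*q0)/(2*p**2 - 2*p0*q)
b_fix = -sp.sqrt(f)/(2*(p**2 - p0*q))

print(sp.simplify(a_fix - q/(2*p)) == 0)                        # True
print(sp.simplify(b_fix**2 - (3*q**2 + 4*q0*p)/(4*p**2)) == 0)  # True
print(sp.expand(f - (3*q**2*p**2 + 4*q0*p**3)) == 0)            # True
```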
From the last of eq.s (\ref{sfirst}) we see that in this case the axion is double fixed, namely it
does not evolve, $a\equiv a_{fix}$, and the reality condition for the central charge
is fulfilled for any $r$. Of course the axion equation is also fulfilled, and therefore
we are left with two axion--independent equations for $b$ and $\cal {U}$:
\begin{eqnarray}
\frac{db}{dr}\,&=&\, \pm\frac{e^{\cal U}}{r^2\sqrt{- 2b}}(q_0+\frac{3q^2}{4p}-b^2p)\nonumber\\
\frac{d\cal {U}}{dr}\,&=&\, \pm\frac{e^{\cal U}}{r^2 (- 2b)^{3/2}}(q_0+\frac{3q^2}{4p}+3b^2p)
\label{eqbU}
\end{eqnarray}
which admit the following solution:
\begin{eqnarray}
\label{k1ek2}
b(r)\,&=&\,-\,\sqrt{\frac{(A_1+k_1/r)}{(A_2+k_2/r)}}\nonumber\\
e^{\cal U}\,&=&\,\left((A_2+\frac{k_2}{r})^3(A_1+k_1/r)\right)^{-1/4}\nonumber\\
k_1\,&=&\,\pm\frac{\sqrt{2}(3q^2+4q_0p)}{4p}\nonumber\\
k_2\,&=&\,\pm\sqrt{2}p
\end{eqnarray}
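That (\ref{k1ek2}) indeed solves the system (\ref{eqbU}) can be checked symbolically, choosing the upper sign both in the equations and in $k_1,k_2$; a SymPy spot check (the numerical values of the charges and integration constants are arbitrary positive test values):

```python
import sympy as sp

r, p, q, q0, A1, A2 = sp.symbols('r p q q_0 A_1 A_2', positive=True)

# upper-sign choice in eq.s (k1ek2)
k1 = sp.sqrt(2)*(3*q**2 + 4*q0*p)/(4*p)
k2 = sp.sqrt(2)*p

H1 = A1 + k1/r
H2 = A2 + k2/r

b = -sp.sqrt(H1/H2)                      # dilaton profile
U = -sp.Rational(1, 4)*sp.log(H2**3*H1)  # so that e^U = (H2^3 H1)^(-1/4)
eU = sp.exp(U)

# residuals of the two first order equations (eqbU), upper sign
res_b = sp.diff(b, r) \
    - eU/(r**2*sp.sqrt(-2*b))*(q0 + 3*q**2/(4*p) - b**2*p)
res_U = sp.diff(U, r) \
    - eU/(r**2*(-2*b)**sp.Rational(3, 2))*(q0 + 3*q**2/(4*p) + 3*b**2*p)

# numeric spot check at generic values
vals = {r: 1.3, p: 0.7, q: 0.5, q0: 0.9, A1: 1.0, A2: 1.2}
print(abs(sp.N(res_b.subs(vals))) < 1e-10)  # True
print(abs(sp.N(res_U.subs(vals))) < 1e-10)  # True
```

The residuals vanish identically for any $A_1,A_2$; the numeric substitution is only a convenient way of confirming this.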
In the limit $r\rightarrow 0$:
\begin{eqnarray}
b(r)&\rightarrow&-\,\left(\frac{k_1}{k_2}\right)^{1/2}\,=\,b_{fix}\nonumber\\
e^{{\cal U}(r)}&\rightarrow& \,r\,(k_1k_2^3)^{-1/4}\,=\,r\,f^{-1/4} \nonumber
\end{eqnarray}
as expected, and the only undetermined constants are $A_1\,,\,A_2$. In order for the solution to be
asymptotically Minkowskian it is necessary that $(A_1\,A_2^3)^{-1/4}=1$. There is then just one
undetermined parameter, which is fixed by the asymptotic value of the dilaton $b$. For
simplicity we choose it to be $-1$, so that $A_1=1\,,\,A_2=1$. This choice is arbitrary in the sense that
a different value of $b$ at infinity corresponds to a different universe ($\equiv$ black--hole solution), but
with the same entropy. Summarizing, before considering the eq.s of motion, the solution is:
\begin{eqnarray}
\label{sol}
a\,&=&\,a_{fix}\,=\,\frac{q}{2p}\nonumber \\
b\,&=&\,-\,\sqrt{\frac{(1+k_1/r)}{(1+k_2/r)}}\nonumber\\
e^{\cal U}\,&=&\,\left[(1+k_1/r)(1+k_2/r)^3\right]^{-1/4}
\end{eqnarray}
with $k_1$ and $k_2$ given in (\ref{k1ek2}).
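Before moving on, the profiles (\ref{sol}) can be checked directly against the first-order system (\ref{eqbU}). The following SymPy sketch (not part of the original derivation; upper signs and $A_1=A_2=1$ are assumed, and the sample charge values are arbitrary) verifies both equations numerically:

```python
import sympy as sp

# Symbols: radius and the three independent charges (taken positive so
# that H1, H2 > 0 and b < 0 on the chosen branch).
r, p, q, q0 = sp.symbols('r p q q0', positive=True)

c  = q0 + 3*q**2/(4*p)            # = (3 q^2 + 4 q0 p)/(4 p)
k1 = sp.sqrt(2)*c                 # upper-sign choice in (k1ek2)
k2 = sp.sqrt(2)*p
H1 = 1 + k1/r
H2 = 1 + k2/r

b = -sp.sqrt(H1/H2)                          # dilaton profile
U = -sp.Rational(1, 4)*sp.log(H2**3*H1)      # e^U = (H1 H2^3)^(-1/4)

# First-order equations (eqbU), upper sign, written as residuals
res_b = sp.diff(b, r) - sp.exp(U)/(r**2*sp.sqrt(-2*b))*(c - b**2*p)
res_U = sp.diff(U, r) - sp.exp(U)/(r**2*(-2*b)**sp.Rational(3, 2))*(c + 3*b**2*p)

# Evaluate at an arbitrary sample point with exact rational charges
vals = {r: sp.Rational(17, 10), p: 2, q: 3, q0: sp.Rational(1, 2)}
res_b_num = float(res_b.subs(vals))
res_U_num = float(res_U.subs(vals))
print(abs(res_b_num) < 1e-12, abs(res_U_num) < 1e-12)   # → True True
```

Both residuals vanish, confirming that (\ref{sol}) solves the first-order system for generic charges, not only in the double-fixed case.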
\subsection{Solution of the $2^{nd}$ order equations}
In the case $S=T=U$ the structure of the $\cal N$ matrix (\ref{Ngen}) and of the field strengths
simplifies considerably. For the period matrices one simply obtains:
\begin{equation}
{\rm Re}{\cal N}=\left(\matrix{ 2\,{a^3} & -{a^2} & -{a^2} & -{a^2} \cr
-{a^2} & 0 & a & a \cr -{a^2} & a & 0 & a \cr
-{a^2} & a & a & 0 \cr }\right)\;,\;
{\rm Im}{\cal N}=\left(\matrix{ 3\,{a^2}\,b + {b^3} & -\left( a\,b \right) & -\left(
a\,b \right) & -\left( a\,b \right) \cr -\left( a\,b
\right) & b & 0 & 0 \cr -\left( a\,b \right)
& 0 & b & 0 \cr -\left( a\,b \right) & 0 & 0 & b \cr }\right)
\end{equation}
while the dependence of $\ell^{\Lambda}(r)$ on the quantized charges simplifies to:
\begin{eqnarray}
\ell^\Lambda (r)\, &=&\, \left(\matrix{{\frac{-3\,{a^2}\,p + 3\,a\,q + {q_0}}{{b^3}}}\cr
{\frac{-3\,{a^3}\,p + {b^2}\,q +
3\,{a^2}\,q + a\,\left( -2\,{b^2}\,p + {q_0} \right) }{{b^3}}}\cr
{\frac{-3\,{a^3}\,p + {b^2}\,q +
3\,{a^2}\,q + a\,\left( -2\,{b^2}\,p + {q_0} \right) }{{b^3}}}\cr
{\frac{-3\,{a^3}\,p + {a^4}\,{p^0} + {b^2}\,q +
3\,{a^2}\,q + a\,\left( -2\,{b^2}\,p + {q_0} \right) }{{b^3}}}}\right)
\label{qr}
\end{eqnarray}
Inserting (\ref{qr}) into the expressions (\ref{strenghtsans}) and substituting the result into
the eqs. of motion (\ref{scaleq}), one finds:
{\small
\begin{eqnarray}
\left(a^{\prime\prime}-2\frac{a^{\prime}b^{\prime}}{b}+2\frac{a^{\prime}}{r}\right)\,&=&\,
0\nonumber\\
\left(b^{\prime\prime}+2\frac{b^{\prime }}{r}+\frac{(a^{\prime 2}-b^{\prime2})}{b}\right)\,&=&\,
-\frac{{b^2}\,{e^{2\,\cal {U}}}
\,( {p^2} - \,\frac{(-3\,{a^2}\,p + 3\,a\,q + q_0)^2}{b^6} \, )
\, }{{r^4}}
\end{eqnarray}
}
The equation for $a$ is automatically fulfilled by our solution (\ref{sol}). The equation for $b$
is fulfilled as well and both sides are equal to:
\begin{eqnarray}
{\frac{\left( {k_2} \,-\,{k_1} \right) \,
{e^{4\,\cal {U}}}\,\left( {k_1} + {k_2} +
{\frac{2\,{k_1}\,{k_2}}{r}}
\right) }{2\,b\,{r^4}}} \nonumber
\end{eqnarray}
If $\left( {k_2} - {k_1} \right)=0$ both sides are separately equal to $0$
which corresponds to the double fixed solution already found in \cite{kal1}.
Let us now consider Einstein's equations. From equations (\ref{2eqeinformern}) we obtain, in our
simpler case, the following:
\begin{eqnarray}
{\cal U}^{\prime\prime}+\frac{2}{r}{\cal U}^\prime\,&=&\,({\cal U}^\prime)^2+\frac{3}{4b^2}
\left((b^\prime)^2+
(a^\prime)^2\right)\nonumber\\
{\cal U}^{\prime\prime}+\frac{2}{r}{\cal U}^\prime\,&=&\,-2e^{-2{\cal U}}S_{{\underline{0}}
{\underline{0}}}
\label{2eqeinn}
\end{eqnarray}
The first of eqs.~(\ref{2eqeinn}) is indeed fulfilled by our ans\"atze; both sides are equal to (with $H_i\equiv 1+k_i/r$):
\begin{eqnarray}
{\frac{3\,{{\left( k_2 - k_1 \right)
}^2}}{16\,r^4{{\left( H_1\right) }^2}\,
{{\left( H_2\right) }^2}}}
\end{eqnarray}
Again, both sides are separately zero in the double-extreme case
$\left( k_2 - k_1\right)=0$.
The second equation is fulfilled, too, by our ans\"atze, and again both sides are zero in the
double-extreme case.
Therefore we can conclude with the following:
\begin{stat}
Eq.~(\ref{sol}) yields a $\frac{1}{8}$ supersymmetry preserving solution of
$N=8$ supergravity that is {\bf not double extreme} and has a {\bf finite
entropy}:
\begin{eqnarray}
S_{BH}\,=\,2 \pi \left(q_0 p^3 + \frac{3}{4} \, p^2 \, q^2 \right)^{1/2}
\end{eqnarray}
depending on three of the $5$ truly independent charges.
\end{stat}
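As a consistency check (this step is implicit in the text), the entropy above is just $\pi$ times the square root of the quartic invariant computed in (\ref{fixeds3}):
\begin{equation}
S_{BH}\,=\,2 \pi \left(q_0 p^3 + \frac{3}{4} \, p^2 \, q^2 \right)^{1/2}\,=\,
\pi\,\sqrt{3q^2p^2+4q_0 p^3}\,=\,\pi\,\sqrt{I_4}
\end{equation}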
\section{Conclusions}
This paper aimed at the completion of a programme started almost two
years ago, namely the classification and construction of all
BPS saturated black--hole solutions of $N=8$ supergravity (that is,
either M--theory compactified on $T^7$ or, what amounts to the same
thing, type IIA string theory compactified on $T^6$). Such solutions
are of three kinds:
\begin{enumerate}
\item 1/2 supersymmetry preserving solutions
\item 1/4 supersymmetry preserving solutions
\item 1/8 supersymmetry preserving solutions
\end{enumerate}
The first two cases were completely worked out in \cite{mp2}. For the
third case there existed an in-depth study in \cite{mp1}, which had
established the minimal number of charges and fields having a
dynamical role in the solution, as well as the identification of the
generating solution with an $N=2$ STU model. The actual structure of this
STU black--hole solution, however, was still missing, and so was its
explicit embedding into the $N=8$ theory. The present paper, relying on
the techniques of Solvable Lie algebras, has filled this gap.
\par
In this paper we have written the explicit form of the rather involved
differential equations one needs to solve in order to obtain the
desired result. We also provided a solution of these equations which
is {\bf not double extreme} and has a {\bf finite entropy} depending on 3
charges. Finally, we have indicated how the fully general solution depending
on $5$ non-trivial charges can be worked out, leaving its actual
evaluation to a future publication. This $5$--parameter solution is presumably related
via $U$--duality transformations to those found in \cite{stu+}. In that case the generating
solutions were obtained within the supergravity theory describing the low energy limit of
toroidally compactified heterotic string theory, and therefore they carried only NS--NS
charges. Our group--theoretical embedding in the $N=8$ theory, on the other hand, allows
one to obtain quite directly the macroscopic description of pure Ramond--Ramond black--holes,
which can be interpreted microscopically in terms of D--branes only~\cite{bala1}.
\par
It should be stressed that the $1/8$ SUSY preserving case is the only
one where the entropy can be finite and where the horizon geometry is
\begin{equation}
AdS_2 \, \times \, S^2
\end{equation}
Correspondingly our results have a bearing on two interesting and related problems:
\begin{enumerate}
\item Assuming the validity of the $AdS/CFT$ correspondence \cite{adscft} we are
led to describe the $0$--brane degrees of freedom in terms of
superconformal quantum mechanics \cite{scqm}. Can the entropy we obtain as an
invariant of the U--duality group be described microscopically in this way?
\item Can we trace back the solvable Lie algebra gauge fixing we need
to single out the relevant degrees of freedom to suitable wrappings of higher dimensional
$p$--branes?
\end{enumerate}
These questions are open and we propose to focus on them.
\vskip 3pt
{\large {\bf Acknowledgments}}
M.B. and M.T. wish to thank each other's institutions and the Dipartimento di Fisica Teorica di
Torino for hospitality and V. Balasubramanian for a useful remark on the general structure of the
5 parameter generating black--hole solution. Special thanks go also to the
{\em Pub on the Pond} for help and inspiration during the hardest days of the work (usually at sunset).
\section{Introduction}
This paper argues that an interlingual representation must explicitly
represent some parts of the meaning of a situation as
\emph{possibilities} (or preferences), not as necessary or definite
components of meaning (or constraints). Possibilities enable the
analysis and generation of nuance, something required for faithful
translation. Furthermore, the representation of the meaning of words
is crucial, because it specifies which nuances words can convey in
which contexts.
In translation it is rare to find the exact word that faithfully and
directly translates a word of another language. Often, the target
language will provide many near-synonyms for a source language word
that differ (from the target word and among themselves) in nuances of
meaning. For example, the French \w{fournir} could be translated as
\w{provide, supply, furnish, offer, volunteer, afford, bring,} and so
on, which differ in fine-grained aspects of denotation, emphasis, and
style. (Figures~\ref{provide-note} and~\ref{offer-note} show some
of the distinctions.) But none of these options may carry the right
nuances to match those conveyed by \w{fournir} in the source text;
unwanted extra nuances may be conveyed, or a desired nuance may be
left out. Since an exact match is probably impossible in many
situations, faithful translation will require uncovering the nuances
conveyed by a source word and then determining how the nuances can be
conveyed in the target language by appropriate word choices in any
particular context. The inevitable mismatches that occur are one type
of \defn{translation mismatch}---differences of meaning, but not of
form, in the source and target language \cite{kameyama91}.\footnote{A
separate class of difference, \defn{translation divergence},
involves differences in the form of the source and target texts and
results from lexical gaps in the target language (in which no single
word lexicalizes the meaning of a source word), and from syntactic
and collocational constraints imposed by the source language.
`Paraphrasing' the source text in the target language is required in
order to preserve the meaning as much as possible
\cite{dorr94,stede96,elhadad97}. But even when paraphrasing,
choices between near-synonyms will have to be made, so, clearly,
translation mismatches and translation divergences are not
independent phenomena. Just as standard semantic content can be
incorporated or spread around in different ways, so can nuances of
meaning.}
\begin{figure}
\begin{description}\small\itemsep 0pt
\item[Provide] may suggest foresight and stress the idea of making
adequate preparation for something by stocking or shipping \ldots
\item[Supply] may stress the idea of replacing, of making up what is
needed, or of satisfying a deficiency.
\item[Furnish] may emphasize the idea of fitting something or someone
with whatever is necessary, or sometimes, normal or desirable.
\end{description}\vspace*{-2ex}
\caption{An abridged entry from \textit{Webster's New Dictionary of
Synonyms}~\cite{gove73}.}
\label{provide-note}
\end{figure}
\begin{figure}
\begin{description}\small\itemsep 0pt
\item[Offer] and \textbf{volunteer} may both refer to a generous
extending of aid, services, or a desired item. Those who
\textit{volunteer} agree by free choice rather than by submission to
selection or command.
\end{description}\vspace*{-2ex}
\caption{An abridged entry from \textit{Choose the Right
Word}~\cite{hayakawa94}.}
\label{offer-note}
\end{figure}
\section{Near-synonyms across languages}\label{near-syns}
This section examines how near-synonyms can differ within and across
languages. I will discuss some of the specific problems of lexical
representation in an interlingual MT system using examples drawn from
the French and English versions of the multi-lingual text provided for
this workshop.
To be as objective as possible, I'll rely on several dictionaries of
synonym discrimination including, for English, \newcite{gove73} and
\newcite{hayakawa94}, and for French, \newcite{bailly70},
\newcite{benac56}, and \newcite{batchelor93}. Unless otherwise stated,
the information on differences below comes from one of these reference
books.
\def\pair#1#2{\w{#1}\,::\,\w{#2}}
Notation: Below, `\pair{english}{french}' indicates that the pair of
words or expressions \w{english} and \w{french} correspond to one
another in the multi-lingual text (\ie they are apparent translations
of each other).
\subsubsection*{Fine-grained denotational mismatches}
If a word has near-synonyms, then they most likely differ in
fine-grained aspects of denotation. Consider the following pairs:
\begin{sentence}
\sent\pair{provides}{fournit}
\sent\pair{provided}{apportaient}
\sent\pair{provide}{offrir}
\sent\pair{brought}{fournissait}
\sent\label{charge}\pair{brought}{se chargeait}
\end{sentence}
These all share the basic meaning of giving or making available what is
needed by another, but each adds its own nuances. And these are
not the only words that the translator could have used: in English,
\w{furnish, supply, offer,} and \w{volunteer} would have been
possibilities; in French, \w{approvisionner, munir, pourvoir, nantir,
pr\'esenter,} among others, could have been chosen. The differences
are complex and often language-specific. Figures~\ref{provide-note}
and~\ref{offer-note} discuss some of the differences between the
English words, and figures~\ref{fourni-note} and~\ref{offrir-note}
those between the French words. And this is the problem for translation:
none of the words match up exactly, and the nuances they carry when
they are actually used are context-dependent. (Also notice that the
usage notes are vague in many cases, using words like `may' and
`id\'ee'.)
Consider this second example:
\begin{msentence}{began}
\msent\pair{began}{amorc\'e}
\msent\pair{began}{commen\c{c}a}
\msent\pair{started}{au d\'ebut}
\end{msentence}
\w{Amorcer} implies a beginning that prepares for something else;
there is no English word that carries the same nuance, but \w{begin}
appears to be the closest match. \w{Commencer} also translates as
\w{begin}, although \w{commencer} is a general word in French,
implying only that the thing begun has a duration. In English,
\w{begin} differs from \w{start} in that the latter can imply a
setting out from a certain point after inaction (in opposition to
\w{stop}).
More pairings that exhibit similar fine-grained denotational
differences include these:
\begin{msentence}{broaden}
\msent\pair{broaden}{\'elargir}
\msent\pair{expand}{\'etendre}
\msent\pair{increase}{accro\^{\i}tre}
\end{msentence}
\begin{msentence}{transformation}
\msent\pair{transformation}{passer}
\msent\pair{transition}{transition}
\end{msentence}
\begin{sentence}
\sent\pair{enable}{permettre}
\end{sentence}
\begin{sentence}
\sent\pair{opportunities}{perspectives}
\end{sentence}
\begin{sentence}
\sent\pair{assistance}{assistance}
\end{sentence}
\begin{figure}
\begin{description}\small\itemsep 0pt
\item[Fourni] a rapport \`a la quantit\'e et se dit de ce qui a
suffisamment ou en abondance le n\'ecessaire.
\item[Muni] et \textbf{arm\'e} sont relatifs \`a l'\'etat d'une chose rendue
forte ou capable, \w{muni}, plus g\'en\'erale, annon\c{c}ant un
secours pour faire quoi que ce soit.
\item[Pourvu] comporte une id\'ee de pr\'ecaution et se dit bien en
parlant des avantages naturels donn\'es par une sorte de finalit\'e
\ldots
\item[Nanti,] muni d'un gage donn\'e par un d\'ebiteur \`a son
cr\'eancier, par ext.\ muni par pr\'ecaution et, absolument, assez
enrichi pour ne pas craindre l'avenir.
\end{description}\vspace*{-2ex}
\caption{An abridged entry from \newcite{benac56}.}
\label{fourni-note}
\end{figure}
\begin{figure}
\begin{description}\small\itemsep 0pt
\item[Offrir,] c'est faire hommage d'une chose \`a quelqu'un, en
manifestant le d\'esir qu'il l'accepte, afin que l'offre devienne un
don.
\item[Pr\'esenter,] c'est offrir une chose que l'on tient \`a la main
ou qui est l\`a sous les yeux et dont la personne peut \`a
l'instant prendre possession.
\end{description}\vspace*{-2ex}
\caption{An abridged entry from \newcite{bailly70}.}
\label{offrir-note}
\end{figure}
There are two main problems in representing the meanings of these
words. First, although some of the nuances could be represented by
simple features, such as \g{foresight} or \g{generous}, most of them
cannot because they are complex and have an `internal' structure.
They are concepts that relate aspects of the situation. For example,
for \w{furnish}, \g{fitting someone with what is necessary} is not a
simple feature; it involves a concept of \g{fitting}, a patient (the
same patient that the overall situation has), a thing that is
provided, and the idea of the necessity of that thing to someone.
Thus, many nuances must be represented as fully-fledged concepts (or
instances thereof) in an interlingua.
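To make the first point concrete, the \w{furnish} nuance could be written out as a structured concept instance, for example as plain data in Python; the concept and role names below are merely illustrative, not a committed ontology:

```python
# The 'fitting someone with what is necessary' nuance of `furnish',
# encoded as a structured concept instance rather than a flat feature.
# Concept and role names are illustrative assumptions.
fitting_nuance = {
    "instance-of": "Fitting",
    "PATIENT": "person1",          # same patient as the overall situation
    "OBJECT": "thing-provided1",   # the thing that is provided
    "ATTRIBUTE": {                 # the necessity of that thing to someone
        "instance-of": "Necessity",
        "OF": "thing-provided1",
        "TO": "person1",
    },
}

# The internal cross-references are precisely what a simple feature
# such as `fitting-with-necessary' cannot express:
assert fitting_nuance["ATTRIBUTE"]["OF"] == fitting_nuance["OBJECT"]
assert fitting_nuance["ATTRIBUTE"]["TO"] == fitting_nuance["PATIENT"]
print("cross-references check out")
```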
Second, many of the nuances are merely suggested or implied, if they
are conveyed at all. That is, they are conveyed indirectly---the
reader has the license to decide that such a nuance was
unintended---and as such are not necessary conditions for the
definition of the words. This has ramifications for both the analysis
of the source text and the generation of the target text because one
has to determine how strongly a certain nuance is intended, if at all
(in the source), and then how it should be conveyed, if it can be, in
the target language. One should seek to translate indirect
expressions as such, and avoid making them direct. One must also
avoid choosing a target word that might convey an unwanted
implication. In any case, aspects of word meaning that are indirect
must be represented as such in the lexicon.
\subsubsection*{Coarse-grained denotational mismatches}
Sometimes the translator chooses a target word that is semantically
quite different from the source word, yet still conveys the same basic
idea. Considering pair~\ref{charge}, above: \w{bring} seems to mean
to carry as a contribution, and \w{se charger} to take responsibility
for. Perhaps there are no good equivalents in the opposite languages
for these terms, or alternatively, the words might have been chosen
because of syntactic or collocational preferences---they co-occur with
\pair{leadership}{l'administration}, which are not close translations
either.
In fact, the desire to use natural-sounding syntactic and
collocational structures is probably responsible for many of these
divergences. In another case, the pair \pair{factors}{raisons} occurs
perhaps because the translator did not want to literally
translate the expressions \pair{Many factors contributed to}{Parmi les
raisons de}. Such mismatches are outside the scope of this paper,
because they fall more into the area of translation divergences. (See
\newcite{smadja96} for research on translating collocations.)
\subsubsection*{Stylistic mismatches}
Words can also differ on many stylistic dimensions, but formality is
the most recognized dimension.\footnote{\newcite{hovy88a} suggests
others including force and floridity, and \newcite{dimarco93}
suggest concreteness or vividness. Actually, it seems that the
French text is more vivid---if a text on banking can be considered
vivid at all---than the English, using words such as \w{baptis\'ee,
\'eclatant, contagieux,} and \w{d\'emunis}.} Consider the
following pairs:
\begin{msentence}{plans}
\msent\pair{plans}{entend bien}
\msent\pair{plan}{envisagent de}
\end{msentence}
While the French words differ in formality (\w{entend bien} is
formal, and \w{envisagent de} is neutral), the same word was chosen in
English. Note that the other French words that could have been chosen
also differ in formality: \w{se proposent de} has intermediate
formality, and \w{comptent, ont l'intention,} and \w{projettent
de} are all neutral.
Similarly, in~\ref{began}, above, \w{amorcer} is more formal than
\w{commencer}. Considering the other near-synonyms: the English
\w{commence} and \w{initiate} are quite formal, as is the French
\w{initier}. \w{D\'ebuter} and \w{d\'emarrer} are informal, yet both
are usually translated by \w{begin}, a neutral word in English.
(Notice also that the French cognate of the formal English
\w{commence}, \w{commencer}, is neutral.)
Style, which can be conveyed by both the words and the structure of a
text, is best represented as a global property in an interlingual
representation. That way, it can influence all decisions that are
made. (It is probably not always necessary to preserve the style of
particular words across languages.)
A separate issue of style in this text is its use of technical or
domain-specific vocabulary. Consider the following terms used to
refer to the subject of the text:
\begin{msentence}{bank}
\msent\pair{institution}{institution}
\msent\pair{institution}{\'etablissement}
\msent\pair{institution}{association}
\msent\pair{joint venture}{association}
\msent\pair{programme}{association}
\msent\pair{bank}{\'etablissement}
\msent\pair{bank}{banque}
\end{msentence}
In French, it appears that \w{association} must be used to refer to
non-profit companies and \w{\'etablissement} or \w{banque} for their
regulated (for-profit) counterparts. In English \w{institution},
among other terms, is used for both. Consider also the following
pairs:
\begin{msentence}{capital}
\msent\pair{seed capital}{capital initial}
\msent\pair{working capital}{fonds de roulement}
\msent\pair{equity capital}{capital social}
\end{msentence}
\subsubsection*{Attitudinal mismatches}
Words also differ in the attitude that they express. For example, of
\pair{poor}{d\'emunis}, \w{poor} can express a derogatory attitude,
but \w{d\'emunis} (which can be translated as \w{impoverished})
probably expresses a neutral attitude. Consider also \pair{people of
indigenous background}{Indiens}. Attitudes must be included in the
interlingual representation of an expression, and they must refer
to the specific participant(s) about whom the speaker is expressing an
attitude.
\section{Representing near-synonyms}
Before I discuss the requirements of the interlingual representation,
I must first discuss how the knowledge of near-synonyms ought to be
modelled if we are to account for the complexities of word meaning in
an interlingua. In the view taken here, the lexicon is given the
central role as bridge between natural language and interlingua.
The conventional model of lexical knowledge, used in many
computational systems, is not suitable for representing the
fine-grained distinctions between near-synonyms \cite{hirst95}. In
the conventional model, knowledge of the world is represented by
ostensibly language-neutral concepts that are often organized as an
ontology. The denotation of a lexical item is represented as a
concept, or a configuration of concepts, and amounts to a direct
word-to-concept link. So except for polysemy and (absolute) synonymy,
there is no logical difference between a lexical item and a concept.
Therefore, words that are nearly synonymous have to be linked each to
their own slightly different concepts. The problem comes in trying to
represent these slightly different concepts and the relationships
between them. \newcite{hirst95} shows that one ends up with an
awkward proliferation of language-dependent concepts, contrary to the
interlingual function of the ontology. And this assumes we can even
build a representative taxonomy from a set of near-synonyms to begin
with.
Moreover, the denotation of a word is taken to embody the necessary
and sufficient conditions for defining the word. While this has been
convenient for text analysis and lexical choice, since a denotation
can be used as an applicability condition of the word, the model is
inadequate for representing the nuances of meaning that are conveyed
indirectly, which, clearly, are not necessary conditions.
\begin{figure*}
\begin{center}
\psfig{file=furnish-clusters.eps,scale=0.833}
\end{center}
\caption{The clustered model of lexical knowledge.}\label{cluster-model}
\end{figure*}
An alternative representation is suggested by the principle behind
Gove's~\shortcite{gove73} synonym usage notes. Words are grouped into
an entry if they have the same essential meaning, i.e., if they ``can
be defined in the same terms up to a certain point''~(p.\ 25a) and
differ only in terms of the minor ideas involved in their meanings. We
combine this principle with Saussure's paradigmatic view that ``each
of a set of synonyms \ldots\ has its particular value only because
they stand in contrast with one another''~\cite[p.\ 114]{saussure83}
and envision a representation in which the meaning of a word arises
out of a combination of its essential denotation (shared with other
words) and a set of explicit differences to its near-synonyms.
Thus, I propose a \defn{clustered model of lexical knowledge},
depicted in figure~\ref{cluster-model}. A cluster has two levels of
representation: a core concept and peripheral concepts. The
\defn{core concept} is a denotation as in the conventional model---a
configuration of concepts (that are defined in the ontology) that
functions as a necessary applicability condition (for choice)---but it
is shared by the near-synonyms in the cluster. In the figure, the
ontological concepts are shown as rectangles; in this case all three
clusters denote the concept of \textsc{making-available}. All of the
\defn{peripheral concepts} that the words may differ in denoting,
suggesting, or emphasizing are also represented as configurations of
concepts, but they are explicitly distinguished from the core concept
as indirect meanings that can be conveyed or not depending on the
context. In the figure, the differences between words (in a single
language) are shown as dashed lines; not all words need be
differentiated. Stylistic, attitudinal, and collocational factors are
also encoded in the cluster.
Each language has its own set of clusters. Corresponding clusters
(across languages) need not have the same peripheral concepts since
languages may differentiate their synonyms in entirely different
terms. Differences across languages are represented, for convenience,
by dashed lines between clusters, though these would not be used in
pure interlingual MT. Essentially, a cluster is a language-specific
\defn{formal usage note}, an idea originated by \newcite{dimarco93}
that \newcite{edmonds-thesis} is formalizing.
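A data-structure sketch of such a cluster may help fix ideas; this is an illustrative rendering only (the class and attribute names are mine, and the nuance inventory is condensed from figure~\ref{provide-note}):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Peripheral:
    """A peripheral concept a word may indirectly convey."""
    concept: str   # a configuration of ontological concepts (simplified to a name)
    how: str       # 'suggestion' | 'implication' | 'emphasis' | 'denotation'

@dataclass
class Cluster:
    """Core denotation shared by the near-synonyms, plus per-word differences."""
    core_concept: str
    words: Dict[str, List[Peripheral]] = field(default_factory=dict)

english_cluster = Cluster(
    core_concept="MakingAvailable",
    words={
        "provide": [Peripheral("Foreseeing", "suggestion"),
                    Peripheral("AdequatePreparation", "emphasis")],
        "supply":  [Peripheral("SatisfyingDeficiency", "emphasis")],
        "furnish": [Peripheral("FittingWithNecessary", "emphasis")],
    },
)

# All members share the core concept; only the peripheral concepts differ.
print(sorted(english_cluster.words))   # → ['furnish', 'provide', 'supply']
```

A corresponding French cluster would carry its own, possibly incommensurable, set of peripheral concepts, as the text explains.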
\section{Interlingual representation}
Crucially, an interlingual representation should not be tied to any
particular linguistic structure, whether lexical or syntactic.
Assuming that one has constructed an ontology or domain model (of
language-neutral concepts), an interlingual representation of a
situation is, for us, an instantiation of part of the domain
knowledge. Both \newcite{stede96} and \newcite{elhadad97} have
developed such formalisms for representing the input to natural
language generation applications (the former to multilingual
generation), but they are applicable to interlingual MT as well. The
formalisms allow their applications to paraphrase the same input in
many ways including realizing information at different syntactic
ranks and covering/incorporating the input in different ways. For
them, generation is a matter of satisfying two types of constraints:
(1) covering the whole input structure with a set of word denotations
(thereby choosing the words), and (2) building a well-formed syntactic
structure out of the words. But while their systems can provide many
options to choose from, they lack the complementary ability to
actually choose which is the most appropriate.
Now, finding the most appropriate translation of a word involves a
tradeoff between many possibly conflicting desires to express certain
nuances in certain ways, to establish the right style, to observe
collocational preferences, and to satisfy syntactic constraints. This
suggests that lexical choice is not a matter of satisfying constraints
(\ie of using the necessary applicability conditions of a word), but
rather of attempting to meet a large set of \defn{preferences}. Thus,
a distinction must be made between knowledge that should be treated as
preferences as opposed to constraints in the interlingual
representation. In the generation stage of MT, one attempts to choose the
near-synonym from a cluster (activated because of the constraints)
whose peripheral concepts best meet the most preferences.
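A toy version of this preference-matching step is sketched below; the scoring rule is my own illustration (a real system would weight preferences by strength and trade them off against style and collocation), and the nuance names are condensed from figures~\ref{provide-note} and~\ref{offer-note}:

```python
# Word -> nuances it can convey (illustrative names, not a real lexicon).
CLUSTER = {
    "provide": {"foresight", "adequate-preparation"},
    "supply":  {"satisfying-deficiency"},
    "furnish": {"fitting-with-necessary"},
    "offer":   {"generous-extending"},
}

def choose(preferences):
    """Pick the near-synonym conveying the most preferred nuances and
    the fewest unwanted ones (a deliberately crude scoring rule)."""
    def score(word):
        nuances = CLUSTER[word]
        return len(nuances & preferences) - len(nuances - preferences)
    return max(sorted(CLUSTER), key=score)

print(choose({"foresight"}))              # → provide
print(choose({"satisfying-deficiency"}))  # → supply
```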
Turning to the analysis stage of MT, since many nuances are expressed
indirectly and are influenced by the context, one cannot know for sure
whether they have been expressed unless one performs a very thorough
analysis. Indeed, it might not be possible for even a thorough
analysis to decide whether a nuance was expressed, or how indirectly
it was expressed, given the context-dependent nature of word meaning.
Thus, on the basis of the knowledge of what words can express, stored
in the clusters, the analysis stage would output an interlingual
representation that includes \defn{possibilities} of what was
expressed. The possibilities then become preferences during
generation.
\section{Examples}
Figures~\ref{ex-1}--\ref{ex-last} give examples of interlingual
representations for four segments of the text that involve some of the
words discussed in section~\ref{near-syns}. Since my focus is on word
meanings, I will not give complete representations of the expressions.
Also note that while I use specific ontological concepts in these
descriptions, this in no way implies that I claim these are the right
concepts to represent---in fact, some are quite crude. A good
ontology is crucial to MT, and I assume that such an ontology will in
due course be constructed.
I have used attribute-value structures, but any equivalent formalism
would do. Square brackets enclose recursive structures of
instantiations of ontological concepts. Names of instances are in
lowercase; concepts are capitalized; relations between instances are
in uppercase; and cross-reference is indicated by a digit in a square.
A whole interlingual representation is surrounded by brace brackets
and consists of exactly one specification of the situation and any
number of possibilities, attitudes, and stylistic preferences. The
`situation' encodes the information one might find in a traditional
interlingual representation---the definite portion of meaning to be
expressed. A `possibility' takes as a value a four-part structure of
(1) frequency (never, sometimes, or always), which represents the degree
of possibility; (2) strength (weak, medium, or strong), which represents
how strongly the nuance is conveyed; (3) type (emphasis, suggestion,
implication, or denotation), which represents how the nuance is conveyed;
and (4) an instance of a concept. The `style' and `attitude'
attributes should be self-explanatory. As for content, some of the
meanings were discussed in section~\ref{near-syns}, and the rest are
derived from the aforementioned dictionaries. Comments (labelled with
`\%') are included to indicate which words gave rise to which
possibilities.
\section{Conclusion}
This paper has motivated the need to represent possibilities (or
preferences) in addition to necessary components (or constraints) in
the interlingual representation of a situation. Possibilities are
required because words can convey a myriad of sometimes indirect
nuances of meaning depending on the context. Some examples of how one
could represent possibilities were given.
\section*{Acknowledgements}
For comments and advice, I thank Graeme Hirst. This work is
financially supported in part by the Natural Sciences and Engineering
Research Council of Canada.
\begin{figure*}
\begin{center}
\begin{avm}
\{situation
[provide1 \\
instance-of MakingAvailable \\
AGENT @{1} [accion-international \\
instance-of NonProfitOrganization] \\
OBJECT [assistance1 \\
instance-of Helping \\
ATTRIBUTE [technical1 \\
instance-of Technical]] \\
RECIPIENT @{2} [network \\
instance-of Network]] \\
possibility (frequency sometimes \\
type suggestion \\
concept [foresight1 \\
instance-of Foreseeing \\
AGENT @{1}]) \textit{\% from the word `provides'} \\
possibility (frequency sometimes \\
type emphasis \\
concept [prepare1 \\
instance-of Preparing \\
AGENT @{1} \\
ATTRIBUTE [adequate \\
instance-of Adequacy]]) \textit{\% from `provides'}\\
possibility (frequency always \\
type suggestion \\
concept [subordinate-status \\
instance-of Status \\
DEGREE [subordinate \\
instance-of Subordinate] \\
ATTRIBUTE-OF @{1} \\
RELATIVE-TO @{2}]) \textit{\% from `assistance'} \}
\end{avm}
\\[2ex]
``ACCION International \ldots\ provides technical assistance
to a network \ldots'' \\
``ACCION International \dots\ fournit une assistance technique \`a
un r\'eseau \ldots'' \\
\caption{Interlingual representation of the `equivalent' sentences shown above.
Includes four possibilities of what is expressed.}
\label{ex-1}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\begin{avm}
\{situation
[provide2 \\
instance-of MakingAvailable \\
AGENT @{1} [prodem-venture \\
instance-of NonProfitJointVenture] \\
RECIPIENT @{2} [workers \\
instance-of Worker] \\
OBJECT [credit-and-training \\
instance-of CreditAndTraining \\
AGENT-OF @{3} [broaden \\
instance-of Increasing \\
PATIENT @{4} [opportunity \\
instance-of Chance \\
POSSESSED-BY @{2} \\
REGARDING @{5} [employment \\
instance-of Employment]]]]] \\
possibility (frequency sometimes \\
type implication \\
concept [scope \\
instance-of Scope \\
MANNER-OF @{3}]) \textit{\% from the word `broaden'}\\
possibility (type implication \\
concept [desire \\
instance-of Desiring \\
AGENT @{2}\\
PATIENT @{5}]) \textit{\% from `opportunities'}\\
possibility (frequency sometimes \\
strength weak \\
type suggestion \\
concept [provoke \\
instance-of Provoking \\
AGENT @{4} \\
PATIENT @{2}]) \textit{\% from `opportunities'} \}
\end{avm}
\\[2ex]
``PRODEM \ldots\ provided credit and training to broaden employment
opportunities \ldots'' \\
``PRODEM \dots\ d'offrir \ldots\ des possibilit\'es de cr\'edit et de
formation pour \'elargir leurs perspectives d'emploi''
\caption{Another interlingual representation with possibilities of
what is expressed.}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\begin{avm}
\{ situation
@{1} [begin \\
instance-of Beginning \\
OBJECT [transition \\
instance-of StateChange] \\
TIME [year-1989 \\
instance-of Year]] \\
possibility (type implication \\
concept [prepare2 \\
instance-of Preparing \\
AGENT @{1}]) \textit{\% from `amorc\'ee'} \\
style (formality (level high)) \}
\end{avm}
\\[2ex]
``The transition \ldots\ began in 1989.'' \\
``La transition, amorc\'ee en 1989 \ldots''
\caption{Interlingual representation with a stylistic preference (for
high formality).}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\begin{avm}
\{situation
@{1} [workers \\
instance-of Worker \\
ATTRIBUTE [poor \\
instance-of Poor \\
DEGREE [high]] \\
ATTRIBUTE [self-employed \\
instance-of EmploymentStatus]] \\
attitude (type neutral \\
of @{1} ) \}
\end{avm}
\\[2ex]
``the very poor self-employed'' \\
``travailleurs ind\'ependants les plus d\'emunis''
\caption{Interlingual representation with an expressed attitude.}
\label{ex-last}
\end{center}
\end{figure*}
\bibliographystyle{acl}
\section{Cosmology, Stars and Life}
Prior to the discovery of the expansion of the Universe there was little
that cosmology could contribute to the question of extraterrestrial life
aside from probabilities and prejudices. After our discovery of the
expansion and evolution of the Universe the situation changed significantly.
The entire cosmic environment was recognised as undergoing steady change.
The history of the Universe took on the complexion of an unfolding drama in
many acts, with the formations of first atoms and molecules, then galaxies
and stars, and most recently, planets and life. The most important and
simplest feature of the overall change in the Universe that the expansion
produces is the rate at which it occurs. This is linked to the age of the
expanding universe and that of its constituents.
In the 1930s, the distinguished biologist JBS Haldane took an interest in
Milne's proposal \cite{milne} that there might exist two different
timescales governing the rates of change of physical processes in the
Universe: one, $t$, for 'atomic' changes and another, $\tau $, for
'gravitational changes' where $\tau =\ln (t/t_0)$ with $t_0$ constant.
Haldane explored how changing from one timescale to the other could alter
one's picture of when conditions in the Universe would become suitable for
the evolution of biochemical life \cite{hald}, \cite{BT}. In particular, he
argued that it would be possible for radioactive decays to occur with a
decay rate that was constant on the $t$ timescale but which grew in
proportion to $t$ when evaluated on the $\tau $ scale. The biochemical
processes associated with energy derived from the breakdown of adenosine
triphosphoric acid would yield energies which, while constant on the $t$
scale, would grow as $t^2$ on the $\tau $ scale. Thus there would be an
epoch of cosmic history on the $\tau $ scale before which life was
impossible but after which it would become increasingly likely. Milne's
theory subsequently fell into abeyance although the interest in gravitation
theories with a varying Newtonian 'constant' of gravitation led to detailed
scrutiny of the paleontological and biological consequences of such
hypothetical changes for the past history of the Earth \cite{BT}.
Ultimately, this led to the formulation of the collection of ideas now known
as the Anthropic Principles, \cite{cart1}, \cite{jb2}.
Another interface between the problem of the origin of life and cosmology
has been the perennial problem of dealing with finite probabilities in
situations where an infinite number of potential trials seem to be
available. For example, in a universe that is infinite in spatial volume (as
would be expected in the case of an expanding open universe with
non-compact topology), any event that has a finite probability of occurring
should occur not just once but infinitely often with probability one if the
spatial structure of the Universe is exhaustively random \cite{ellis}. In
particular, in an infinite universe we conclude that there should exist an
infinite number of sites where life has progressed to our stage of
development. In the case of the steady-state universe, it is possible to
apply this type of argument to the history of the universe as well as its
geography because the universe is assumed to be infinitely old. Every
past-directed world line should encounter a living civilisation.
Accordingly, it has been argued that the steady state universe makes the
awkward prediction that the universe should now be teeming with life along
every line of sight \cite{BT}.
The key ingredient that modern cosmology introduces into considerations of
biology is that of \textit{time}. The observable universe is expanding and
not in a steady state. The density and temperature are steadily falling as
the expansion proceeds. This means that the average ambient conditions in
the universe are linked to its age. Roughly, in all expanding universes,
dimensional analysis tells us that the density of matter, $\rho $, is
related to the age $t$ measured in comoving proper time and Newton's
gravitation constant, $G$, by means of a relation of the form
\begin{equation}
\rho \approx \frac 1{Gt^2} \label{rho}
\end{equation}
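As a rough numerical check of this dimensional relation, the sketch below (assuming a present age of about 13.8 Gyr, a value not taken from the text) gives a density within an order of magnitude of the observed mean density of the universe:

```python
G = 6.674e-11          # Newton's constant, m^3 kg^-1 s^-2
YR = 3.156e7           # seconds per year
t = 13.8e9 * YR        # an assumed present age, in seconds

rho = 1.0 / (G * t**2) # eq. (1): purely dimensional estimate
print(f"rho ~ {rho:.1e} kg/m^3")
```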
The expanding universe creates an interval of cosmic history during which
biochemical observers, like ourselves, can expect to be examining the
Universe. Chemical complexity requires basic atomic building blocks which
are heavier than the elements of hydrogen and helium which emerge from the
hot early stages of the universe. Heavier elements, like carbon, nitrogen,
and oxygen, are made in the stars, as a result of nuclear reactions that
take billions of years to complete. Then, they are dispersed through space
by supernovae after which they find their way into grains, planets, and
ultimately, into people. This process takes billions of years to complete
and allows the expansion to produce a universe that is billions of light
years in size. Thus we see why it is inevitable that the universe is seen to
be so large. A universe that is billions of years old and hence billions of
light years in size is a necessary pre-requisite for observers based upon
chemical complexity. Biochemists believe that chemical life of this sort,
and the form based upon carbon in particular, is likely to be the only sort
able to evolve spontaneously. Other forms of living complexity (for example
that being sought by means of silicon physics) almost certainly can exist
but it is being developed with carbon-based life-forms as a catalyst rather
than by spontaneous evolution.
The inevitability of universes that are big and old as habitats for life
also leads us to conclude that they must be rather cold on average because
significant expansion to large size reduces the average temperature
inversely in proportion to the size of the universe. They must also be
sparse, with a low average density of matter and large distances between
different stars and galaxies. This low temperature and density also ensures
that the sky is dark at night (the so called 'Olbers' Paradox' first noted
by Halley, \cite{harr}) because there is too little energy available in
space to provide significant apparent luminosity from all the stars. We
conclude that many aspects of our Universe which, superficially, appear
hostile to the evolution of life are necessary prerequisites for the
existence of any form of biological complexity in the Universe.
Life needs to evolve on a timescale that is intermediate between the typical
time scale that it takes for stars to reach a state of stable
hydrogen burning, the so-called main-sequence lifetime, and the timescale on
which stars exhaust their nuclear fuel and gravitationally collapse. This
timescale, $t_{*}$, is determined by a combination of fundamental constants
of Nature
\begin{equation}
t_{*}\approx \left( \frac{Gm_N^2}{hc}\right) ^{-1}\times \frac h{m_Nc^2}\
\approx 10^9\text{ }yrs \label{ms}
\end{equation}
where $m_N$ is the proton mass, $h$ is Planck's constant, and $c$ is the
velocity of light \cite{dic}, \cite{BT}.
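Plugging standard SI values of the constants into eq. (\ref{ms}) reproduces the quoted timescale to within an order of magnitude, as expected for a dimensional estimate (the sketch below makes no refinement for actual stellar structure):

```python
G   = 6.674e-11    # m^3 kg^-1 s^-2
h   = 6.626e-34    # J s
c   = 2.998e8      # m/s
m_N = 1.673e-27    # kg (proton mass)
YR  = 3.156e7      # seconds per year

# eq. (2): inverse gravitational fine-structure constant times
# the proton Compton time.
t_star = (G * m_N**2 / (h * c))**-1 * h / (m_N * c**2)
print(f"t_* ~ {t_star / YR:.1e} yr")
```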
In expanding universes of the Big Bang type the reciprocal of the observed
expansion rate of the universe, Hubble's constant $H_0\approx
70\,{\rm km\,s^{-1}\,Mpc^{-1}},$ is closely related to the expansion age of the
universe, $t_0$, by a relation of the form
\begin{equation}
t_0\approx \frac 2{3H_0} \label{H}
\end{equation}
The fact that the age $t_0$ $\approx 10^{10}yr$ deduced from observations of
$H_0$ in this way is a little larger than the main-sequence lifetime, $t_{*}$%
, is entirely natural in the Big Bang theory (that is, we observe a little
later than the time when the Sun formed). However, the now defunct steady
state theory, in which there is no relation between the age of the universe
(which is infinite) and the measured value of $H_0,$ would have had to
regard the closeness in value of $H_0^{-1}$ and $t_{*}$ as a complete
coincidence \cite{rees}.
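A quick numerical sketch of eq. (3) with the quoted value of Hubble's constant gives an age of order $10^{10}$ yr, a factor of a few above $t_{*}$:

```python
MPC = 3.086e22         # metres per megaparsec
YR  = 3.156e7          # seconds per year

H0 = 70e3 / MPC        # Hubble's constant in s^-1
t0 = 2.0 / (3.0 * H0)  # eq. (3)
t0_yr = t0 / YR
print(f"t_0 ~ {t0_yr:.1e} yr")
```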
\section{Biology and Stars: Is there a link?}
Evidently, in our solar system life first evolved quite soon after the
formation of a hospitable terrestrial environment. Suppose the typical time
that it takes for life to evolve is denoted by some timescale $t_{bio}$,
then from the evidence presented by the solar system, which is about
$4.6\times 10^9$ yrs old, it seems that
\[
t_{*}\approx t_{bio}
\]
At first sight we might assume that the microscopic biochemical processes
and local environmental conditions that combine to determine the magnitude
of $t_{bio}$ are \textit{independent} of the nuclear astrophysical and
gravitational processes that determine the typical stellar main sequence
lifetime $t_{ms}$. However, this assumption leads to the striking conclusion
that we should expect extraterrestrial forms of life to be exceptionally
rare \cite{cart}, \cite{BT}, \cite{les}. The argument, in its simplest form,
is as follows. If $t_{bio}$ and $t_{*}$ are independent then the time
that life takes to arise is random with respect to the stellar timescale
$t_{*}$. Thus it is most likely that either $t_{bio}\gg t_{*}$ or that
$t_{bio}\ll t_{*}$. Now if $t_{bio}\ll t_{*}$ we must ask why it
is that the first observed inhabited solar system (that is, us) has
$t_{bio}\approx t_{*}$. This would be extraordinarily unlikely. On
the other hand, if $t_{bio}\gg t_{*}$ then the first observed inhabited
solar system (us) is most likely to have $t_{bio}\approx t_{*}$ since
systems with $t_{bio}\gg t_{*}$ have yet to evolve. Thus we are a
rarity, one of the first living systems to arrive on the scene. Generally,
we are led to a conclusion, an extremely pessimistic one for the SETI
enterprise, that $t_{bio}\gg t_{*}$.
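The probabilistic step in this argument can be illustrated with a toy Monte Carlo: if $t_{bio}$ is drawn independently of $t_{*}$ from a broad prior (here an arbitrarily chosen log-uniform range, purely for illustration), landing within a factor of two of $t_{*}$ is improbable:

```python
import random
random.seed(0)

T_STAR = 1.0e9   # yr, fiducial stellar timescale
N = 100_000

# Illustrative independence prior: t_bio log-uniform over 10^6..10^14 yr.
hits = 0
for _ in range(N):
    t_bio = 10.0 ** random.uniform(6.0, 14.0)
    if 0.5 * T_STAR <= t_bio <= 2.0 * T_STAR:   # "t_bio ~ t_star"
        hits += 1
frac = hits / N
print(f"P(t_bio within a factor 2 of t_*) ~ {frac:.3f}")
```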
In order to escape from this conclusion we have to undermine one of the
assumptions underlying the argument that leads to it. For example, if we
suppose that $t_{bio}$ is not independent of $t_{*}$ then things look
different. If $t_{bio}/t_{*}$ is a rising function of $t_{*}$ then it is
actually likely that we will find $t_{bio}\approx t_{*}$. Livio \cite{liv}
has given a simple model of how it could be that $t_{bio}$ and $t_{*}$
are related by a relation of this general form. He takes a very
simple model of the evolution of a life-supporting planetary atmosphere like
the Earth's to have two key phases which lead to its oxygen content:
\textit{Phase 1}: Oxygen is released by the photodissociation of water
vapour. On Earth this took $2.4\times 10^9yr$ and led to an atmospheric $O_2$
build up to about $10^{-3}\ $of its present value.
\textit{Phase 2}: Oxygen and ozone levels grow to about $0.1$ of their
present levels. This is sufficient to shield the Earth's surface from lethal
levels of ultra-violet radiation in the 2000-3000 \AA\ band (note that
nucleic acid and protein absorption of ultra-violet radiation peaks in the
2600-2700 \AA\ and 2700-2900 \AA\ bands, respectively). On Earth this phase
took about $1.6\times 10^9yr$.
Now the length of Phase 1 might be expected to be inversely proportional to
the intensity of radiation in the wavelength interval 1000-2000 \AA , where
the key molecular levels for $H_2O$ absorption lie. Studies of stellar
evolution allow us to determine this time interval and provide a rough
numerical estimate of the resulting link between the biological evolution
time (assuming it to be determined closely by the photodissociation time)
and the main sequence stellar lifetime, with \cite{liv}
\[
\frac{t_{bio}}{t_{*}}\approx 0.4\left( \frac{t_{*}}{t_{sun}}\right) ^{1.7},
\]
where $t_{sun}$ is the age of the Sun.
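A quick tabulation of this scaling (treating it purely as the illustrative fit quoted above) shows that $t_{bio}$ overtakes $t_{*}$ only for stars somewhat longer-lived than the Sun:

```python
def tbio_over_tstar(tstar_over_tsun: float) -> float:
    """Livio's illustrative scaling: t_bio/t_* ~ 0.4 (t_*/t_sun)^1.7."""
    return 0.4 * tstar_over_tsun ** 1.7

# For a Sun-like star the model gives t_bio ~ 0.4 t_*; the ratio
# rises steeply with the stellar lifetime.
for r in (0.5, 1.0, 2.0, 4.0):
    print(f"t_*/t_sun = {r}:  t_bio/t_* ~ {tbio_over_tstar(r):.2f}")
```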
This model indicates a possible route to establishing a link between the
biochemical timescales for the evolution of life and the astrophysical
timescales that determine the time required to create an environment
supported by a stable hydrogen burning star. There are obvious weak links in
the argument. It provides only a necessary condition for life to evolve, not a
sufficient one. We know that there are many other events that need to occur
before life can evolve in a planetary system. We could imagine being able to
derive an expression for the probability of planet formation around a star.
This would involve many other factors which would determine the amount of
material available for the formation of solid planets with atmospheres at
distances which permit the presence of liquid water and stable surface
conditions. Unfortunately, we know that there were many 'accidents' of the
planetary formation process in the solar system which have subsequently
played a major role in the existence of long-lived stable conditions on
Earth, \cite{art}. For example, the presence of resonances between the
precession rates of rotating planets and the gravitational perturbations
they feel from all other bodies in their solar system can easily produce
chaotic evolution of the tilt of a planet's rotation axis with respect to
the orbital plane of the planets over times much shorter than the age of the
system \cite{tilt}, \cite{art}. The planet's surface temperature variations,
insolation levels, and sea levels are sensitive to this angle of tilt. It
determines the climatic differences between what we call 'the seasons'. In
the case of the Earth, the modest angle of tilt (approximately 23 degrees)
would have experienced this erratic evolution had it not been for the
presence of the Moon \cite{moon}, \cite{art}. The Moon is large enough for
its gravitational effects to dominate the resonances which occur between the
Earth's precessional rotation and the frequency of external gravitational
perturbations from the other planets. As a result the Earth's tilt wobbles
only by a fraction of a degree around $23^{\circ }$ over hundreds of
thousands of years. Enough perhaps to cause some climatic change, but not
catastrophic for the evolution of life.
This shows how the causal link between stellar lifetimes and biological
evolution times may be rather a minor factor in the chain of fortuitous
circumstances that must occur if habitable planets are to form and sustain
viable conditions for the evolution of life over long periods of time. The
problem remains to determine whether the other decisive astronomical factors
in planet formation are functionally linked to the surface conditions needed
for biochemical processes.
\section{Habitable Universes}
We know that several of the distinctive features of the large scale
structure of the visible universe play a role in meeting the conditions
needed for the evolution of biochemical complexity within it.
The first example is the proximity of the expansion dynamics to the
'critical' state which separates an ever-expanding future from one of
eventual contraction, to better than ten per cent. Universes that expanded
far faster than this would be unable to form galaxies and stars and hence
the building blocks of biochemistry would be absent. The rapid expansion
would prevent islands of material separating out from the global expansion
and becoming bound by their own self-gravitation. By contrast, if the
expansion rate were far below that characterising the critical rate then the
material in the universe would have condensed into dense structures and
black holes long before stars could form \cite{ch}, \cite{jb1}, \cite{BT},
\cite{jb}, \cite{rees2}.
The second example is that of the uniformity of the universe. The
non-uniformity level on the largest scales is very small, $\Delta \approx
10^{-5}.$ This is a measure of the average relative fluctuations in the
gravitational potential on all scales. If $\Delta $ were significantly
larger then galaxies would have rapidly degenerated into dense structures
within which planetary orbits would be disrupted by tidal forces and black
holes would form rapidly before life-supporting environments could be
established. If $\Delta $ were significantly smaller then the
non-uniformities in the density would be gravitationally too feeble to
collapse into galaxies and no stars would form. Again, the universe would be
bereft of the biochemical building blocks of life \cite{rees3}.
In recent years the most popular theory of the very early evolution of the
universe has provided a possible explanation as to why the universe expands
so close to the critical life-supporting divide and why the fluctuation
level has the value observed. This theory is called 'inflation'. It proposes
that during a short interval of time when the temperature was very high (say
$\sim 10^{25}K$), the expansion of the universe \textit{accelerated}. This
requires the material content of the universe to be temporarily dominated by
forms of matter which effectively antigravitate for that period of time \cite
{guth}. This requires their density $\rho $, and pressure, $p$, to satisfy
the inequality \cite{jb}
\begin{equation}
\rho +\frac{3p}{c^2}<0 \label{sec}
\end{equation}
The inflation is envisaged to end because the matter fields responsible
decay into other forms of matter, like radiation, which do not satisfy this
inequality. After this occurs the expansion resumes the state of
decelerating expansion that it possessed before its inflationary episode
began.
If inflation occurs it offers the possibility that the whole of the visible
part of the universe (roughly $15$ billion light years in extent today) has
expanded from a region that was small enough to be causally linked by light
signals at the very high temperatures and early times when inflation
occurred. If inflation does not occur then the visible universe would have
expanded from a region that is far larger than the distance that light can
circumnavigate at these early times and so its smoothness today is a
mystery. If inflation occurs it will transform the irreducible quantum
statistical fluctuations in space into distinctive patterns of fluctuations
in the microwave background radiation which future satellite observations
will be able to detect if they were of an intensity sufficient to have
produced the observed galaxies and clusters by the process of gravitational
instability.
As the inflationary universe scenario has been explored in greater depth it
has been found to possess a number of unexpected properties which, if they
are realised, would considerably increase the complexity of the global
cosmological problem and create new perspectives on the existence of life in
the universe \cite{linde}, \cite{vil}, \cite{jb}.
It is possible for inflation to occur in different ways in different places
in the early universe. The effect is rather like the random expansion of a
foam of bubbles. Some inflate considerably while others hardly inflate at
all. This is termed 'chaotic inflation'. Of course, we have to find
ourselves in one of the regions that underwent sufficient inflation so that
the expansion lasted for longer than $t_{*}$ and stars could produce
biological elements. In such a scenario the global structure of the Universe
is predicted to be highly inhomogeneous. Our observations of the microwave
background temperature structure will only be able to tell us whether the
region which expanded to encompass our visible part of the universe
underwent inflation in its past. An important aspect of this theory is that
for the first time it has provided us with a positive reason to expect that
the observable universe is not typical of the structure of the universe
beyond our visible horizon, 15 billion light years away.
It was subsequently discovered that under fairly general conditions
inflation can be self-reproducing. That is, quantum fluctuations within each
inflating bubble will necessarily create conditions for further inflation of
microscopic regions to occur. This process of 'eternal inflation' appears to
have no end and may not have had a beginning. Thus life will be possible
only in bubbles with properties which allow self-organised complexity to
evolve and persist.
It has been found that there is further scope for random variations in these
chaotic and eternal inflationary scenarios. In the standard picture we have
just sketched, properties like the expansion rate and temperature of each
inflated bubble can vary randomly from region to region. However, it is also
possible for the strengths and number of low-energy forces of Nature to
vary. It is even possible for the number of dimensions of space which have
expanded to large size to be different from region to region. We know that
we cannot produce the known varieties of organised biochemical complexity if
the strengths of forces change by relatively small amounts, or in dimensions
other than three because of the impossibility of creating chemical or
gravitational bound states, \cite{eh, whit, tang, BT, teg}.
The possibility of these random variations arises because inflation is ended
by the decay of some matter field satisfying (\ref{sec}). This corresponds
to the field evolving to a minimum in its self-interaction potential. If
that potential has a single minimum then the characteristic physics that
results from that ground state will be the same everywhere. But if the
potential has many minima (for example like a sine function) then each
minimum will have different low-energy physics and different parts of the
universe can emerge from inflation in different minima and with different
effective laws of interaction for elementary particles. In general, we
expect the symmetry breaking which chooses the minima in different regions
to be independent and random.
\section{Changing Constants}
Considerations like these, together with the light that superstring theories
have shed upon the origins of the constants of Nature, mean that we should
assess how narrowly defined the existing constants of Nature need to be in
order to permit biochemical complexity to exist in the Universe \cite{BT},
\cite{carr}. For example, if we were to allow the ratio of the electron and
proton masses ($\beta =m_e/m_N$) and the fine structure constant $\alpha $
to change their values (assuming no other aspect of physics is changed
by this assumption -- which is clearly going to be false!) then the allowed
variations are very constraining. Increase $\beta $ too much and there can
be no ordered molecular structures because the small value of $\beta $
ensures that electrons occupy well-defined positions in the Coulomb field
created by the protons in the nucleus; if $\beta $ exceeds about $5\times
10^{-3}\alpha ^2$ then there would be no stars; if modern grand unified
gauge theories are correct then $\alpha $ must lie in the narrow range
between about $1/180$ and $1/85$ in order that protons not decay too rapidly
and a fundamental unification of non-gravitational forces can occur. If,
instead, we consider the allowed variations in the strength of the strong
nuclear force, $\alpha _s$, and $\alpha $ then roughly $\alpha _s<0.3\alpha
^{1/2}$ is required for the stability of biologically useful elements like
carbon. If we increase $\alpha _s$ by 4\% there is disaster because the
helium-2 isotope can exist (it just fails to be bound by about $70$ keV in
practice) and allows very fast direct proton + proton $\rightarrow $
helium-2 fusion. Stars would rapidly exhaust their fuel and collapse to
degenerate states or black holes. In contrast, if $\alpha _s$ were decreased
by about 10\% then the deuterium nucleus would cease to be bound and the
nuclear astrophysical pathways to the build up of biological elements would
be blocked. Again, the conclusion is that there is a rather small region of
parameter space in which the basic building blocks of chemical complexity
can exist.
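The quoted windows can be compared with the observed values. The sketch below checks only the fine-structure-constant range and the smallness of $\beta$; the bounds involving $\alpha_s$ rest on detailed nuclear binding-energy calculations that are not reproduced here:

```python
alpha = 1 / 137.036   # fine structure constant (observed)
m_e   = 9.109e-31     # electron mass, kg
m_N   = 1.673e-27     # proton mass, kg
beta  = m_e / m_N     # electron-to-proton mass ratio ~ 5.4e-4

# GUT-motivated window quoted in the text: 1/180 < alpha < 1/85.
in_gut_window = 1/180 < alpha < 1/85
print(f"alpha = {alpha:.5f}, inside (1/180, 1/85): {in_gut_window}")
print(f"beta  = {beta:.2e}  (small, as ordered molecular structure requires)")
```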
We should stress that conclusions regarding the fragility of living systems
with respect to variations in the values of the constants of Nature are not
fully rigorous in all cases. The values of the constants are simply assumed
to take different constant values to those that they are observed to take
and the consequences of changing them one at a time are examined. However,
if the different constants are fully linked together, as we might expect for
many of them if a unified Theory of Everything exists, then many of these
independent variations may not be possible. The consequences of a small
change in one constant would have further necessary ramifications for the
allowed values of other constants. One would expect the overall effect to be
more constraining on the allowed variations that are life-supporting. For
examples of such coupled variations in string theories see refs. \cite{marc,
barr, drin}.
These considerations are likely to have a bearing on interpreting any future
quantum cosmological theory. Such a theory, by its quantum nature, will make
probabilistic predictions. It will predict that it is 'most probable' that
we find the universe (or its forces and constants) to take particular
values. This presents an interpretational problem because it is not clear
that we should expect the most probable values to be the ones that we
observe. Since only a narrow range of the allowed values for, say, the fine
structure constant will permit observers to exist in the Universe, we must
find ourselves in the narrow range of possibilities which permit them, no
matter how improbable they may be \cite{misha}, \cite{jb}. This means that
in order to fully test the predictions of future Theories of Everything we
must have a thorough understanding of all the ways in which the possible
existence of observers is constrained by variations in the structure of the
universe, in the values of the constants that define its properties, and in
the number of dimensions it possesses.
\textbf{Acknowledgements}
I would like to thank Professor Elio Sindoni and Donatella Pifferetti for
their efficient organisation and kind hospitality in Varenna and Paul
Davies, Christian de Duve, Mario Livio, Martin Rees, and Max Tegmark for
helpful discussions on some of the topics discussed here. The author was
supported by a PPARC Senior Fellowship.
\section{Introduction}
At optical wavelengths the spectral energy distribution (SED) of
elliptical galaxies falls precipitously shortward of the 4000 \AA \ break,
hence the discovery of the UV-rising branch came as one of the
most unexpected results of the first UV satellites.
The general prejudice was that ellipticals contained exclusively old and
cool stellar populations, similar to metal rich galactic globulars,
albeit even more
metal rich. However, a few bright very hot stars were known to
be globular cluster members, and the presence of similar objects in
ellipticals had been suggested before the discovery of their
UV rising branch (Minkowski \& Osterbrock 1959; Hills
1971), but this knowledge was not widely spread.
The first UV spectra of ellipticals and bulges of spirals showed
instead that shortward of $\sim$ 2300 \AA~ the flux was increasing with
decreasing wavelength. A fundamental contribution to this subject is
due to Burstein et al. (1988), who collected and organized all the relevant
information from IUE observations. The three main
results of this study were: 1) all studied
ellipticals have detectable UV flux, 2) their (1550--V)
color spans a range of $\approx$ 2.5 mag, and 3) it is
strongly correlated with the \mbox{Mg$_2$~} index. Hence, the ratio of the UV
to the optical emission varies by $\sim$ an order of magnitude,
and this ratio appears to increase with average metallicity ($Z$),
assuming that the \mbox{Mg$_2$~} index traces $Z$.
The presence of young, massive (hence hot) stars in ellipticals was
soon entertained in order
to account for the observed UV radiation.
On the other hand, low mass stars do evolve through hot evolutionary
phases at the end of their life, and some UV radiation {\it is}
naturally expected to arise also from purely old stellar populations.
Greggio \& Renzini (1990, hereinafter GR90) explored a variety of
possible candidates
produced by the evolution of low mass stars, both single and in binary
systems. In particular, in GR90 we concentrated on the possibility of
producing hot stars in $\ifmmode{\mathrel{\mathpalette\@versim>}$ 10 Gyr old stellar systems, which could
account for the observed level of the UV-to-optical flux ratio, and
(qualitatively) of the correlation with metallicity.
We used simple energetic arguments, based on the fuel consumption
theorem (Renzini and Buzzoni 1986), to translate the observed
level of the UV rising branch
into specific requirements for the candidate stars
responsible for the UV emission. In this paper we first summarize the
main results of GR90, and then review and discuss both the
observational and the theoretical developments following 1990.
\section{The Theoretical Background}
The argument in GR90 goes as follows.
The ultraviolet SED as measured from IUE is consistent with the
Rayleigh-Jeans tail of a black
body curve of temperature higher than $\sim$25000 K. In order to estimate
the ratio of UV to total flux an assumption on the typical
temperature of the hot component is necessary. For example, for
NGC 4649, one of the most powerful ellipticals, $L_{\rm UV}/L_{\rm T}$
is in the range from 0.014 to 0.021, for $20000\ifmmode{\mathrel{\mathpalette\@versim<} \hbox{$T_{\rm eff}$} \ifmmode{\mathrel{\mathpalette\@versim<} 40000$ K.
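The sensitivity of the inferred UV-to-total ratio to the assumed temperature can be illustrated with a crude blackbody integral: the fraction of bolometric power emitted shortward of 2300 \AA\ rises steeply between 20000 K and 40000 K (a pure-Python sketch; real hot stars are of course not blackbodies):

```python
import math

def uv_fraction(T, lam_cut=2.3e-7, n=20000):
    """Fraction of a blackbody's power emitted shortward of lam_cut,
    by midpoint integration of the Planck function in x = h*nu/kT."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    x_cut = h * c / (lam_cut * k * T)      # photons above this x are 'UV'
    f = lambda x: x**3 / math.expm1(x)
    dx = (50.0 - x_cut) / n                # tail beyond x = 50 is negligible
    integral = sum(f(x_cut + (i + 0.5) * dx) for i in range(n)) * dx
    return integral / (math.pi**4 / 15)    # total of x^3/(e^x - 1) over (0, inf)

for T in (20000, 30000, 40000):
    print(f"T = {T} K: UV fraction ~ {uv_fraction(T):.2f}")
```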
For single age and single
metallicity populations of single stars (simple stellar populations, SSPs),
the contribution to the total
bolometric light of stars in the generic $j$-th post-MS evolutionary phase is
\begin{equation}
\frac {L_{\rm j}}{L_{\rm T}} \simeq 9.75 \times 10^{10} B(t) F_{\rm j}(\mbox{$M_{\rm TO}$})
\label{eq:fct0}
\end{equation}
where $B(t)$ is the specific evolutionary
flux, i.e. the number of stars evolving through the turnoff and beyond
per year per solar luminosity of the parent population
(in units of $\#\,{\rm yr}^{-1}\,\hbox{$L_\odot$}^{-1}$), and $F_{\rm j}$ is the amount of
fuel burned during the
phase $j$ by stars of initial mass equal to the turn-off mass (\mbox{$M_{\rm TO}$}) at
the age $t$ of the population. The fuel $F_{\rm j}$ is expressed in
\mbox{$M_{\odot}$} \ of equivalent hydrogen, i.e. $F_{\rm j} = \Delta M^{\rm
H}_{\rm j} + 0.1 \Delta M^{\rm He}_{\rm j}$, where
$\Delta M^{\rm H}_{\rm j}$ and $\Delta M^{\rm He}_{\rm j}$ are
respectively the mass of hydrogen and helium burned during the phase
$j$. For old SSPs $B(t) \simeq 2.2
\times 10^{-11}$ stars $L_\odot^{-1}$ yr$^{-1}$, almost independent of
composition and age (cf. Fig. 1 in Renzini 1998).
Going one step further, equation (1)
can be generalized to a collection of SSPs (composite stellar
population, CSP), e.g. one exhibiting a narrow range of ages but a wide
metallicity distribution, which GR90 adopted as a fair description
of the stellar content of ellipticals.
To this end, it suffices to substitute $F_{\rm
j}(\mbox{$M_{\rm TO}$})$ with the fuel averaged over the metallicity distribution
$< F_{\rm j}>_{Z}$. Thus, for old stellar populations with a
metallicity distribution the following simple relation holds:
\begin{equation}
L_{\rm j}/L_{\rm T}\simeq 2 \times <F_{\rm j}(\mbox{$M_{\rm TO}$})>_Z.
\label{eq:fct}
\end{equation}
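The numerical factor in equation (2) follows at once from inserting the
above value of the specific evolutionary flux into equation (1):
\begin{displaymath}
9.75 \times 10^{10}\, B(t) \simeq 9.75 \times 10^{10} \times 2.2
\times 10^{-11} \simeq 2.1 \simeq 2 .
\end{displaymath}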
This equation was the main tool used in GR90 to evaluate various kinds
of stars as potential contributors to the UV rising branch in ellipticals.
From the observational requirement $L_{\rm UV}/L_{\rm T}\simeq
0.02$, equation (2) immediately indicates that
the hot stars responsible for the UV emission from giant elliptical
galaxies should burn at least
$\sim$ 0.01 $M_\odot$ of equivalent hydrogen.
As already mentioned, the range in $(1550-V)$ colors spanned by ellipticals
is consistent with $L_{\rm UV}/L_{\rm T}$ varying by one
order of magnitude. Accordingly, the fuel burned by the candidate hot
stars, averaged over the metallicity distribution,
should increase from $\simeq$ 0.001 to $\simeq$ 0.01 \mbox{$M_{\odot}$} \ when the
average metallicity of the CSP increases.
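In terms of equation (2), this range simply corresponds to inverting the
relation for the observed span of UV-to-total ratios:
\begin{displaymath}
< F_{\rm UV}>_Z \simeq {1\over 2}\, {L_{\rm UV}\over L_{\rm T}} \simeq
{0.002 - 0.02 \over 2} \simeq 0.001 - 0.01 \;\mbox{$M_{\odot}$} .
\end{displaymath}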
GR90 listed four candidates, which are naturally produced in the advanced
stages of the evolution of single stars:
\par\noindent
(i) Post AGB stars (P--AGB), i.e. stars which leave the AGB after the first
thermal pulse, and reach $\hbox{$T_{\rm eff}$}\ifmmode{\mathrel{\mathpalette\@versim>} 100,000$ K before approaching the
white dwarf (WD) cooling sequence. This is certainly the most common
channel to reach the final WD stage. Typical luminosity $\sim 1000\hbox{$L_\odot$}$.
\par\noindent
(ii) Post Early AGB stars (P--EAGB), i.e. stars leaving the AGB before
the first thermal pulse, as most of their hydrogen envelope is lost
before. Typical luminosity $\ifmmode{\mathrel{\mathpalette\@versim<} 1000\hbox{$L_\odot$}$.
\par\noindent
(iii) Hot HB stars (HHB), sometimes also called Extreme HB (EHB)
stars, i.e. stars which spend the core helium
burning phase at high temperatures, and whose subsequent evolution
(shell helium burning phase) also takes place at high temperature
(AGB--manqu\'e). Typical luminosity $\sim 20\hbox{$L_\odot$}$ for HHB, few
$100\hbox{$L_\odot$}$ for AGB--manqu\'e.
\par\noindent
(iv) Post RGB stars (P--RGB), i.e. stars which fail helium ignition
because they lose their envelope while climbing the RGB.
Typical luminosity $\ifmmode{\mathrel{\mathpalette\@versim<} 1000\hbox{$L_\odot$}$.
The first three channels eventually produce carbon-oxygen WDs, the
last one helium WDs. Fig. 1 shows schematically the evolutionary paths
corresponding to channels (i), (ii) and (iii). Also shown are the
limiting magnitudes for objects in M31 reached with 1.5 h exposures
with WFPC2 in two Wood's filters. It appears evident how difficult it
is to detect individual HHB stars at this distance.
A stellar population of given metallicity will
certainly produce stars evolving through channels (ii), (iii) and (iv)
provided it becomes sufficiently old. However, the age at which this
happens cannot be accurately predicted.
A model star of given initial mass will evolve through
one of the four channels above depending on the wind mass loss rate
efficiency ($\eta$). For $\eta$ below a
critical value it will go through the P--AGB, and for larger and larger
values of $\eta$ it will switch to the P--EAGB, HHB+AGB--manqu\'e, and finally
to the P--RGB track. As illustrated in GR90,
this whole range of possibilities is
realized by varying the mass loss rate parameter $\eta$ by just $\sim$
a factor of two, i.e. by an amount vastly smaller
than any observational uncertainty affecting empirical RGB and AGB
mass loss rates. This leaves ample freedom to theoreticians.
\begin{figure}
\epsfysize=9cm
\hspace{2.0cm}\epsfbox{uvsaitf1.ps}
\caption[h]{Examples of evolutionary tracks for P--AGB, P--EAGB and
HHB plus AGB-manqu\'e objects. The slanted box indicates the HB locus.
Also shown are the limiting magnitudes
for objects at the distance of M31 (having adopted a true distance
modulus of 24.2) for 1.5 h exposures with WFPC2 (PC chip) on board
HST, in the two Wood's filters F160W and F218W.}
\end{figure}
All available evolutionary calculations indicated (and still indicate)
that P--AGB stars burn less than $\sim 0.003$ \mbox{$M_{\odot}$},
and P--RGB objects even less. This allowed GR90 to conclude that stars
in
these evolutionary phases could play only a minor role in the
production of the UV upturn.
More promising appeared the
P--EAGB, with $F_{\rm P-EAGB}$ up to $\simeq 0.025$ \mbox{$M_{\odot}$}, and the
HHB and their AGB--manqu\'e progeny,
burning in total $\sim$ 0.5 \mbox{$M_{\odot}$} \ of helium (equivalent to $\sim$ 0.05
\mbox{$M_{\odot}$} \ of hydrogen).
If all stars were to go through the HHB+AGB--manqu\'e channel $\sim$ 5
times more UV radiation would be produced than
needed to account for the $\sim 2\%$ of the total luminosity emitted
in the UV, as in the $(1550-V)$ bluest galaxies.
Thus, a relatively small fraction ($\approx$ 20 $\%$) of the
stellar population in
ellipticals needed to evolve through channel (iii) in order to fit the
observations.
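In terms of equation (2), denoting by $f$ the fraction of the
evolutionary flux channelled through HHB+AGB--manqu\'e evolution, the
required fraction follows immediately:
\begin{displaymath}
{L_{\rm UV}\over L_{\rm T}} \simeq 2\, f\, F_{\rm HHB} = 0.02
\quad \Longrightarrow \quad f \simeq {0.02 \over 2\times 0.05} = 0.2 .
\end{displaymath}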
The trend of the (1550--V) color increasing with $Z$ could
then be understood if a larger fraction of the population was evolving
through channel (iii) at higher average metallicity.
Such a trend
could be accomplished in either of two ways: 1) with a modest increase
with metallicity of the mass loss rate parameter $\eta$, or 2) with the
helium abundance ($Y$) increasing with metallicity ($Z$). Indeed,
at fixed age and metallicity, a larger $Y$ corresponds to
a smaller envelope mass for the star evolving along the RGB, so that it is
easier to produce objects (ii) to (iv). Moreover, higher helium {\it
per se} favors higher effective temperatures, e.g. during the HB phase
(cf. Sweigart \& Gross 1976).
In essence, which
of the 4 channels is realized depends on how the mass loss
rate and $Y$ scale with $Z$, both
parameters $\eta(Z)$ and \mbox{$\Delta Y/\Delta Z$}\ being poorly known observationally.
At the same time,
hosting stellar populations with a metallicity spread, ellipticals
should be inhabited by all four kinds of objects, though in different
proportions.
The main conclusions in GR90 can be summarized as follows:
\par\noindent
(i) P--AGB stars, the hot low mass objects necessarily present in
ellipticals, do not provide enough UV flux to account for the level of
the UV rising branch in the most powerful galaxies.
\par\noindent
(ii) The UV upturn in old stellar populations could be accounted for
only by the presence of P--EAGB and HHB stars, with up to $\sim 20\%$
of all evolving stars venturing through these channels.
\par\noindent
(iii) The production of these stars at
high $Z$, as seemed implied by the $(1550-V)$ -- \mbox{Mg$_2$~} correlation, was
possible within the uncertainties affecting the empirical
determination of $\eta(Z)$ and \mbox{$\Delta Y/\Delta Z$}, which are inputs to
stellar evolution theory.
\par\noindent
(iv) Whatever mechanism is responsible for the production of these
stars (i.e. mass loss or \mbox{$\Delta Y/\Delta Z$}), one expects that all possible
candidates are present in ellipticals, though in different proportions.
\par\noindent
(v) If the hot stars responsible for the UV emission in the most
powerful ellipticals were P--EAGB and HHB objects, the UV rising
branch should fade away rapidly with increasing redshift, possibly
already disappearing at redshifts as low as
$z \ifmmode{\mathrel{\mathpalette\@versim>} 0.2$. This was a direct consequence of the sensitivity of
the effective temperature of HB stars to the envelope mass
at helium ignition.
GR90 paid less attention to the temperature distribution of the stars
evolving through the various channels, but pointed out the
overwhelming difficulty of predicting such a distribution, which from the
theoretical point of view depends on a number of arbitrary functions.
We felt indeed that detailed spectral synthesis modelling was not worth
the effort, while the only possible firm conclusions could be reached
with very simple arguments.
However, it was mentioned that the $2300$ \AA \ dip in the SED requires
a gap in the temperature distribution between HHB stars and the
remaining, cool HB stars. No explanation for the existence of this
gap was given in GR90. Moreover, if a continuity exists between HHB and P--RGB
stars, then one may expect HHB stars to extend all the way to the
helium main sequence, hence to fairly high effective temperatures
($\hbox{$T_{\rm eff}$} \sim 50,000$ K). In other words, this scenario would most naturally
produce a fairly wide temperature distribution of UV emitters.
It was also speculated that a sizable fraction of HHB and their
AGB--manqu\'e progeny could be helium stars: indeed, fairly
modest mass loss rates ($10^{-10}-10^{-9}\mbox{$M_{\odot}$}\,{\rm yr}^{-1}$)
suffice for these stars to lose their hydrogen envelope completely. This possibility was
predicted to be subject to observational test, as in this case the UV
upturn should have exhibited some Wolf-Rayet, WN-like features, such
as low or absent CIV and Lyman lines, and strong HeII and NV.
\section{UV Observations Beyond IUE}
In 1990 much of the available information on the UV upturn came from
IUE, and had been organized by Burstein et al. (1988). Later, most of the
observational novelties came from UIT, HUT, and HST. Direct UV imaging
became possible, as well as spectroscopy down to the Lyman continuum.
\subsection{UIT and HST Imaging of Nearby Spheroids}
UIT and HST imaging have definitely ruled out massive stars as the
origin of the UV upturn in M32, the bulge of M31, as well as in NGC~1399,
one of the most powerful UV ellipticals (O'Connell et
al. 1992; King et al. 1992; Bertola et al. 1995a; Cole et al. 1998).
Extremely blue, low mass stars have been directly imaged in the
bulge of M31 and in M32 by HST (King et al. 1992; Bertola et al. 1995a;
Brown et al. 1998). Although none of
these objects is a massive elliptical, they follow the $(1550-V)$ $-$ \mbox{Mg$_2$~}
correlation which characterizes all {\it quiescent} Es.
The King et al. and Bertola et al. data were taken with the pre-COSTAR FOC.
The first group obtained images of a central field in
M31 through a filter centered at $\lambda$ = 1750 \AA,
resolving more than 100 objects. Based on their measure of the UV
magnitudes, and on an upper limit to their $(1750 - B)$
color, the authors concluded that the resolved stars in the F175W images
are P--AGB stars; and, by comparing with the IUE flux from the same area,
that these stars
account for only a fraction ($\sim 20 \%$) of the total flux at 1750 \AA.
Bertola et al. (1995a) obtained images of M31, M32 and
NGC~205 through the combined UV filters F150W and F130LP, resolving
81, 10 and 78 stars in the three objects respectively. The point-like
sources in NGC~205 were interpreted as young OB stars (as already
known from ground-based observations), while the
luminosity of the sources in M31 and M32 suggests that these are
P--AGB stars. By comparing with IUE data, the authors conclude that
the resolved P--AGB objects
can account for the total UV flux in the case of M32, while for M31 $\sim 50
\%$ of the UV flux comes from an unresolved background.
Therefore, both groups conclude that the UV light in the bulge of M31
likely comes from the combination of P--AGB stars and
fainter objects, which appear as an unresolved diffuse background on the HST
image. The different value derived by the two groups
for the contribution of the resolved
sources results from the different assumptions on
the sensitivity calibration of the pre-COSTAR
FOC, and on the uncertainties on the red leak through the UV
filters. At any rate, the conclusions from both groups confirmed the
prediction of GR90, i.e. the population of hot stars in old
stellar systems is composite, with contribution from P--AGB stars
bright enough to be individually detected in nearby spheroids, and
fainter sources such as P--EAGB and HHB+AGB--manqu\'e stars as faint as
$\sim 20\hbox{$L_\odot$}$ (hence below detection threshold).
Concerning M32, the low level of its UV upturn is in agreement with
the notion that the UV sources in
this galaxy should just be P--AGB stars, with very few stars -- if any
-- going through the (ii)-(iii) channels.
UV color gradients have also been detected in a few objects
(O'Connell et al. 1992). With the exception of M32, UV
colors become redder with increasing radius, probably tracing the \mbox{Mg$_2$~}
gradients.
The UV light appears diffuse, but more concentrated than
the visual light, in agreement with the expectation that it is
produced by the higher $Z$ stars, preferentially found in
the central regions (see also Brown et al. 1998).
Post-COSTAR FOC photometry of M32 and M31 has been recently
obtained by Brown et al. (1998) in two UV filters, namely F275W and
F175W. Again, many point-like sources are resolved in these images:
433 stars in M31 and 138 in M32 down to $m_{\rm F275W} = 25.5$ mag and
$m_{\rm F175W} = 24.5$ mag. Brown et al. (1998) show that the pre-COSTAR
FOC calibrations were likely in severe error, basically leading to an
overestimate of the intrinsic UV flux from the sources. As a result, the
resolved stars in Brown et al. are interpreted as AGB--manqu\'e objects, the
bright progeny of HHB stars. Again, the cumulative flux
from the resolved stars accounts for only a
fraction ($< 20 \%$) of the total IUE flux.
Although still affected by some uncertainty, in particular a possible
systematic underestimate of the flux in the F275W filter at the 0.3
mag level, the photometry by Brown et al. (1998)
is in reasonable agreement with the expectations from IUE and HUT
spectra.
The interpretation of the nature of the resolved sources in Brown et
al. (1998) rests essentially upon the characteristic of the luminosity
functions (LF) in the two UV filters. There appears to be an increasing
number of objects towards fainter magnitudes, a trend which is not
present in the P--AGB tracks of Vassiliadis \& Wood (1994) to which
the empirical LF was compared. These tracks peak instead at magnitudes
for which there are virtually no stars observed at all. Brown et al. conclude
that the bulk ($\ifmmode{\mathrel{\mathpalette\@versim>} 95\%$) of all stars do indeed go through the
P--AGB channel, but the mass of the P--AGB stars is in excess of
$0.63\mbox{$M_{\odot}$}$, for which the P--AGB timescale is so short as to be
consistent with the observed LF. However, such a high value of the
P--AGB mass would imply the existence of a prominent and very bright
population of AGB stars, for which there is no evidence in the bulge
of M31 (Renzini 1998). Moreover, this population would produce an
enormous amount of
energy ($\ifmmode{\mathrel{\mathpalette\@versim>} 0.15\mbox{$M_{\odot}$}$ of fuel would be burned on the AGB), hence
leading to optical--infrared colors at variance with the observed ones.
In our opinion, the Brown et al. LF demonstrates that the Vassiliadis
\& Wood P--AGB tracks are inapplicable to the case of the M31 bulge.
These tracks are based on the assumption that the transition from the
AGB to the planetary
nebula stage takes place on a nuclear time scale, being
controlled by the burning of the residual envelope mass. For the low
values of the P--AGB mass expected in an old stellar population
($\sim 0.55\mbox{$M_{\odot}$}$) this transition time is indeed very long ($\sim
10^5$
yr), and a sizable number of hot P--AGB stars would have been
observed,
lying along the nearly horizontal track in the upper part of Fig. 1.
However, one knows from galactic globular clusters that the transition is
instead much faster, taking place either on a mass-loss time scale, or
even more probably on a thermal time scale (Kaeufl, Renzini, \&
Stanghellini 1993, and references therein). We conclude that the
observed LF is likely due to the combination of the low-mass P--AGB
channel ($\ifmmode{\mathrel{\mathpalette\@versim>} 95\%$ of the total stellar evolutionary flux),
with the transition to high temperatures taking
place on a thermal time scale, plus P--EAGB and/or the
HHB+AGB--manqu\'e objects for the residual stellar evolutionary flux
($\ifmmode{\mathrel{\mathpalette\@versim<} 5\%$ of the total).
An apparently puzzling result of the Brown et
al. study is that the LFs of
the UV stars in M31 and M32 look similar in shape, in spite of the
strong difference in the level of the UV upturn in these two galaxies.
According to Brown et
al. the fraction of the total evolutionary flux that has to go through
the non P--AGB channel is 2 $\%$ and 0.5 $\%$ respectively in the
bulge of M31 and in M32. Hence, the similarity of the two LFs comes
from both being dominated by P--AGB stars that do {\it not} evolve on
a nuclear time scale through their transition from the AGB to high
temperatures.
In 1993 FOC imaging in four UV bands was obtained of the central
regions of the ellipticals NGC 1399 and NGC 4552 and of the bulge of
the NGC 2681 spiral (PI F. Bertola). The aim was to study the spatial
structure of the UV emission, checking for color gradients and for any
patchiness due to star formation. Neither patchiness nor strong color
gradients were found, but instead NGC 4552 and NGC
2681 showed a central, unresolved, point-like source (Bertola et al. 1995b).
To our surprise, we found that the point-like source in NGC 4552 had
changed its
brightness by a factor $\sim 7\pm 1.5$ in the F342W band, compared to
a previous FOC image taken in 1991: a central {\it flare} had been
discovered, possibly due to a red giant having been tidally stripped
by a massive central black hole (Renzini et al. 1995). Subsequent,
post-COSTAR FOC imaging and FOS spectroscopy confirmed that the
central source is a variable mini-AGN, possibly the faintest known
AGN, with broad (FWHM$\simeq 3000$ km s$^{-1}$) emission lines
(Cappellari et al. 1998). While trying to better understand the UV
upturn, we had serendipitously found yet another way of gathering
information on the central black hole demography in galaxies.
\subsection{HUT Spectroscopy of the UV Upturn}
Extending the observed spectral range down to the Lyman limit,
HUT has detected
the maximum in the UV spectral energy distribution (Ferguson et
al. 1991). To date, HUT data for 8 early type objects, including the
bulge of M31, have been collected (Ferguson $\&$ Davidsen 1993; Brown,
Ferguson $\&$ Davidsen 1995; Brown et al. 1997). In all studied objects,
the UV rising branch appears to have a turn-over at
$\lambda \approx$ 1000 \AA, which indicates that the bulk of the
radiation comes from moderately hot stars, with temperatures in the range
20000--25000 K (Brown et al. 1995).
Assuming that this is the characteristic spectral energy distribution
in the UV for giant ellipticals, like NGC~4649,
we obtain a better estimate for the ratio $L_{\rm UV}/L_{\rm T}$ of $\simeq
0.015$, which translates into $< F_{\rm j}>_{Z} \simeq 0.007$ for
the hot stars inhabiting the most powerful ellipticals.
Since the UV SED has a minimum
around 2300 \AA , a large contribution from stars with
intermediate temperatures, say $\approx$ 10000 K, is excluded.
Thus the bulk of
the UV emission comes from stars in a narrow range of temperatures.
This is an important constraint for the astrophysically plausible
evolutionary paths that can account for the UV rising branch
phenomenon. For
example, an even distribution of stars on the HB, as in the globular
cluster M3, corresponds to a spectrum flatter than observed in ellipticals,
due to the similar contribution from stars in the wide effective
temperature range (see e.g. Nesci $\&$ Perola 1985, Ferguson 1995).
Another important characteristic of the HUT spectra of early type
systems is the fact that they are composite:
when removing from the observed spectrum the theoretical contribution
of HHB stars, according to their complete evolution from the ZAHB to the
WD final stage, some residual flux at the shorter wavelengths
is left (Ferguson and Davidsen 1993; Brown et
al. 1997). The best fits to the SED of all the studied objects are
obtained with contributions from both HHB and P--AGB evolutionary tracks.
Based on their detailed modelling, Brown et al. (1997)
conclude that approximately 10 $\%$ of the total stellar population
should go through the HHB channel of evolution in NGC~1399, one of
the strongest UV emitters. This is in very good agreement with the
predictions of the fuel consumption theorem:
for a two-component CSP, with 90$\%$ of the stars
evolving through the P--AGB channel and the remaining 10$\%$ going
through the HHB evolution, the average fuel burned in the hot
evolutionary phases is:
\begin{equation}
< F_{\rm j}> = 0.9 \times F_{\rm P-AGB} + 0.1 \times F_{\rm H-HB}
\label{eq:fuelave}
\end{equation}
Adopting $F_{\rm P-AGB}$ = 0.003 \mbox{$M_{\odot}$} \ and $F_{\rm H-HB}$ = 0.05
\mbox{$M_{\odot}$} \ (see Sect. 2) one gets $< F_{\rm j}>$ = 0.0077, close
indeed to the estimate above.
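Converting $< F_{\rm j}>$ back into a luminosity ratio via equation (2)
closes the consistency check:
\begin{displaymath}
L_{\rm UV}/L_{\rm T} \simeq 2 \times 0.0077 \simeq 0.015 ,
\end{displaymath}
i.e. just the value derived above from the HUT spectral energy
distribution of giant ellipticals.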
All of the 8 objects in the Brown et al.\ sample seem to require some
contribution from HHB stars, in
different proportions. This is not unexpected, since their \mbox{Mg$_2$~}
indices range from 0.31 to 0.36, which puts them among the high average
metallicity objects.
Brown et al. (1995) also claimed that, within their
sample, a larger fraction of
stars evolve through the HHB channel in the stronger \mbox{Mg$_2$~} galaxies.
Modelling the UV spectra, they derive the stellar
evolutionary flux through the HHB plus AGB--manqu\'e track which is
needed to account for the observed UV emission. The ratio
between this evolutionary flux and that of the total stellar
population sampled by the HUT aperture appears to be nicely correlated
with the \mbox{Mg$_2$~} index of the parent galaxy.
The value of this ratio is model dependent, and somewhat
different figures are obtained in the more detailed computations
in Brown et al. (1997). Nevertheless, judging from their tables,
the general trend seems confirmed. Thus, it can be concluded
that galaxies with \mbox{Mg$_2$~} indices in excess of $\sim$ 0.3 very likely
host HHB stars in their nuclei. Only a small fraction of the
population needs to go through this extreme evolutionary channel to
account for the observed UV fluxes, varying from
$\sim 1$ to $\sim 10 \%$ for galaxies with \mbox{Mg$_2$~} ranging from 0.3 to
0.36. These results are clearly consistent with the expectations from
GR90.
The low S/N in the HUT spectra prevents accurate determinations of
abundances.
The absorption features seem however to indicate a low
metallicity in the atmospheres of the stars mainly
contributing to the UV emission: $Z_{\rm atm} = 0.1 Z_\odot$ (Brown et al. 1997).
This would imply that the UV rising branch phenomenon is not directly related
to the presence of high $Z$ stars, and the correlation between the
$(1550-V)$ color and \mbox{Mg$_2$~} index has to find a different
explanation (Park \& Lee 1997, see below) from that proposed in GR90.
Alternatively, $Z_{\rm atm}$
is not representative of the true metallicity of the hot stars in
ellipticals, as heavy elements may diffuse out of the
atmospheres of HHB stars (Brown et al. 1997).
A firmer result of the abundance analysis is the lack of CIV, which
would be expected if massive stars were responsible for the UV
emission. However, HUT spectra also show strong Ly$\beta$ and Ly$\gamma$
lines, indicating that the vast majority of HHB and AGB--manqu\'e stars do
{\it not} lose their hydrogen envelope, and do not become WN-like
helium stars. This means that average mass loss
rates during these phases must be lower than $\sim 10^{-10}$ and $\sim
10^{-9}\mbox{$M_{\odot}$}\,{\rm yr}^{-1}$, respectively.
\subsection{Attempts at Detecting the Evolution with redshift of the
UV Upturn}
One crucial prediction of GR90 concerned the evolution with redshift
of the UV upturn. If due to a combination of P-EAGB and
HHB+AGB--manqu\'e stars, the UV upturn should fade away already
at fairly low redshift, see for example the realization by
Barbaro, Bertola, \& Burstein (1992).
To check this prediction two Cycle-I HST
projects were implemented (PIs R. Windhorst and A. Renzini, respectively).
FOS spectra of $z=0.1-0.6$ elliptical galaxies selected for being
either weak radiogalaxies and/or cluster members were obtained. They
all showed a strong UV upturn, which at first sight appeared to be in clear
conflict with the prediction. However, it soon turned out that a
similarly strong UV upturn was also shown by the FOS spectrum of
an innocent G2V star, which certainly did not have it on its own.
While searching for the vanishing UV upturn of ellipticals the
red-scattered light problem of FOS was discovered instead (Windhorst et
al. 1994). This led to a novel approach to the calibration of FOS --
and later of ESO instruments for the VLT -- which makes more use of
first physical principles, and less recourse to least square fits
(Bushouse, Rosa, \& M\"uller 1995; Rosa 1997;
Ballester \& Rosa 1997).
\begin{figure}
\epsfysize=9cm
\hspace{2.0cm}\epsfbox{uvsaitf2.ps}
\caption[h]{The flux sampled by the F555W and F814W filters, and the 3
$\sigma$ upper limit to the flux through the F218W filter for the
brightest elliptical galaxy in the WFPC2 field of view in the $z=0.37$
cluster A895, as a function of the restframe wavelength. The SED of
NGC~4649 is also shown. The three
measured fluxes have been de-redshifted and normalized in such a way
that the F814W flux matches the continuum of NGC~4649.}
\end{figure}
The FOS scattered light problem had the effect of reducing
dramatically the S/N ratio for UV observations of high-$z$
ellipticals, and therefore attempts at detecting the vanishing upturn
effect moved to WFPC2, now equipped with Wood's filters.
A first attempt was made by a group including R. Gilmozzi, E. Held, R. Viezzer
and ourselves. WFPC2 images of the cluster Abell 895 ($z=0.37$) were
obtained through the Wood's filter F218W, and through the
F555W and F814W filters.
No detectable flux from cluster ellipticals was found in a coadded
10,000 second integration through the F218W filter.
The result is shown in Fig. 2 for the brightest cluster member
(reproduced from Renzini 1996), with the 3-$\sigma$ upper limit
falling disappointingly on top of the expected upturn if such
galaxies had the same rest frame $(1550-V)$ color of the local
elliptical NGC 4649.
Similarly disappointing was the result of an analogous experiment by
Buson et al. (1998), who
imaged the Abell 851 cluster ($z=0.41)$ through the F218W
and F702W filters (corresponding to $\sim$ the rest
frame $(1550-V)$ color). Again, the F218W data are not deep enough to
detect the cluster ellipticals even if they were to maintain the same
rest frame $(1550-V)$ color of the bluest ellipticals at zero redshift.
The failures of these attempts have to be ascribed to the low sensitivity
of WFPC2 when used in conjunction with Wood's filters (that indeed we
nicknamed wood's filters). An alternative approach has been recently
pursued by Brown et al. (1998) for a sample of ellipticals in the
$z=0.375$ cluster
A370. The combination of two long-pass filters of FOC (F130LP and F370LP)
has allowed them to isolate the contribution of the emission shortward of
$\sim$ 2700 \AA\ in the rest frame, hence sampling the UV upturn.
Surprisingly, no appreciable evolution compared to nearby ellipticals has been
detected, and Brown et al. conclude that this result excludes some models
of the upturn, while others are still acceptable provided that
the bulk of stars
in these galaxies formed at $z\ifmmode{\mathrel{\mathpalette\@versim>} 4$.
More observations are needed to study in detail the evolution of the
UV rising branch with increasing redshift, and to derive information
on the nature and the age of the UV bright stars. Since FOS and GHRS have been
removed from HST, STIS may now offer a better chance to detect
the vanishing UV upturn effect.
\section{Theoretical Modelling}
\subsection{Stellar Evolutionary Sequences}
In 1990 only a handful of P--AGB, P--EAGB, and HHB+AGB--manqu\'e
evolutionary sequences existed in the literature.
In the last decade a large effort has been devoted to
construct extensive sets of evolutionary tracks, primarily
with the aim of understanding the UV upturn
phenomenon. Hundreds of stellar evolutionary
sequences for low mass stars, with up to super solar metallicities and
helium abundances have been computed to isolate the range of
parameters which produce P--EAGB and HHB objects
(e.g. Castellani \& Tornamb\'e 1991; Horch, Demarque \& Pinsonneault 1992;
Castellani, Limongi \& Tornamb\'e 1992, 1995; Dorman, Rood and O'Connell
1993, hereinafter DRO93; Fagotto et al. 1994a,b,c; Yi, Demarque \& Kim 1997a).
Basically, the overall evolutionary picture
illustrated in GR90 has been confirmed. The average temperature at which
helium burning occurs is essentially controlled by the envelope mass of the
star at helium ignition: the lower the envelope mass, the hotter the
star. High values of the helium
abundance favor the
production of hot helium burners, and widen the range of envelope
masses for which this condition is satisfied.
According to DRO93, for
$Z$= \hbox{$Z_\odot$}\, stellar models with HB envelope masses \mbox{$M_{\rm env}^{\rm HB}$}\
$\ifmmode{\mathrel{\mathpalette\@versim<}$ 0.05 \mbox{$M_{\odot}$}\ evolve either as P--EAGB or as HHB
and AGB--manqu\'e. This critical value for
the envelope mass increases with the helium abundance
(see also Yi et al. 1997a), reaching values as high as 0.15
\mbox{$M_{\odot}$}\ for $(Y,Z)$ = (0.45,0.06). It follows that at high $(Y,Z)$ the
condition on
the envelope mass necessary to produce hot stars is more easily met.
Two interesting aspects of the evolution of low mass stars have been
disclosed, which were not considered in GR90: 1) for high $Z$ and
especially $Y$,
the evolution of HB stars presents a pronounced dichotomy, with some
stars starting the evolution on the red side of the HB, spending there
a fraction of their HB phase, and then
zipping to high temperatures where they burn the rest of their fuel
(Horch et al. 1992, but see also Sweigart \&
Gross 1976); and
2) for high assumed mass loss rates some
stars {\it peel off} the RGB and experience their core helium flash
at high effective temperatures (Castellani \& Castellani 1993;
D'Cruz et al. 1996).
The systematics with $Y$ and $Z$ of the post RGB evolution can be
appreciated in DRO93 and Yi et al. (1997a): up to $\approx$ \hbox{$Z_\odot$}
the dependence of the ZAHB \hbox{$T_{\rm eff}$}\ on \mbox{$M_{\rm env}^{\rm HB}$}\ is
relatively mild, and a flat distribution of envelope masses maps into an
even distribution in Log $T_{\rm eff}$ of stars on the HB.
Subsequent evolution remains confined in the red (in the blue)
for the more massive (less massive) HB objects, while the intermediate
HB (IHB) stars evolve along wide redward/blueward loops, thereby
providing intermediate temperature objects. Correspondingly,
for $Z \ifmmode{\mathrel{\mathpalette\@versim<}$ \hbox{$Z_\odot$} it is relatively difficult to produce the
2300 \AA\ minimum in the SED. However, as the metallicity
increases, the evolution of the HB objects tends to become more
skewed either towards the red, or to the blue. The effect is very
strong for large values of the \mbox{$\Delta Y/\Delta Z$}\ parameter. At ($Y,Z$)= (0.46,0.06)
the IHB objects virtually disappear, and the bulk of stars are
either redder than Log $T_{\rm eff} \simeq$ 3.7 or bluer than Log
$T_{\rm eff} \simeq$ 4.2 (see Figures 2 and 3 in DRO93). This
behavior may help produce the 2300 \AA\ minimum in the SED.
As illustrated in the introduction, when the mass loss parameter
$\eta$ is sufficiently large, the evolution
on the RGB is aborted before the core mass has grown enough to
trigger the central helium flash. In GR90 it was assumed that in this
case further evolution would just take the model star to the (helium)
white dwarf stage. Actual computations in this $\eta$
range show, instead, that there are
models which succeed in igniting helium after departing from the RGB,
either while crossing the HRD, or during the subsequent cooling phase
towards the WD stage (Castellani \& Castellani 1993).
Thus, there is a mass range (or a $\eta$ range) for which
the helium core flash occurs in the hot region of the HRD (hot helium
flashers, in D'Cruz et al. 1996 nomenclature).
Subsequent evolution of these objects (hereinafter HHeF) is the same
as for an HHB star with a very low envelope mass. If they exist, the HHeF
have the minimum envelope mass that HB stars can have, hence
naturally defining the hot end of the horizontal branch. Stars with more
massive envelopes will ignite helium at the tip of the RGB,
to appear on the ZAHB with lower \hbox{$T_{\rm eff}$}. Stars less
massive than the HHeF will fail helium ignition, thus becoming helium WDs.
At super-solar metallicities, HHeF are produced
for $\eta \ifmmode{\mathrel{\mathpalette\@versim>} 0.7$, which is $\sim$ 2 times larger
than the value which fits the properties of the HB in
globular clusters (D'Cruz et al. 1996). After helium ignition,
the HHeF are found on the
ZAHB at Log $T_{\rm eff}$ $\sim$ 4.4 for supersolar $Z$, which seems to
be the maximum possible temperature for HB models (Castellani,
Degl'Innocenti and Pulone 1995).
\subsection{Synthetic UV Upturns}
Inspired by the $(1550-V)$ -- \mbox{Mg$_2$~} correlation most authors have
explored under which conditions HHB and related stars are produced in
metal rich and super metal rich populations.
Dorman, O'Connell and Rood (1995, hereinafter DOR95) assume
mass loss on the RGB to be the principal actor in
originating the UV rising branch. Two main points
support this picture: (1) the presence of
extended HBs in the CMDs of globular clusters, which require a spread in
RGB mass loss of $\sim 30\%$ among stars within the same cluster; and (2)
the population of hot subdwarfs in the solar vicinity, which shows
that at $\sim$ solar metallicity HHB stars and their progeny can
occasionally be produced.
Thus, in the DOR95 view, the hot stars in ellipticals are
(moderately) old, $\sim$ solar
metallicity objects which happen to lose 2--3 times more mass than the
average Reimers rate. Questioning the real
significance of the \mbox{Mg$_2$~} index as a metallicity indicator,
DOR95 generically ascribe the origin of the correlation
$(1550-V)$ -- \mbox{Mg$_2$~} to either an age or a metallicity
spread among ellipticals.
However, no attempt is made to explore the effect of
a metallicity distribution on the UV SED of
ellipticals. If a large dispersion of the mass loss rate applies to
all $Z$ components, it seems difficult to avoid a sizeable contribution of
IHB stars in the UV spectral range.
In the Tantalo et al. (1996) models, HHB stars are produced at high $Z$
basically because a large \mbox{$\Delta Y/\Delta Z$}\ is assumed. These authors construct
self consistent
chemo-spectro-photometric models for the ellipticals, which thus
contain a metallicity distribution as computed from the chemical
evolution. The final integrated properties of the models depend not
only on the assumptions on the parameters governing the stellar
evolution (e.g. the mass loss), but also on those important for the chemical
evolution (i.e. star formation rate, IMF, stellar yield, depth of the
galactic potential, supernova feedback, galactic winds etc.).
The most massive galaxy models of Tantalo et al. present
a strong UV upturn developing as early as 5.6 Gyr.
This value of the age is extremely sensitive to the specific choice of
the parameters $\eta$ = 0.45 and \mbox{$\Delta Y/\Delta Z$} = 2.5, which cause the
SSP model at $Z = 0.1$ to produce HHB stars already at 5.6 Gyr.
On the other hand, all other SSP models in the Tantalo et al. grid
(with $Z < 0.1$) produce HHB stars only for ages in excess of $\sim$ 12 Gyr.
Therefore, the
UV properties of the composite model also depend critically on the precise
population of the highest metallicity bin (see also Yi, Demarque
\& Oemler 1998).
Finally, Yi, Demarque \& Oemler (1997b,1998) propose a model in which
all the relevant parameters are allowed to vary while searching for a
best fit. They finally favour
a positive (but moderate) \mbox{$\Delta Y/\Delta Z$} (=2-3), a modest trend of $\eta$ with
metallicity (ranging from $\sim$ 0.5 to $\sim$ 0.7--1 for $Z$ ranging
from 0.02 \hbox{$Z_\odot$} to $\ifmmode{\mathrel{\mathpalette\@versim>}$ \hbox{$Z_\odot$}); a mass dispersion of the HB of
$\sim$ 0.06 \mbox{$M_{\odot}$} (calibrated on GCs
properties); and a metallicity distribution, as suggested by chemical
evolution models. With these prescriptions, Yi et al. (1998) reach a
reasonable fit with the observations, at ages $\ifmmode{\mathrel{\mathpalette\@versim>}$ 10 Gyr.
The fit is better when adopting the $Z$ distribution
from infall models, as opposed to closed box models. Indeed, in the
latter case too much flux is produced in the mid-UV, due to the broad
distribution of HB temperatures of metal poor stars.
While in all the above attempts the UV emission arises from stars in
the high-$Z$ tail of the metallicity distribution, Lee (1994) and Park
and Lee (1997) maintain
that the UV flux
originates from the emission of metal poor stars. Considering that the
stellar populations in galaxies are characterized by a metallicity
distribution, they explore the possibility that the optical
light comes from the high $Z$ and the UV light comes from the
low $Z$ components.
The relatively low strength of absorption features in
the UV-rising branch mentioned in Section 3.2 is in agreement
with this picture.
The trend of increasing $(UV-V)$ color with increasing
\mbox{Mg$_2$~} (hence $< Z >$) would have an indirect origin, resulting from
the brighter ellipticals
(with stronger \mbox{Mg$_2$~} indices) being older than the fainter ones, as it
may be expected in some cosmological simulations.
However, in the Park and Lee model, ages as old as $\sim$20 Gyr are
needed to produce a UV output such as that of giant ellipticals, which looks
uncomfortably old. Another problem comes from
the strongest UV upturns being shown by galaxies
with \mbox{Mg$_2$~}$\ifmmode{\mathrel{\mathpalette\@versim>}$ 0.3, which cannot have a major metal poor component,
as the metallicity distribution should be trimmed below
Z$\approx$ 0.5 Z$_{\odot}$ (Greggio 1997).
Thus the low $Z$ tail may not be
present in the central regions of the most powerful ellipticals
(see also Chiosi, Vallenari \& Bressan 1997; Yi et al. 1998).
Moreover, as already noticed, it is more difficult for low-$Z$
populations to produce the 2300 \AA\ minimum in the SED, as the $T_{\rm
eff}$ of HB stars is a mild function of the envelope mass. Indeed,
the $(1500-2500)$ color of
the UV rising branch in ellipticals is systematically bluer than in
metal poor globular clusters (DOR95). This shows that the average
temperature of the hot stars in ellipticals is higher than the average
temperature of HB stars in GCs. Only high ($Y,Z$) models seem
to reach blue enough colors (DOR95, Yi et al. 1995, 1998).
Finally, we notice that two metal rich globular clusters in the
Galactic bulge have been found to host a sizable population of HHB
stars (Rich et al. 1997). This has shown that it is indeed possible to
produce numerous HHB stars at $Z \sim$ \hbox{$Z_\odot$}, although this
may be due to some yet unidentified dynamical process in these
particular clusters.
To summarize, most of the theoretical work has just shown
quantitatively, at a detailed level, that the options are
equivalent: hot stars are produced at high metallicities in old
stellar populations if \mbox{$\Delta Y/\Delta Z$}\ is large and/or if the mass loss rate
moderately increases with metallicity.
Which combination of the two effects is at work remains unclear. As we will
discuss later, high $Y$ in combination with high $Z$ could
be a necessary ingredient in order to avoid too much light in the
mid-UV from IHB stars.
\par\noindent
\section {Discussion and Conclusions}
In summary, the current observational evidence that needs to be
explained include:
\par\noindent
\begin{itemize}
\item In those ellipticals with the strongest UV upturns
$L_{\rm UV} /L_{\rm T} \simeq$ 0.015.
\par\noindent
\item The UV SED requires a small temperature range
for the hot stars in the nuclei of ellipticals, peaking at $\sim
25,000$ K.
\par\noindent
\item Among ellipticals with \mbox{Mg$_2$~} $\ifmmode{\mathrel{\mathpalette\@versim>}$ 0.3, the fraction
of hot evolved objects going through the HHB channel of
evolution varies sizeably, possibly ranging from $\sim 1\%$ to $\sim 10\%$
for \mbox{Mg$_2$~} increasing from 0.3 to 0.36.
\par\noindent
\item The shape of the LF of the UV bright stars is similar in M32 and
in the bulge of M31. In both galaxies resolved stars account for only
a small fraction of the UV flux.
\end{itemize}
\par
In this section we discuss the hints on the hot stars in
ellipticals from this observational evidence.
\subsection {On the hot stars in ellipticals and their origin}
Thanks to the HUT spectra reaching shorter wavelengths compared to
IUE, a smaller $\hbox{$L_{\rm UV}$}/\hbox{$L_{\rm T}$}$ is derived, hence a smaller average fuel consumption
$< F_{\rm UV}>_{Z}$ is required
for the hot stars in the most powerful ellipticals, compared
to GR90 estimates. However, the average fuel consumption remains much
larger than that provided by P--AGB stars, and the best candidate hot
stars remain P--EAGB and/or HHB+AGB--manqu\'e objects.
HHB stars and their progeny tend to be favored essentially because
their $F_{\rm UV}$ is larger than the required $< F_{\rm j}>_{Z}$,
so only a (small) fraction of the
stellar evolutionary flux has to go through a very hot helium burning
stage. Besides, P--EAGB stars are likely to be distributed over the
whole \hbox{$T_{\rm eff}$}\ range from $\sim$ 5000 to $\sim$ 70000 K (see tracks in
Castellani and Tornamb\'e 1991), thus providing too much flux both in the
mid-UV and in the most extreme UV spectral range.
If the bulk of hot stars in giant Es are HHB + AGB--manqu\'e,
some $\sim 10 \%$ of the evolving population in the
UV brightest ellipticals has to evolve through channel (iii) (see
Section 2). This
constrains the combination of the parameters $\eta$ and \mbox{$\Delta Y/\Delta Z$}, plus
all the parameters which play a role in determining the metallicity
distribution in the central regions of the most powerful ellipticals,
in particular its high metallicity tail.
Can we learn something on these parameters
from the UV SED?
As repeatedly noticed here, the observations require that the HHB
stars are characterized by a narrow range of \mbox{$M_{\rm env}^{\rm HB}$}. In this respect,
stellar populations of high metallicities, possibly coupled with large
helium abundances, are favored. This stems from an observational
argument, since the SEDs of galactic (low $Z$) globulars with extended HBs
tend to be flatter than those typical of giant ellipticals.
At the same $(1500-V)$ color (i.e. for the same average $F_{\rm UV}$),
the $(1500 - 2500)$ color of GCs are redder than those of giant
ellipticals, with only high $Z$ SSP models matching the $(1500
- 2500)$ color (DOR95, Yi et al. 1995, 1998).
There is also a theoretical argument in favor of
the high $Z$ hypothesis, based on the shape of the relation between
\mbox{$T_{\rm eff}^{\rm HB}$}\ and \mbox{$M_{\rm env}^{\rm HB}$}\ of HB stars. As \mbox{$M_{\rm env}^{\rm HB}$}\ decreases, \mbox{$T_{\rm eff}^{\rm HB}$}\ stays low until a
threshold value is reached, after which the relation
becomes extremely steep (cf. Fig. 3.1 in Renzini 1977).
As a consequence there is
a very narrow range of \mbox{$M_{\rm env}^{\rm HB}$} which corresponds to intermediate
effective temperatures.
Perhaps more importantly, at high metallicities HB stars appear to exhibit a
bimodal behavior, with most of the HB (as well as the subsequent
shell helium burning stage) being spent either at high or low \hbox{$T_{\rm eff}$},
virtually avoiding the intermediate regime (see tracks in DRO93). This
tendency is reinforced when high $Y$ combines
with high $Z$ (Yi et al. 1997b).
In conclusion, the presence of the 2300 \AA\ minimum and the relatively
steep slope of the UV SED in ellipticals speak in favor of high $Z$ and $Y$
HHB stars and their progeny.
At this point we notice that the strong \mbox{Mg$_2$~} index in the nuclei of
giant Es requires that the $Z$ distribution has a small (if any)
component at $Z \ifmmode{\mathrel{\mathpalette\@versim<} 0.5$ \hbox{$Z_\odot$} (Casuso et al. 1996, Greggio 1997). The
SED in the UV offers another argument in support of this picture (see
also Bressan, Chiosi \& Fagotto 1994; Tantalo et al. 1996).
Turning now to the question of producing HHB stars at $Z \ge $ \hbox{$Z_\odot$} in
less than a Hubble time, an enhancement of the mass loss rate
parameter over the value which fits the GCs properties ($\eta_{\rm
GC}$) seems difficult to avoid. At low \mbox{$\Delta Y/\Delta Z$}\ D'Cruz et al. (1996)
require an enhancement of a factor 2--3; at \mbox{$\Delta Y/\Delta Z$} =2--3 Yi et al.
require an enhancement of a factor $\sim 2$. It is interesting to
note that, due to these large values of $\eta$, a small mass loss
dispersion easily produces the hot helium flashers (see section
3). This class of objects naturally provides an upper limit to the
\hbox{$T_{\rm eff}$}\
distribution on the HB (Castellani \& Castellani 1993), offering an
elegant solution to the problem of why the SED of giant ellipticals
shows the turnover at $\lambda \simeq 1000$ \AA. Indeed, if the HB were
populated down to the helium MS, stars would be distributed all the
way up to \hbox{$T_{\rm eff}$} $\sim$ 50000 and beyond, hence producing a hard UV
spectrum shortward of Ly$\alpha$, which is not observed.
Thus, the hot stars in the nuclei of giant Es are likely to be objects in the
helium burning phase which happened to undergo a particularly heavy
mass loss while on the RGB. Their large $Z$ and $Y$ would produce
an evolution confined in a narrow range of effective temperatures.
The hot edge of this range would be populated by objects which (due to mass
loss) failed helium ignition on the RGB, but succeeded later during
the evolution towards the WD stage.
These stars would belong to
the high-$Z$ tail of the distribution in the GR90 picture, or,
alternatively, to the high mass loss tail of the
distribution of stars around $\sim$
\hbox{$Z_\odot$}\ (DOR95). In the first case, the mass loss parameter $\eta$ should
increase with the metallicity; in the second a large dispersion of $\eta$
at Z$\sim$ \hbox{$Z_\odot$} is needed.
The first option more naturally accounts for the $(1550-V)$ vs \mbox{Mg$_2$~}
correlation, which we are going to consider next.
Finally, we attach a great significance to the fact that the bulk of
the UV emission in M31
bulge and M32 comes from objects which are fainter than the detection
threshold with FOC. Indeed, this leaves little alternative to HHB
stars as the main UV producers (cf. Fig. 1).
\subsection {On the $(UV-V)$ -- \mbox{Mg$_2$~} correlation}
Among the various possibilities, the CSP in the nuclei of Es with
\mbox{Mg$_2$~}$\ifmmode{\mathrel{\mathpalette\@versim>}$ 0.3 can be modeled by a family of closed box
models, provided they are pre-enriched to $Z \sim$ \hbox{$Z_\odot$}\ (Greggio
1997). In these models, the metallicity distribution is $f(Z) \propto
\exp(-Z/y)$, with $Z$ varying between a minimum value $Z_{\rm m}$
($\sim$ 0.5 \hbox{$Z_\odot$}), and a maximum value $Z_{\rm M}$. Here $y$ is the
yield as
defined by Tinsley (1980). Since \mbox{Mg$_2$~} is measured in the optical,
where low $Z$ stars have more weight, its value is very
sensitive to $Z_{\rm m}$. The UV flux, instead, would be more
sensitive to $Z_{\rm M}$, if generated by stars in the high-$Z$ tail of the
distribution. Therefore, the $(UV-V)$ -- \mbox{Mg$_2$~} correlation requires
that $Z_{\rm m}$ and $Z_{\rm M}$ are well correlated, e.g. they both
increase with galaxy mass (luminosity), which seems plausible.
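The truncated closed-box distribution is easy to evaluate numerically. The sketch below (pure Python; the particular values $y = 1\,Z_\odot$, $Z_{\rm m} = 0.5\,Z_\odot$, $Z_{\rm M} = 3\,Z_\odot$ are illustrative assumptions, not values taken from Greggio 1997) normalizes $f(Z) \propto \exp(-Z/y)$ on $[Z_{\rm m}, Z_{\rm M}]$ and computes the mean metallicity it implies:

```python
import math

def closed_box_pdf(z, y, z_min, z_max):
    """Truncated closed-box metallicity distribution f(Z) ~ exp(-Z/y),
    normalized on [z_min, z_max]; Z and the yield y in solar units."""
    if not z_min <= z <= z_max:
        return 0.0
    norm = y * (math.exp(-z_min / y) - math.exp(-z_max / y))
    return math.exp(-z / y) / norm

def mean_z(y, z_min, z_max, n=100000):
    """Trapezoidal estimate of <Z> under the truncated distribution."""
    h = (z_max - z_min) / n
    f = [closed_box_pdf(z_min + i * h, y, z_min, z_max) * (z_min + i * h)
         for i in range(n + 1)]
    return h * (sum(f) - 0.5 * (f[0] + f[-1]))

# pre-enriched model: Z_m = 0.5 Z_sun, yield y = 1 Z_sun, cutoff Z_M = 3 Z_sun
print(round(mean_z(1.0, 0.5, 3.0), 3))  # -> 1.276
```

Raising $Z_{\rm m}$ and $Z_{\rm M}$ together shifts the whole distribution, which is the sense in which \mbox{Mg$_2$~} and the UV output can remain correlated.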
If HHB stars are produced only above a threshold metallicity at the
present age of the stellar populations in ellipticals, then
the $(UV-V)$ -- \mbox{Mg$_2$~} correlation can result from the metallicity
distribution shifting to higher and higher values in galaxies with higher
and higher \mbox{Mg$_2$~} \ (GR90).
As for the galaxies with \mbox{Mg$_2$~}$\ifmmode{\mathrel{\mathpalette\@versim<} 0.3$, there are very few
of them in the Burstein et al. sample, and they define a correlation
with a different slope. From the \mbox{Mg$_2$~} index one expects their
metallicity distribution to
be shifted to lower values, and thus it would be interesting to
know whether their UV
SED allows for a larger contribution from low $Z$ stars
(i.e. with intermediate temperatures, hence leading to flatter UV
upturns). To date this problem has not been quantitatively investigated.
\subsection{The Evolution with Redshift Holds the Key}
The detection of the redshift evolution of the UV upturn remains
perhaps the most attractive opportunity for the future. By detecting
the effect we could in fact catch two birds with one stone. If indeed
the UV upturn fades away at $z\simeq 0.3\pm0.1$, this will represent
the decisive test for the HHB+AGB--manqu\'e origin of the upturn in
$z\simeq 0$ ellipticals. Moreover, the empirical
determination of the derivative $d(UV-V)/dz$ (hence of $d(UV-V)/dt$)
for galaxies of given value of
the central velocity dispersion $\sigma_\circ$, could be used to set
constraints on the age dispersion among local ellipticals that would
possibly be much tighter than those set by either optical colors or
the
fundamental plane relations.
The approach would be the same that Bower, Lucey \& Ellis (1992) have
pioneered to set such constraints using the small dispersion about the
average $(U-V)-\sigma_\circ$ relation of local cluster ellipticals,
with one advantage. Indeed, $U-V$ evolves very slowly in old
populations, i.e., by 0.02-0.03 mag/Gyr, while e.g. $(1550-V)$ should
evolve 10, perhaps 20 times faster. In principle, rest frame $UV-V$
colors
could set $\sim 20$ times tighter constraints to age dispersions.
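The factor-of-20 gain is simple arithmetic: a fixed color scatter maps into an age dispersion of (scatter)/(fading rate). In the sketch below the 0.05 mag intrinsic color scatter is an illustrative assumption, while the slopes are those quoted above:

```python
scatter = 0.05             # mag, assumed intrinsic color scatter (illustrative)
slope_u_v = 0.025          # mag/Gyr for U-V in old populations
slope_1550_v = 20 * slope_u_v   # mag/Gyr for (1550-V), upper end of 10-20x

for label, slope in (("U-V", slope_u_v), ("1550-V", slope_1550_v)):
    age_dispersion = scatter / slope   # Gyr implied by the color scatter
    print(label, round(age_dispersion, 2), "Gyr")
```

The same 0.05 mag scatter thus allows a 2 Gyr age dispersion in $U-V$ but only 0.1 Gyr in $(1550-V)$.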
However, the time derivative of
$(1550-V)$ as determined from synthetic populations is extremely model
dependent, which makes its direct, empirical determination all the
more attractive. We speculate that extensive studies of the UV
upturn for cluster vs. field ellipticals up to $z\sim 0.5$ could
greatly help to tighten current constraints on the star formation
history of early-type galaxies.
\def\section{\@startsection{section}{1}{\z@}%
{-3.5ex plus -1ex minus -.5ex}{1.5ex plus.3ex}{\bf }}
\def\subsection{\@startsection{subsection}{1}{\z@}%
{-3.5ex plus-1ex minus-.5ex}{1.5ex plus.3ex}{\bf }}
\begin{document}
{\Large\bf
A numerical study \\ of wave-function and matrix-element
statistics \\ in the Anderson model of localization
}\vspace{.4cm}\newline{\bf
V. Uski$^1$, B. Mehlig$^2$, and R.A. R\"omer$^1$
}\vspace{.4cm}\newline\small
$^1$Institut f\"ur Physik, Technische Universit\"at, D-09107 Chemnitz,
Germany\\
$^2$Theoretical Physics, University of Oxford, 1 Keble Road, OX1 3NP, UK
\vspace{.2cm}\newline
Received 6 October 1998, revised 16 October 1998, accepted in final form
23 October 1998 by M. Schreiber
\vspace{.4cm}\newline\begin{minipage}[h]{\textwidth}\baselineskip=10pt
{\bf Abstract.}
We have calculated wave functions and matrix elements
of the dipole operator in the two- and three-dimensional Anderson
model of localization and have studied their statistical
properties in the limit of weak disorder.
In particular, we have considered two cases.
First, we have studied the fluctuations
as an external Aharonov-Bohm flux is varied.
Second, we have considered the influence
of incipient localization. In both cases,
the statistical properties of the eigenfunctions
are non-trivial, in that the
joint probability distribution function
of eigenvalues and eigenvectors no longer
factorizes. We report on detailed comparisons
with analytical results, obtained within
the non-linear sigma model and/or
the semiclassical approach.
\end{minipage}\vspace{.4cm} \newline {\bf Keywords:}
Disorder, semiclassical approximation, wave function statistics
\newline\vspace{.2cm} \normalsize
\section{Introduction}
Disordered quantum systems exhibit irregular fluctuations of eigenvalues,
wave functions and matrix elements.
The statistical properties of wave functions and matrix elements
are of particular interest.
Both are of direct
experimental relevance. Fluctuations
of wave-function amplitudes determine
the fluctuations of the conductance through
quantum dots. The effect
of an external perturbation is described
by matrix elements of the perturbing operator
in the eigenstates of the system.
In the metallic regime (which is characterized by a large conductance
$g \gg 1$), wave-function and matrix-element fluctuations are described
by random matrix theory (RMT) \cite{meh67,efe97}. In Dyson's ensembles
\cite{meh67} such fluctuations are particularly simple since
the joint probability distribution function
of eigenvector components and eigenvalues factorizes \cite{por65}
and the statistical properties of the eigenvectors
are determined by the invariance properties of
the random matrix ensembles. One finds that
the wave-function amplitudes are distributed according to
the Porter--Thomas distribution \cite{por65}.
Non-diagonal matrix elements of an observable
$\widehat{A}$ are
Gaussian distributed \cite{por65} around
zero with variance $\sigma_{\rm off}^2$.
Diagonal matrix elements are also Gaussian distributed,
with variance
$\sigma_{\rm diag}^2 = (2/\beta) \sigma_{\rm off}^2$,
where $\beta = 1$ in the Gaussian orthogonal ensemble (GOE)
of random matrices and $\beta = 2,4$ in the
Gaussian unitary (GUE) and symplectic ensembles.
The variance of non-diagonal matrix elements
is essentially given by a time integral
of a classical autocorrelation function
\cite{wilk87}
and does not depend on the symmetry properties.
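These variance relations are easy to check by Monte Carlo. The sketch below (pure Python, illustrative only; the matrix size and observable are made-up choices) draws pairs of orthonormal random vectors with rotation-invariant, GOE-like statistics and accumulates diagonal and off-diagonal matrix elements of a fixed traceless observable; the standard RMT result for the orthogonal case is that the diagonal variance is twice the off-diagonal one.

```python
import math
import random

random.seed(1)

def random_orthonormal_pair(n):
    """First two columns of a Haar-random orthogonal matrix
    (Gram-Schmidt applied to two independent Gaussian vectors)."""
    u = [random.gauss(0.0, 1.0) for _ in range(n)]
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    nu = math.sqrt(sum(x * x for x in u))
    u = [x / nu for x in u]
    p = sum(a * b for a, b in zip(u, v))
    v = [b - p * a for a, b in zip(u, v)]
    nv = math.sqrt(sum(x * x for x in v))
    return u, [x / nv for x in v]

n, trials = 100, 2000
a = [i - (n - 1) / 2.0 for i in range(n)]   # traceless, dipole-like observable
diag2 = off2 = 0.0
for _ in range(trials):
    u, v = random_orthonormal_pair(n)
    diag2 += sum(ai * ui * ui for ai, ui in zip(a, u)) ** 2
    off2 += sum(ai * ui * vi for ai, ui, vi in zip(a, u, v)) ** 2
ratio = diag2 / off2
print(round(ratio, 2))  # close to 2 in the orthogonal (GOE) case
```

In the unitary case the same experiment with complex Gaussian components would give a ratio close to 1.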
Of particular interest are those cases where
the fluctuations of eigenvalues and eigenvectors are no longer independent.
In the following we analyse two such situations. First,
we consider the effect of an Aharonov-Bohm flux which
breaks time-reversal invariance and
drives a transition from the GOE to the GUE.
The statistical properties of diagonal matrix
elements in the transition regime
between GOE and GUE were calculated
in \cite{meh98,tan94,kea98},
those of level velocities in \cite{tan94,bra94,usk98}.
The statistical
properties of wave functions
in the transition regime
were derived in \cite{fal94}.
Here we compare these predictions
with numerical results obtained
for the Anderson model.
Second, we study the influence of increased disorder
on the statistical properties of wave functions.
The question of how the distribution
of wave-function amplitudes deviates
from the RMT predictions at
smaller conductances $g$ has
recently been discussed in \cite{mir93}, within the
framework of the non-linear sigma model
\cite{fal95a,fyo95,fal95b}, and
using a direct optimal fluctuation
method \cite{smol97}.
Here we compare our numerical
results for distributions
of wave-function amplitudes
in the $d=3$ dimensional Anderson model
with the perturbative results
of \cite{fyo95}.
\section{The Anderson model of localization}
We consider the Anderson model of localization \cite{and58}
in $d=2$ and $d=3$ dimensions which is defined
by the tight-binding Hamiltonian on a
square or cubic lattice with $N$ sites
and unit lattice spacing
\begin{equation}\label{eq:H}
\widehat H= \sum_n|n\rangle\epsilon_n\langle n| + \sum_{n\neq
m}|n\rangle t_{nm} \langle m|\,,
\end{equation}
where $|n\rangle$ represents the Wannier state at site $n$.
The on-site potential $\epsilon_n$ is
assumed to be random and
taken to be uniformly distributed between $-W/2$ and $W/2$.
The hopping parameters $t_{nm}$ connect only nearest-neighbour
sites (and $t=1$).
In the presence of an Aharonov-Bohm flux $\varphi$,
the hopping parameters acquire an
additional phase. The flux $\varphi$ is measured in units of the flux quantum
$\varphi_0=hc/e$ and we define
$\phi = \varphi/\varphi_0$. The presence of
an Aharonov-Bohm flux $\phi$ breaks time-reversal invariance.
We have determined the eigenvalues $E_\alpha$
and the eigenfunctions $\psi_\alpha(\bf{r})$
using a modified Lanczos algorithm \cite{cul85}.
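As a minimal numerical illustration of the model (not the paper's actual computation, which uses a modified Lanczos algorithm on 2D and 3D lattices), the sketch below assembles the one-dimensional analogue of the Hamiltonian above with uniform disorder and estimates the top of the spectrum by shifted power iteration; the lattice size, disorder strength, and iteration counts are arbitrary illustrative choices.

```python
import random

random.seed(0)

def anderson_1d(n, w):
    """1D Anderson Hamiltonian as a dense matrix: uniform on-site disorder
    in [-W/2, W/2], nearest-neighbour hopping t = 1, open boundaries."""
    h = [[0.0] * n for _ in range(n)]
    for i in range(n):
        h[i][i] = random.uniform(-w / 2.0, w / 2.0)
        if i + 1 < n:
            h[i][i + 1] = h[i + 1][i] = 1.0
    return h

def top_eigenvalue(h, shift=3.0, steps=600):
    """Largest eigenvalue via shifted power iteration -- a toy stand-in
    for a Lanczos-type sparse eigensolver."""
    n = len(h)
    v = [1.0] * n
    e = 1.0
    for _ in range(steps):
        hv = [shift * v[i] + sum(h[i][j] * v[j] for j in range(n))
              for i in range(n)]
        e = max(abs(x) for x in hv)
        v = [x / e for x in hv]
    return e - shift

h = anderson_1d(40, w=1.0)
e_max = top_eigenvalue(h)
print(abs(e_max) <= 2.0 + 0.5)  # spectrum bound |E| <= 2t + W/2 -> True
```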
The statistical properties of
eigenvalues, eigenfunctions
and matrix elements \cite{note}
$A_{\alpha\beta} = \langle \psi_\alpha | \widehat A |\psi_\beta\rangle$
of an observable $\widehat A$ depend on the strength
of the disorder potential. In the metallic regime
(which is characterized by a large conductance $g$)
these fluctuations are described by RMT
on energy scales smaller
than the Thouless energy $E_{\rm D} = g \Delta$,
where $\Delta$ is the mean level spacing. This
is the regime we consider in Sec.\ 3, while
Sec.\ 4 deals with $g^{-1}$ corrections
to the distribution of wave-function
amplitudes.
\section{The GOE to GUE transition}
In this section,
we discuss numerical results
for the fluctuations of
matrix elements, level velocities and wave functions
in the transition region between GOE and GUE.
We have calculated the smoothed variances
\begin{equation}
C_{\mathrm v}(\epsilon,\phi) =
\langle |\widetilde{d}_{\mathrm v}(E,\phi;\epsilon)|^2\rangle_E
\hspace*{1cm}
C_{\mathrm m}(\epsilon,\phi) =
\langle |\widetilde{d}_{\mathrm m}(E,\phi;\epsilon)|^2\rangle_E
\end{equation}
where $\widetilde{d}_{\mathrm{v,m}}(E,\phi;\epsilon)$ are
the fluctuating parts of the densities
\begin{eqnarray}
d_{\mathrm v}(E,\phi;\epsilon)&=&\sum_\alpha
\frac{\partial E_\alpha}{\partial\phi}\,
\delta_\epsilon[E-E_\alpha(\phi)]\\
d_{\mathrm m}(E,\phi;\epsilon)&=&\sum_\alpha A_{\alpha\alpha}\,\,
\delta_\epsilon[E-E_\alpha(\phi)]
\end{eqnarray}
with $\delta_\epsilon(E)=
(\sqrt{2\pi}\epsilon)^{-1}\exp(-E^2/2\epsilon^2)$.
It is assumed that
$\langle A_{\alpha\alpha}\rangle =
\langle \partial E_\alpha/\partial\phi\rangle=0$.
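The broadening in Eqs. (3) and (4) amounts to smoothing a weighted spectrum with a Gaussian kernel of width $\epsilon$. A self-contained sketch (the toy spectrum and unit weights are invented for illustration) verifies that the broadened density integrates back to the total weight:

```python
import math

def smoothed_density(e, levels, weights, eps):
    """d(E) = sum_a w_a * delta_eps(E - E_a) with a normalized Gaussian
    delta_eps, as in the smoothed densities above."""
    norm = 1.0 / (math.sqrt(2.0 * math.pi) * eps)
    return sum(w * norm * math.exp(-(e - ea) ** 2 / (2.0 * eps ** 2))
               for ea, w in zip(levels, weights))

# toy spectrum with unit weights; integrating over E recovers the total weight
levels = [0.1 * k for k in range(20)]
weights = [1.0] * 20
h = 0.01
total = sum(smoothed_density(-2.0 + i * h, levels, weights, 0.15) * h
            for i in range(600))
print(round(total, 1))  # -> 20.0
```

In practice the weights would be the level velocities $\partial E_\alpha/\partial\phi$ or the diagonal matrix elements $A_{\alpha\alpha}$.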
$C_{\mathrm m}$ was calculated
exactly in \cite{meh98}.
Here
we compare with semiclassical
expressions for $C_{\mathrm m}$ \cite{meh98,kea98}
and $C_{\mathrm v}$ \cite{usk98}
obtained within the diagonal approximation
and valid for $ \Delta < \epsilon < g\Delta$ \cite{note2}
(see also \cite{bra94}).
In the case of $C_{\mathrm m}$ we considered the matrix elements of
the dipole operator $\widehat{A}=\hat{x}$.
\begin{figure}[t]
\centerline{\epsfysize=5.5cm\epsfbox{cv.W0170.eps}
\hfill
\epsfysize=5.5cm\epsfbox{cm.W0170.eps}
}
\caption{\label{fig:cv:cm}\protect\small Velocities
and matrix elements in the transition
regime between GOE and GUE for the $27\times 27$
Anderson model with $W=1.7$ (symbols),
compared to the corresponding
semiclassical expressions (lines).
The numerical values of $\epsilon$ are $0.158$
($\bigtriangledown$), $0.224$ ($\times$), $0.316$ ($\lhd$), $0.447$
($\circ$), $0.631$ ($\Box$), $0.891$
($\Diamond$), $1.26$ ($\bigtriangleup$), $1.78$ ($+$), $2.51$
($\ast$).}
\end{figure}
The numerical data were obtained
by averaging over 69 realisations of disorder in
the $27\times 27$ Anderson model with $W=1.7$
and over all states in the energy interval
$[-3.4,-1.9]$. The two-dimensional case was considered in order to be
able to obtain a good numerical accuracy at each flux value in a
tolerable computing time. In general we observe good agreement.
For a more detailed discussion of the results see \cite{usk98}.
In Fig.\ 2(a) we show corresponding results
for the distribution function
of wave-function amplitudes which is
defined as follows
\begin{figure}
\centerline{\epsfysize=5.5cm\epsfbox{phi.eps}
\hfill\epsfysize=5.5cm\epsfbox{co.eps}}
\caption{\label{fig:co}
\protect\small
Wave-function statistics in the $13\times 13 \times 13$
Anderson model. (a) GOE to GUE transition
of the distribution function
in the metallic regime.
The Porter-Thomas distributions
are shown as dashed lines.
The predictions of \protect\cite{fal94}
are shown as solid lines.
(b) Deviations $\Delta f(t)$
from the Porter--Thomas distribution
for the orthogonal case, and
several values of disorder
strength, $W=1,2,3,4$ and $5$. The results are
averaged over the energy interval $[-1.7,-1.4]$.
The dashed vertical lines denote the zeros of the first order correction term
in Eq.\ (\protect\ref{eq:co}). The solid
lines are fits according to Eq.\ (\protect\ref{eq:co}).
}
\end{figure}
\begin{equation}
f(t)=\Delta\left\langle\sum_\alpha\delta(t-|\psi_\alpha({\bf r})|^2N)\,
\delta(E-E_\alpha)\right\rangle\,.
\end{equation}
Within RMT one obtains for $f(t)$ in the limiting cases
of GOE and GUE
\begin{eqnarray}
f_{\mathrm PT}^{\rm GOE}(t)&=&{1\over \sqrt{2\pi t}}\exp(-t/2)\,,\\
f_{\mathrm PT}^{\rm GUE}(t)&=&\exp(-t)
\end{eqnarray}
(Porter--Thomas distributions \cite{por65}).
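Both limiting distributions follow from Gaussian-distributed eigenvector components, real for the GOE and complex for the GUE, normalized so that $\langle t \rangle = 1$. A quick Monte Carlo sanity check of the second moments, $\langle t^2 \rangle = 3$ (GOE, a $\chi^2_1$ variable) versus $\langle t^2 \rangle = 2$ (GUE, an exponential variable), in pure Python:

```python
import random

random.seed(2)

def sample_t_goe():
    # real Gaussian eigenvector component: t = x^2, <t> = 1 (chi^2_1)
    x = random.gauss(0.0, 1.0)
    return x * x

def sample_t_gue():
    # complex Gaussian component: t = |x + iy|^2, <t> = 1 (exponential)
    x = random.gauss(0.0, 0.5 ** 0.5)
    y = random.gauss(0.0, 0.5 ** 0.5)
    return x * x + y * y

n = 100000
m2_goe = sum(sample_t_goe() ** 2 for _ in range(n)) / n
m2_gue = sum(sample_t_gue() ** 2 for _ in range(n)) / n
print(round(m2_goe), round(m2_gue))  # Porter-Thomas: <t^2> = 3 (GOE), 2 (GUE)
```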
An expression in the transition
region between GOE and GUE was derived in \cite{fal94}.
We have computed $f(t)$ numerically for
several values of $\phi$ in the $13 \times 13 \times 13$
Anderson model at $W=0.5$ (in the metallic regime).
Our results are shown in Fig.\ 2(a), where
we have plotted $2\sqrt{t} f(t)$ versus $\sqrt{t}$.
The results are in good agreement with the analytical formulae
(6), (7) and those given in \cite{fal94}.
It was predicted in \cite{tan94} that
the distribution of velocities
ceases to be Gaussian in the transition
regime between GOE and GUE.
The deviations are small, however,
and we have not been able
to reduce the statistical errors
of our numerical results
to an extent that a meaningful comparison
with the results of \cite{tan94} becomes possible.
\section{Deviations from RMT}
Within the non-linear sigma model it is possible
to derive $g^{-1}$--corrections to RMT.
For distributions of wave-function
amplitudes this was done in \cite{fyo95},
where in the orthogonal case one obtains
\begin{equation}\label{eq:co}
f(t)=
f^{\rm GOE}_{\mathrm PT}(t)\left[1+a_dg^{-1}\left(3/2-3t+
t^2/2\right)+{\cal O}\left(g^{-2}\right)\right]\,.
\end{equation}
In the case discussed in Ref. \cite{fyo95},
$a_3 \sim L/l$ where $L$ is
the linear dimension of the system
and $l$ is the mean free path.
Eq. (\ref{eq:co}) is valid provided $t \ll \sqrt{g/a_d}$.
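The bracket in the correction term is a simple quadratic in $t$, with zeros at $t = 3 \pm \sqrt{6} \approx 0.55$ and $5.45$; these are the dashed vertical lines in Fig. 2(b). A short sketch makes the sign pattern of the deviation explicit (the value $a_d/g = 0.1$ is an arbitrary illustrative choice):

```python
import math

def delta_f(t, a_over_g):
    """First-order deviation from Porter-Thomas:
    f(t)/f_PT(t) - 1 = (a_d/g) * (3/2 - 3t + t^2/2)."""
    return a_over_g * (1.5 - 3.0 * t + 0.5 * t * t)

# zeros of the bracket: t^2/2 - 3t + 3/2 = 0  ->  t = 3 +/- sqrt(6)
zeros = (3.0 - math.sqrt(6.0), 3.0 + math.sqrt(6.0))
print(round(zeros[0], 3), round(zeros[1], 3))  # -> 0.551 5.449
# enhanced at small and large t, suppressed near the maximum of f_PT
assert delta_f(0.0, 0.1) > 0 > delta_f(3.0, 0.1) and delta_f(6.0, 0.1) > 0
```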
We have performed numerical simulations in the $d=3$ Anderson model,
using different values of disorder, and have
computed the distribution $f(t)$ by
averaging over 400 realizations of disorder. The deviations from
the Porter--Thomas distribution, $\Delta f(t)=f(t)/
f^{\rm GOE}_{\mathrm{PT}}(t)-1$,
are shown in Fig.~\ref{fig:co}(b).
In all cases, the deviations exhibit a
characteristic form: the probability of
finding small and large amplitudes
is enhanced, while the distribution
function is reduced near its maximum.
We find that $\Delta f(t)$
can be fitted using Eq.\ (\ref{eq:co}). At increasingly
large disorder, deviations occur at lower values of $t$.
In all cases, however, the zeroes of Eq.\ (\ref{eq:co})
are well reproduced.
Since at weak disorder $l\sim W^{-2}$, one might expect
$a_3/g \sim W^4$. In the present case, however, we
find $a_3/g \sim W^2$. In order to resolve this
discrepancy, more accurate numerical data at small
values of $W$ are needed. Corresponding results have been obtained
for the unitary case.
\section{Conclusions}
We have analyzed RMT fluctuations of matrix elements and
level velocities in the two-dimensional ($d=2$) Anderson model,
in the transition regime between GOE and GUE and have
found good agreement between the predictions of
RMT and our numerical results.
For the distribution of wave-function amplitudes
in $d=3$
we have studied deviations from RMT, in the
form of $g^{-1}$--corrections,
as suggested in \cite{fyo95}. Our numerical
results can be fitted by the expressions derived
in \cite{fyo95}, the dependence of
the fit parameter $a_3$ on $W$ however, differs
from what might be expected.
In this context it will be very interesting
to determine the tails of the distribution
functions and compare with
the predictions in \cite{fal95a,fal95b}
and \cite{smol97}.
\vspace{0.6cm}\newline{\small
Financial support by the DFG through
Sonderforschungsbereich 393 is gratefully acknowledged.
V.U. thanks the DAAD for the financial support.
}
\section*{0. Introduction.}
We consider the one-dimensional Schr\"odinger equation
$$
-\Psi''+q_n(x)\Psi=E\Psi
\eqno(0.1)
$$
with an $n$-cell (finite-periodic) potential $q_n(x)$, i.e.
$q_n(x)=\chi_n(x)q(x)$, where
$q(x)$ is a real-valued integrable periodic potential with period $a$,
$\chi_n(x)$ denotes the characteristic function of the interval $[0,na]$,
$n\in {\Bbb N}$. We study
\begin{enumerate}
\item the scattering problem on the whole line
\item the Sturm-Liouville problem on the interval $[0,na]$, i.e. the
spectral problem on $[0,na]$ with the boundary conditions
$$
\Psi(0)\cos\alpha-\Psi'(0)\sin\alpha=0
\eqno(0.2)
$$
$$
\Psi(na)\cos\beta-\Psi'(na)\sin\beta=0, \qquad \alpha\in \Bbb R,
~~~\beta\in\Bbb R,
$$
and
\item the spectral problem on $[0,na]$ with periodic $(0.3a)$ or
skew-periodic $(0.3b)$ boundary conditions
$$
\Psi(0)=\Psi(na), \qquad \Psi'(0)=\Psi'(na),
\eqno(0.3a)
$$
$$
\Psi(0)=-\Psi(na), \qquad \Psi'(0)=-\Psi'(na),
\eqno(0.3b)
$$
\end{enumerate}
We discuss relations between spectral data for these problems and spectral
data for the one-dimensional Schr\"odinger equation on the whole line with
the related periodic potential $q$.
In the present paper we obtain, in particular, the following estimates:
\begin{itemize}
\item
if $F_{sc}^{(n)}(\Omega)$ is the distribution function of discrete
spectrum for the scattering problem for $(0.1)$ on the whole line
(the number of eigenvalues in $\Omega \subset ]-\infty,0]$ for
this problem), then
$$
\Bigl|F_{sc}^{(n)}(]-\infty,E[)-[\pi^{-1}nap(E)]\Bigr|\le 1 \qquad\mbox
{for }E\le 0,
\eqno(0.4)
$$
\item
if $F^{(n)}(\Omega)$ is the distribution function of discrete spectrum
for $(0.1)$ on $[0,na]$ with $(0.2)$ (the number of eigenvalues in
$\Omega\subset\Bbb R$ for this problem), then
$$
\Bigl|F^{(n)}(]-\infty,E])-[\pi^{-1}nap(E)]\Bigr|\le 1 \qquad\mbox
{for }E\in\Bbb R,\ 0\le\alpha\le\beta\le\pi,\ \beta\ne 0,\ \alpha\ne\pi,
\eqno(0.5a)
$$
$$
\Bigl|F^{(n)}(]-\infty,E[)-[\pi^{-1}nap(E)]-1\Bigr|\le 1 \qquad\mbox
{for }E\in\Bbb R,\ \ 0<\beta<\alpha<\pi,
\eqno(0.5b)
$$
\item
if $F^{(n)}(\Omega)$ is the distribution function of discrete spectrum for
$(0.1)$ on $[0,na]$ with $(0.3a)$ or $(0.3b)$ (the sum of multiplicities
of eigenvalues in $\Omega\subset\Bbb R$ for this problem), then
$$
[\pi^{-1}nap(E)]\le F^{(n)}(]-\infty,E[) \le [\pi^{-1}nap(E)]+1
\qquad\mbox{for }E\in\Bbb R,
\eqno(0.6)
$$
where $p(E)$ is the real part of the global quasimomentum for the
related periodic potential $q(x), [r]$ is the integer part of $r\ge 0$.
\end{itemize}
To obtain $(0.4), (0.5)$ we use, in particular, the technique presented
in Chapter 8 of \cite{CL} and some arguments of \cite{JM}. The estimate
$(0.6)$ follows, actually, from well-known results presented in Chapter 21
of \cite{T}.
The estimates $(0.4) - (0.6)$ and additional estimates for the distribution
functions of discrete spectrum are given in Theorems 1, 2, Corollaries 1, 2,
and by the formulas $(2.22), (2.23), (2.29)-(2.32)$ in Section 2 of the
present paper.
As a corollary of $(0.5), (0.6)$ one can obtain the following formula of
\cite{Sh}:
$$
\lim_{n\to\infty}(na)^{-1} F^{(n)}(]-\infty,E])=\pi^{-1}p(E),
\quad E\in\Bbb R,
\eqno(0.7)
$$
where $F^{(n)}(\Omega)$ is the distribution function for $(0.1)$ on
$[0,na]$ with $(0.2)$ or $(0.3a)$ or $(0.3b)$, $p(E)$ is the real part
of the global
quasimomentum for the related periodic potential. The formula $(0.7)$
for the case of smooth potential is a particular case of results of
\cite{Sh} about density of states of multidimensional selfadjoint
elliptic operators with almost periodic coefficients. The formula $(0.7)$
(for the case of continuous potential) follows from a result of \cite{JM}
about density of states for the one-dimensional Schr\"odinger equation
with almost periodic potential. Note that the methods of \cite{Sh}
and \cite{JM} are very different.
Remark. Less precise estimates instead of (0.4) and (0.5) follow
directly from (0.6) and well-known results (see Theorems 1.1, 2.1, 3.1
of Chapter 8 of \cite{CL} and, for example, $\S 1$ of Chapter 1 of
\cite{NMPZ}) about zeros of eigenfunctions of the one-dimensional
Schr\"odinger operator. Probably, one can generalize such an approach to
the multidimensional case. Concerning the distribution function of discrete
spectrum for the multidimensional Schr\"odinger operator with a finite
periodic potential and periodic boundary conditions, see the proof of
Theorem XIII.101 of \cite{RS}. Concerning results about zeros of
eigenfunctions of multidimensional Schr\"odinger operator see \cite{Ku} and
subsequent references given there and also $\S 6$ of Chapter \Roman{six}
of \cite{CH}.
The transmission resonances for the scattering problem for $(0.1)$
on the whole line are also considered in the present paper.
An energy $E$ is a transmission resonance iff $E\in\Bbb R_+$ and the
reflection coefficients are equal to zero at this energy. The main
features of the transmission resonances for an $n$-cell scatterer
were discussed in \cite{SWM}, \cite{RRT}.
In the present paper (Proposition 1, the
formula $(2.25)$) we give the following additional results about the
transmission resonances: if $E\in\Bbb R_+$ is a double eigenvalue for
$(0.1)$ on $[0,na]$ with $(0.3a)$ or $(0.3b)$, then $E$ is a transmission
resonance, and all $n$-dependent transmission resonances have this origin;
there are no transmission resonances in the forbidden energy set for
the related periodic potential; if $q(x)\not\equiv 0$, then
$$
(na)^{-1}\Phi_{sc}^{(n)}(]0,E])-\pi^{-1}(p(E)-p(0))= O(n^{-1})
$$
as $n\to\infty$, where $\Phi_{sc}^{(n)}(\Omega)$
is the number of transmission resonances in $\Omega\subset\Bbb R_+$
for an $n$-cell scatterer, $p(E)$ is the real part of the global
quasimomentum for related periodic potential $q(x)$.
We consider also the one-dimensional Schr\"odinger equation
$$
-\psi^{\prime\prime} + q(x)\psi = E\psi,\ \ x\in{\Bbb R},
\eqno(0.8)
$$
with a potential consisting of $n$ not necessarily identical cells. More
precisely, we suppose that: ${\Bbb R}=\cup_{j=1}^nI_j$,
where $I_1=] -\infty, x_1]$, $I_j=[x_{j-1},x_j]$ for
$1<j<n$, $I_n=[x_{n-1},+\infty [$,
$-\infty < x_{j-1}<x_j< +\infty$ for $1<j<n$; $q(x)=\sum_{j=1}^nq_j(x)$,
where $q_j\in L^1({\Bbb R})$, $q_j={\bar q}_j$, $supp\,q_j\subseteq I_j$
for $1\le j\le n$ and, in addition, $(1+|x|)q_1(x)$ and
$(1+|x|)q_n(x)$ are also integrable on ${\Bbb R}$.
In the present paper (Theorem 3) we obtain, in particular, the following
estimate
$$|F(] -\infty, E[)-\sum_{j=1}^nF_j(] -\infty, E[)|\le n-1\ \ {\rm for}\ \
E\le 0, \eqno(0.9)$$
where $F(] -\infty, E[)$, $(F_j(] -\infty, E[)$, resp.) denotes the
distribution function of discrete spectrum for the scattering problem
for (0.8) (for the one-dimensional Schr\"odinger equation with the
potential $q_j$, resp.) on the whole line.
In addition, for $E=0$ we have the estimate (2.36) obtained earlier in
\cite{AKM2} as a development of results of \cite{K} and \cite{SV}.
Additional indications concerning preceding works are given in Section 2 of
the present paper. In connection with results discussed in the present
paper it is useful to see also the review given in \S 17 of
\cite{RSS} and \cite{KS} and the results given in \cite{ZV}.
\section*{1. Definitions, notations, assumptions and some known facts.}
We consider the one-dimensional Schr\"odinger equation
$$
-\frac{d^2}{dx^2}\Psi+q_n(x)\Psi=E\Psi, \quad x\in\Bbb R,
\eqno(1.1)
$$
where $q_n(x)$ is an $n$-cell potential, i.e.
$$
q_n(x)=\sum_{j=0}^{n-1} q_1(x-ja), \qquad a\in\Bbb R_+ ,
\eqno(1.2)
$$
$$
q_1\in L^1(\Bbb R), \qquad q_1=\bar q_1,\qquad\hbox{supp }q_1\subset[0,a] .
\eqno(1.3)
$$
{\bf First,} we consider the scattering problem for the equation $(1.1)$
on the
whole line: we consider wave functions describing scattering with
incident waves for positive energies and bound states for negative
energies. We recall some definitions and facts of the scattering theory
for the Schr\"odinger equation
$$
-\Psi''+v(x)\Psi=E\Psi
\eqno(1.4)
$$
where
$$
v\in L^1(\Bbb R), \quad v=\bar v, \qquad \int\limits_{\Bbb R}
(1+|x|)|v(x)|dx<\infty
\eqno(1.5)
$$
(see, for example, \cite{F}). Let an incident wave be described by
$e^{ikx}, k\in\Bbb R, k^2=E>0$. Then the scattering is described by the
wave function $\Psi^+(x,k)$ defined as a solution of $(1.4)$ such that
$$
\Psi^+(x,k)=e^{ikx}-\frac{\pi i}{|k|}e^{i|k||x|}f(k,|k|\frac{x}{|x|})+
{\it o}(1)\quad\hbox{as }|x|\to\infty
\eqno(1.6)
$$
for some $f(k,{\it l}), {\it l}\in\Bbb R, {\it l}^2=k^2$, which is the
scattering amplitude. The following formulas connect the scattering amplitude
$f$ and the scattering matrix ${\it S}(k)=
({\it s}_{ij}(k)), k\in\Bbb R_+$:
$$
\begin{array}{l}
{\it s}_{11}(k)=1-\pi ik^{-1}f(-k,-k), \quad {\it s}_{12}(k)=
-\pi ik^{-1}f(-k,k), \\
\mathstrut\\
{\it s}_{21}(k)=-\pi ik^{-1}f(k,-k), \quad {\it s}_{22}(k)=
1-\pi ik^{-1}f(k,k).
\end{array}
\eqno(1.7)
$$
The bound-state energies $E_j$ are defined as the discrete spectrum
and the bound states $\Psi_j(x)$ are defined as the related eigenfunctions
of the spectral problem $(1.4)$ in $L^2(\Bbb R)$. We recall that, under
assumption $(1.5)$,
$$
S(k),\ \ k\in{\Bbb R}_+, \ \hbox{ is unitary and }\ \
{\it s}_{11}(k)={\it s}_{22}(k),
\eqno(1.8)
$$
each eigenvalue $E_j$ is negative and simple and the total number $m$
of these eigenvalues is finite
$$
E_1<E_2<\ldots<E_m<0,\ \ \ m<\infty.
\eqno(1.9)
$$
For $v=q_n$ we will write $\Psi^+, f, S, {\it s}_{ij}, E_j, \Psi_j$
as $\Psi_n^+, f_n, S_n, {\it s}_{ij}^{(n)}, E_j^{(n)}, \Psi_j^{(n)}$.
{\bf Second,} we consider the spectral problem $(1.1)$ on the interval
$[0,na]$ with the boundary conditions
$$
\Psi(0)\cos\alpha-\Psi'(0)\sin\alpha=0
$$
$$
\Psi(na)\cos\beta-\Psi'(na)\sin\beta=0, \quad\alpha\in\Bbb R,
\quad\beta\in\Bbb R.
\eqno(1.10)
$$
Without loss of generality we may assume that
$$
0\le\alpha <\pi, \quad 0<\beta\le\pi.
\eqno(1.11)
$$
{\bf Third,} we consider the spectral problem $(1.1)$ on the
interval $[0,na]$ with the boundary conditions
$$
\Psi(0)=\Psi(na), \quad\Psi'(0)=\Psi'(na)
\eqno(1.12a)
$$
or with the boundary conditions
$$
\Psi(0)=-\Psi(na), \quad\Psi'(0)=-\Psi'(na).
\eqno(1.12b)
$$
On the other hand, we consider the one-dimensional Schr\"odinger
equation
$$
-\frac{d^2}{dx^2}\Psi+q(x)\Psi=E\Psi, \quad x\in\Bbb R,
\eqno(1.13)
$$
where $q$ is the following periodic potential
$$
q(x)=\sum_{j=-\infty}^{\infty} q_1(x-ja).
\eqno(1.14)
$$
We recall some definitions and facts of spectral theory for the equation
$(1.13)$ on the whole line (see, for example, \S 17 of \cite{RSS} and
Chapter \Roman{two} of \cite{NMPZ}).
The monodromy operator $M(E)$ is defined as the translation operator by
the period $a$ in the two-dimensional space of solutions of $(1.13)$
at fixed $E$. If a basis in this space is fixed, then one can consider
$M(E)$ as a $2\times 2$ matrix. For all $E$, $\det M(E)=1$.
The eigenvalues of $M(E)$ are of the form
$$
\lambda_1=1/\lambda_2=\left(Tr\;M(E)+\sqrt{\bigl(Tr\;M(E)\bigr)^2-4}
\right)/2=e^{i\varphi (E)},
\eqno(1.15)
$$
where
$$
2\cos{\varphi(E)}=Tr\;M(E).
\eqno(1.16)
$$
The Bloch solutions are defined as eigenvectors of $M(E)$.
The allowed and forbidden Bloch zones are defined by the formulas
$$
\bigcup_{j\in\ J}\Lambda_j^a=\Lambda^a=\left\{ E\in\Bbb R
\bigl|\bigr.\left| Tr\;M(E)\right|\le 2\right\} \qquad\mbox{allowed zones}
\eqno(1.17)
$$
$$
\bigcup_{j\in\ J}\Lambda_j^f=\Lambda^f=\left\{ E\in\Bbb R
\bigl|\bigr.\left| Tr\;M(E)\right| > 2\right\} \qquad\mbox{forbidden zones}
\eqno(1.18)
$$
where either $J=\Bbb N$ or $J=\{ 1,\dots ,m\},\,\,m\in\Bbb N;\;\;\Lambda_j^a,
\Lambda_j^f$ are connected intervals (closed for the case $(1.17)$ and open
for the case $(1.18)$) such that
$$
\sup \limits_{E\in\Lambda_j^a} E < \inf\limits_{E\in\Lambda_{j+1}^a} E,\quad
\sup\limits_{E\in\Lambda_j^f} E < \inf\limits_{E\in\Lambda_{j+1}^f} E,
\quad\mbox{for}\quad j,\, j+1\in J,
\eqno(1.19)
$$
in addition, $\Lambda_1^f=]-\infty,\lambda_0 [$.
The real part of the global quasimomentum $p(E)$ is
defined as a real-valued continuous
nondecreasing function such that: $p(E)$ is constant in each forbidden
zone $\Lambda_j^f, \, p(E)=0$ for $E\in\Lambda_1^f$, the phase $\varphi(E)
=ap(E)$ is a solution of $(1.16)$ for each allowed zone $\Lambda_j^a$.
Note that
$$
\pi^{-1}ap(E)={\it l}_j\in\Bbb N\cup 0\ \hbox{ for } E\in\bar\Lambda_j^f
\eqno(1.20)
$$
(the closure of $\Lambda_j^f$) for each $j\in J,\,{\it l}_1=0,\,
{\it l}_j <{\it l}_{j+1}$ for $j,j+1\in J$.
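These definitions are easy to realize numerically. The sketch below builds $M(E)$ for an illustrative piecewise-constant cell (a square barrier of height $V_0$ on $[0,b]$, zero on $[b,a]$; the parameters are arbitrary, not taken from the text) as a product of constant-potential propagators, then checks $\det M(E)=1$ and the zone condition $|Tr\,M(E)|\le 2$:

```python
import numpy as np

def propagator(E, V, L):
    """Transfer matrix for the Cauchy data (psi, psi') across a constant potential V of width L."""
    k = np.sqrt(complex(E - V))          # imaginary k handles E < V automatically
    if abs(k) < 1e-12:                   # free-particle limit at E = V
        return np.array([[1.0, L], [0.0, 1.0]], dtype=complex)
    return np.array([[np.cos(k * L), np.sin(k * L) / k],
                     [-k * np.sin(k * L), np.cos(k * L)]], dtype=complex)

V0, b, a = 5.0, 0.4, 1.0                 # barrier height, barrier width, period (illustrative)

def monodromy(E):                        # one period: barrier on [0, b], free on [b, a]
    return propagator(E, 0.0, a - b) @ propagator(E, V0, b)

energies = np.linspace(0.1, 30.0, 300)
traces = np.array([np.trace(monodromy(E)).real for E in energies])

for E in energies[::50]:                 # det M(E) = 1 at every energy
    assert np.isclose(np.linalg.det(monodromy(E)), 1.0)
allowed = np.abs(traces) <= 2            # |Tr M(E)| <= 2: allowed zones
assert allowed.any() and (~allowed).any()
```

Accumulating $\arccos(Tr\,M(E)/2)$ branch by branch through the allowed zones, with $p(E)$ held constant on the forbidden intervals, would reproduce the real part of the global quasimomentum defined above.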
\section*{2. The main new results and the preceding results.}
In the present paper we discuss relations between spectral data
for $(1.1)$ for the first, the second or the third case
described above and spectral data for $(1.13)$.
To start with, we discuss some results of \cite{SWM}, \cite{RRT},
\cite{R}, \cite{SV}. In \cite{SWM} the following formulas are given,
in particular:
$$
\frac{R_n}{T_n}=\frac{\displaystyle\sin{n\varphi}}
{\displaystyle\sin{\varphi}}
\,\,\frac{R_1}{T_1}
\eqno(2.1)
$$
$$
\begin{array}{l}
\frac{1}{T_n}=\frac{\displaystyle 1}{\displaystyle\sin{\varphi}}
\left(\frac{1}{T_1}\,\sin{n\varphi}-
\sin{(n-1)\varphi}\right),\\
\mathstrut\\
R_n=R_1\left( 1-\frac{\displaystyle\sin{(n-1)\varphi}}
{\displaystyle\sin{\varphi}}T_1\right)^{-1},
\end{array}
\eqno(2.2)
$$
\medskip
$$
M(E)=\left(\begin{array}{rr}
\frac{\displaystyle 1}{\displaystyle{T_1}}\;&
-\frac{\displaystyle\bar{R_1}}{\displaystyle{T_1}}\\
\mathstrut & \mathstrut \\
-\frac{\displaystyle{R_1}}{\displaystyle{\bar{T_1}}\;}&
\frac{\displaystyle 1}{\displaystyle{\bar{T_1}}}
\end{array}\right)
\eqno(2.3)
$$
\bigskip
\noindent
in the basis of solutions $\psi_{\pm}(x,k)$ such that $\psi_{\pm}(0,k)
=1$, $\psi_{\pm}^{\prime}(0,k)=\pm ik$,
$$
\cos\varphi = Re(1/T_1),
\eqno(2.4)
$$
where $T_n={\it s}_{22}^{(n)}(E) e^{ikna},\;R_n={\it s}_{21}^{(n)}(E),\;
\varphi=\varphi(E)$ is the Bloch phase from $(1.15),\,(1.16),\;
E=k^2,\;k\in\Bbb R_+$.
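The composition laws $(2.1)$, $(2.2)$ are, at bottom, the Cayley--Hamilton identity for a unimodular matrix: $M^n=U_{n-1}(\cos\varphi)M-U_{n-2}(\cos\varphi)I$, where $U_k$ are Chebyshev polynomials of the second kind and $2\cos\varphi=Tr\,M$. A sketch verifying $(2.2)$ on a generic matrix with $\det M=1$ (the entries are arbitrary illustrative numbers), together with the perfect-transmission case $\sin n\varphi=0$, where $M^n=\pm I$:

```python
import numpy as np

# A generic unimodular single-cell matrix; M[0, 0] plays the role of 1/T_1.
A, B, C = 1.2 - 0.4j, 0.3 + 0.2j, -0.5 + 0.1j   # arbitrary illustrative entries
M = np.array([[A, B], [C, (1 + B * C) / A]])     # last entry forces det M = 1
phi = np.arccos(np.trace(M) / 2)                 # Bloch phase: 2 cos(phi) = Tr M

for n in range(1, 8):                            # formula (2.2) for 1/T_n
    lhs = np.linalg.matrix_power(M, n)[0, 0]
    rhs = (np.sin(n * phi) * M[0, 0] - np.sin((n - 1) * phi)) / np.sin(phi)
    assert np.isclose(lhs, rhs)

# Perfect transmission when sin(n phi) = 0, sin(phi) != 0: with n = 4 and
# Tr M = 2 cos(pi/4), any unimodular M satisfies M^4 = -I, so R_4 = 0.
r2 = np.sqrt(2)
M2 = np.array([[r2 / 2 + 0.7j, 0.1], [-0.1, r2 / 2 - 0.7j]])  # det = 1, trace = sqrt(2)
assert np.allclose(np.linalg.matrix_power(M2, 4), -np.eye(2))
```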
The formulas $(2.1)$--$(2.4)$ (taking into account $(1.8)$) describe
relations between the scattering matrix ${\it s}_{ij}^{(n)}(k),\;
k\in\Bbb R_+$, for $(1.1)$ and spectral data for $(1.13)$ in a very
complete way. A proper discussion is given in \cite{SWM}. Some similar
results are given also in \cite{RRT}. Concerning older results
in this direction see \cite{R} and references given in \cite{SWM},
\cite{RRT}. The discrete spectrum for the scattering problem for $(1.1)$
on the whole line was discussed in \cite{R}, \cite{SV}. The paper
\cite{R} deals with the particular case when
$q_1(\frac{a}{2}+x)=q_1(\frac{a}{2}-x)$ and results of \cite{R} concerning
the discrete spectrum imply a lower bound for the distribution function
of discrete spectrum $F_{sc}^{(n)}(\Sigma)$ for this case. In \cite{SV}
the total number of bound
states for an $n$-cell scatterer $q_n$ is given in terms of certain
quantities characterizing the single scatterer $q_1$. However,
in \cite{SV} the distribution function
$$
F_{sc}^{(n)}(\Sigma)=\;\#\{E_{j}^{(n)}\in\Sigma\}
\eqno(2.5)
$$
(the number of bound states with energies in an interval $\Sigma\subset]
-\infty,\,0[$) is not considered for $\Sigma\ne]-\infty,\,0[$ and
manifestations of the Bloch zone structure for $E_{j}^{(n)}$ are
not discussed.
In the present paper we obtain, in particular, the following result.
{\bf Theorem 1.} {\it Under assumptions $(1.2),\;(1.3),\;(1.14)$, the
following formulas hold:
$$
\left| F_{sc}^{(n)}(]-\infty,\,E[)-\Bigl[\frac{nap(E)}{\pi}\Bigr]\right|\le 1
\quad\mbox{for }E\in ]-\infty,\,0]\;,
\eqno(2.6)
$$
$$
\left[\frac{nap(E)}{\pi}\right]\le F_{sc}^{(n)}(]-\infty,\,E[)
\le\left[\frac{nap(E)}
{\pi}\right]+1\quad\mbox{for }E\in]-\infty,\;0]\setminus\bar\Lambda^f,
\eqno(2.7)
$$
where $F_{sc}^{(n)}(\Sigma)$ is the distribution function of discrete
spectrum for the scattering problem $(1.1),\;\; p(E)$ is the real part
of the global
quasimomentum and $\bar\Lambda^f$ is the closure of the forbidden
energy set for the spectral problem $(1.13),\;[r]$ is the integer part
of $r\ge 0$.}
The proof of {\bf Theorem 1} is given in Section 4.
Using $(2.6)$ we obtain the following corollary.
{\bf Corollary 1.} {\it Under assumptions $(1.2),\,(1.3),\,(1.14)$, the
following formulas hold:
$$
\begin{array}{l}
F_{sc}^{(n)}(\bar\Lambda^f_{j}\bigcap ]-\infty,\,0[) \le 2
\quad\mbox{for }j\in J,\\
F_{sc}^{(n)}(\bar\Lambda^f_{1}\bigcap ]-\infty,\,0[) \le 1 ,
\end{array}
\eqno(2.8)
$$
where $\bar\Lambda_j^f$ is the closure of the forbidden zone $\Lambda_j^f$
for $(1.13)$.}
The proof of {\bf Corollary 1} is given in Section 4.
Consider now the eigenvalues $E_j^{(n)}$ and the distribution function
$$
F^{(n)}(\Sigma)=\;\# \{E_{j}^{(n)}\in\Sigma\}
\eqno(2.9)
$$
(the number of eigenvalues in an interval $\Sigma\subset\Bbb R$) for the
spectral problem $(1.1),\;(1.10)$.
{\bf Theorem 2.} {\it Under assumptions $(1.2),\;(1.3),\;(1.11),\;(1.14)$,
the following formulas hold:
$$
\left[\frac{nap(E)}{\pi}\right]-1\le F^{(n)}(]-\infty,\,E[) \le\left[
\frac{nap(E)}{\pi}\right]\quad\mbox{for }E\in\Bbb R ,\;\alpha =0,\;\;
\beta =\pi,
\eqno(2.10a)
$$
$$
F^{(n)}(]-\infty,\,E[)=\Bigl[\frac{nap(E)}{\pi}\Bigr]\quad\mbox{for }
E\in\Bbb R \setminus\bar\Lambda^f,\;\;\alpha =0,\;\;\beta =\pi,
\eqno(2.10b)
$$
$$
\left| F^{(n)}(]-\infty,\,E[)-\Bigl[\frac{nap(E)}{\pi}\Bigr]\right|\le 1
\quad\mbox{for }E\in\Bbb R,\;\;\alpha <\beta,
\eqno(2.11a)
$$
$$
\left[\frac{nap(E)}{\pi}\right]\le F^{(n)}(]-\infty,\,E[) \le\left[
\frac{nap(E)}{\pi}\right]+1\quad\mbox{for }E\in\Bbb R \setminus\bar\Lambda^f,
\;\alpha <\beta,
\eqno(2.11b)
$$
$$
\left| F^{(n)}(]-\infty,\,E[)-\Bigl[\frac{nap(E)}{\pi}\Bigr] -1\right|\le 1
\quad\mbox{for }E\in\Bbb R,\;\;\beta <\alpha ,
\eqno(2.12a)
$$
$$
\left[\frac{nap(E)}{\pi}\right] +1\le F^{(n)}(]-\infty,\,E[) \le\left[
\frac{nap(E)}{\pi}\right]+2\quad\mbox{for }E\in\Bbb R \setminus\bar\Lambda^f,
\;\beta <\alpha,
\eqno(2.12b)
$$
$$
\left[\frac{nap(E)}{\pi}\right]\le F^{(n)}(]-\infty,\,E[) \le\left[
\frac{nap(E)}{\pi}\right]+1\quad\mbox{for }E\in\Bbb R ,\;\alpha =\beta,
\eqno(2.13a)
$$
$$
F^{(n)}(]-\infty,\,E[)=\Bigl[\frac{nap(E)}{\pi}\Bigr] +1\quad\mbox{for }
E\in\Bbb R \setminus\bar\Lambda^f,\;\;\alpha =\beta ,
\eqno(2.13b)
$$
where $F^{(n)}(\Sigma)$ is the distribution function for the spectral
problem $(1.1),\;(1.10),\;p(E)$ is the real part of the
global quasimomentum and
$\bar\Lambda^f$ is the closure of the forbidden
energy set for the spectral problem $(1.13),\;[r]$ is the integer part
of $r\ge 0$.}
The proof of {\bf Theorem 2} is given in Section 4.
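A degenerate but instructive check of $(2.10b)$ is the free case $q_1\equiv 0$, where $p(E)=\sqrt E$ for $E\ge 0$ and the forbidden set reduces to $]-\infty,0[$: for $\alpha=0$, $\beta=\pi$ (Dirichlet conditions at both endpoints) the eigenvalues on $[0,na]$ are $(k\pi/(na))^2$, and for generic $E>0$ the count below $E$ equals $[na\sqrt E/\pi]$. A minimal script (assuming NumPy):

```python
import numpy as np

n, a = 7, 1.0                                   # seven identical free cells (q_1 = 0)
na = n * a
p = lambda E: np.sqrt(E)                        # quasimomentum of the free problem, E >= 0
eigs = (np.arange(1, 500) * np.pi / na) ** 2    # Dirichlet spectrum on [0, na]

for E in (0.37, 1.9, 10.0, 55.3, 120.0):        # generic E > 0, none an eigenvalue
    F = int(np.sum(eigs < E))                   # F^(n)(]-infty, E[)
    assert F == int(na * p(E) / np.pi)          # matches [n a p(E) / pi], formula (2.10b)
```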
Using $(2.10)$--$(2.13)$ we obtain the following corollary.
{\bf Corollary 2.} {\it Under assumptions $(1.2),\,(1.3),\,(1.11),\,
(1.14)$, the following formulas hold:
$$
F^{(n)}(\bar\Lambda_1^f)=0\quad\mbox{for }\alpha =0,\;\;\beta =\pi,
\eqno(2.14a)
$$
$$
F^{(n)}(\bar\Lambda_j^f)=1\quad\mbox{for }j\in J\setminus 1,\;
\alpha =0,\;\;\beta =\pi,
\eqno(2.14b)
$$
$$
F^{(n)}(\bar\Lambda_j^f)\le 2\quad\mbox{for }j\in J,\;\alpha <\beta ,
\eqno(2.15a)
$$
$$
F^{(n)}(\bar\Lambda_1^f)\le 1\quad\mbox{for }\alpha <\beta ,
\eqno(2.15b)
$$
$$
F^{(n)}(\bar\Lambda_j^f)\le 2\quad\mbox{for }j\in J,\;\beta <\alpha,
\eqno(2.16a)
$$
$$
F^{(n)}(\bar\Lambda_j^f)=1\quad\mbox{for }j\in J,\;\alpha =\beta ,
\eqno(2.16b)
$$
where $\bar\Lambda_j^f$ is the closure of the forbidden zone $\Lambda_j^f$
for $(1.13)$.}
The proof of the {\bf Corollary 2} is given in Section 4.
Consider now the eigenvalues $E_j^{(n)}$ for $(1.1)$ with $(1.12a)$,
the eigenvalues $\tilde E_j^{(n)}$ for $(1.1)$ with $(1.12b)$, and
the related distribution functions
$$
F^{(n)}(\Omega)=\sum_{E_j^{(n)}\in\,\Omega} m(E_j^{(n)}),\qquad
\tilde F^{(n)}(\Omega)=\sum_{\tilde E_j^{(n)}\in\,\Omega}
m(\tilde E_j^{(n)}),
\eqno(2.17)
$$
where $\Omega$ is a subset of $\Bbb R$, $\;m(E_j^{(n)}),\;
m(\tilde E_j^{(n)})\in\,\{1,2\}$ are the multiplicities of $E_j^{(n)},\;
\tilde E_j^{(n)}$.
Under assumptions $(1.2),\,(1.3),\,(1.14)$, the following statements
are valid:
$$
\begin{array}{l}
\hbox{\it a number E is a simple eigenvalue for (1.1) with (1.12a) iff}\\
\mathstrut\\
(2\pi )^{-1}nap(E)\in{\Bbb N}\cup 0,\quad E\in\bar\Lambda^f
\setminus\Lambda^f,
\end{array}
\eqno(2.18)
$$
\\
$$
\begin{array}{l}
\hbox{\it a number E is a double eigenvalue for (1.1) with (1.12a) iff}\\
\mathstrut\\
(2\pi )^{-1}nap(E)\in{\Bbb N}\cup 0,\quad E\in{\Bbb R}\setminus\bar
\Lambda^f,
\end{array}
\eqno(2.19)
$$
\\
$$
\begin{array}{l}
\hbox{\it a number E is a simple eigenvalue for (1.1) with (1.12b) iff}\\
\mathstrut\\
(2\pi )^{-1}(nap(E)-\pi )\in{\Bbb N}\cup 0,\quad E\in\bar\Lambda^f
\setminus\Lambda^f,
\end{array}
\eqno(2.20)
$$
\\
$$
\begin{array}{l}
\hbox{\it a number E is a double eigenvalue for (1.1) with (1.12b) iff}\\
\mathstrut\\
(2\pi )^{-1}(nap(E)-\pi )\in{\Bbb N}\cup 0,\quad E\in{\Bbb R}
\setminus\bar\Lambda^f,
\end{array}
\eqno(2.21)
$$
if $F^{(n)}(\Omega)$ is the distribution function for $(1.1)$ with
$(1.12a)$, then
$$
\begin{array}{l}
[(2\pi )^{-1}nap(E)]\le F^{(n)}(]-\infty ,E])\le [(2\pi )^{-1}nap(E)]+1,\\
\mathstrut\\
F^{(n)}(\Lambda^f)=0,
\end{array}
\eqno(2.22)
$$
if $\tilde F^{(n)}(\Omega)$ is the distribution function for $(1.1)$ with
$(1.12b)$, then
$$
\begin{array}{l}
[(2\pi )^{-1}nap(E)]\le \tilde F^{(n)}(]-\infty ,E])\le [(2\pi )^{-1}
nap(E)]+1,\\
\mathstrut\\
\tilde F^{(n)}(\Lambda^f)=0,
\end{array}
\eqno(2.23)
$$
where $p(E)$ is the real part of the global quasimomentum and
$\bar\Lambda^f$ is the
closure of the forbidden energy set $\Lambda^f$ for $(1.13),\;[r]$
is the integer part of $r\ge 0$.
Using known properties of $p(E)$ and $E_j=E_j^{(1)},\;\tilde E_j=
\tilde E_j^{(1)}$ (see, for example, \cite{RSS}, \S 17 and \cite{CL},
Chapter 8), we obtain these statements, first, for $n=1$.
Then we reduce the general case to the case $n=1$ considering
$q$ from $(1.14)$ as a potential with period $na$. One can obtain
also these statements using well-known results presented in
Chapter 21 of \cite{T} and the
definitions of $p(E)$ and $F^{(n)}(\Omega),\;\tilde F^{(n)}(\Omega) $.
Consider now the points of perfect transmission (transmission resonances)
for the scattering problem $(1.1)$, i.e. the points $\lambda_j^{(n)}
\in\Bbb R_+$ such that $|{\it s}_{ii}^{(n)}(\lambda_j^{(n)})|=1$.
In \cite{SWM} it is shown (using (2.1)) that, for $E\in{\Bbb R}_+$,
$$
E \ \ \ \hbox{is a point of perfect transmission, i.e.}
\ \ \ |{\it s}_{ii}^{(n)}(E)|=1,
$$
$$
\hbox{\bf{either} \ \ if \ }|{\it s}_{ii}^{(1)}(E)|=1
\hbox{\ \ \ \ {\bf or \ \ \ \ if} \ }
\sin{n\varphi (E)}=0,\;\sin\varphi (E)\ne 0,
\eqno(2.24)
$$
where $\varphi (E)$ is defined by $(1.16)$. The same result is given also
in \cite{RRT}.
In the present paper in connection with transmission resonances we
obtain the following result.
{\bf Proposition 1.} {\it Under assumptions $(1.2),\,(1.3),\,(1.14)$,
the following statements are valid:
$$
\begin{array}{l}
\hbox{\it if}\ E\in\Bbb R_+\ \hbox{\it is a double eigenvalue for}\
(1.1)\ \hbox{\it with}\ (1.12a)\ \hbox{\it or with}\ (1.12b),\\
\hbox{\it then}\ |{\it s}_{ii}^{(n)}(E)|=1,
\end{array}
\eqno(2.25)
$$
$$
\begin{array}{l}
\hbox{\it if}\ \sin{n\varphi (E)}=0,\;\sin\varphi (E)\ne 0,\\
\hbox{\it then}\ E\ \hbox{\it is a double eigenvalue for}\
(1.1)\ \hbox{\it with}\ (1.12a)\ \hbox{\it or with}\ (1.12b),
\hphantom{a}
\end{array}
\eqno(2.26)
$$
$$
\begin{array}{l}
\hbox{\it if}\ |{\it s}_{ii}^{(n)}(E)|=1,\;\,E\in\Bbb R_+,\\
\hbox{\it then}\ E\in\Lambda^a,
\hphantom{aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa}
\end{array}
\eqno(2.27)
$$
\\
where $s_{ii}^{(n)}(E)$ is the transmission coefficient for $(1.1)$,
$\varphi (E)$ is the Bloch phase and $\Lambda^a$ is the allowed energy
set for $(1.13)$.}
To prove the statement $(2.25)$ we calculate ${\it s}_{ii}^{(n)}(E)$ using
that in this case each solution of $(1.1)$ on $[0,\, na]$ satisfies
$(1.12a)$ or $(1.12b)$. The statement $(2.26)$ follows from $(2.19)$,
$(2.21)$ and properties of $\varphi (E)$. The statement $(2.27)$ follows,
for example, from $(2.24)$, $(2.4)$ and properties of $\varphi (E)$.
For $q_1(x)\not\equiv 0$ we consider also the distribution function
of transmission resonances
$$
\Phi_{sc}^{(n)}(\Sigma )=\;\#\{\lambda_j^{(n)}\in\Sigma\}\;\;
(\mbox{the number of transmission resonances in }\Sigma\subset\Bbb R_+).
\eqno(2.28)
$$
Under assumptions $(1.2),\,(1.3),\,(1.14)$, as a corollary of
{\bf Theorems 1, 2}, the statements $(2.18)$--$(2.24)$ and
{\bf Proposition 1}, we obtain the following statements:
{\it if $F_{sc}^{(n)}(\Sigma)$ is the distribution function of discrete
spectrum for the scattering problem $(1.1)$, then, for $E\ge 0$,
$$
\begin{array}{l}
\lim\limits_{n\to\infty}(na)^{-1}F_{sc}^{(n)}(]-\infty,E[)=\pi^{-1}p(E),\\
\mathstrut\\
(na)^{-1}F_{sc}^{(n)}(]-\infty,E[)-(na)^{-1}\le \pi^{-1}p(E)\le
(na)^{-1}F_{sc}^{(n)}(]-\infty,E[)+2(na)^{-1},
\end{array}
\eqno(2.29)
$$
if $\Phi_{sc}^{(n)}(\Sigma)$ is the distribution function of transmission
resonances for the scattering problem $(1.1)$, $q_1(x)\not\equiv 0$,
then, for $E\ge 0$,
$$
\begin{array}{l}
\lim\limits_{n\to\infty}(na)^{-1}\Phi_{sc}^{(n)}(]0,E])=
\pi^{-1}(p(E)-p(0)),\\
\mathstrut\\
(na)^{-1}\Phi_{sc}^{(n)}(]0,E])-\pi^{-1}(p(E)-p(0))={\it O}(n^{-1}),
\;\mbox{as }n\to\infty,
\end{array}
\eqno(2.30)
$$
if $F^{(n)}(\Sigma)$ is the distribution function for $(1.1)$, with
$(1.10)$, then, for $E\in\Bbb R$,
$$
\lim_{n\to\infty}(na)^{-1}F^{(n)}(]-\infty,E])=\pi^{-1}p(E),
\eqno(2.31a)
$$
$$
(na)^{-1}F^{(n)}(]-\infty,E])-(na)^{-1}\le \pi^{-1}p(E)\le
$$
$$
\le (na)^{-1}F^{(n)}(]-\infty,E])+2(na)^{-1},
\eqno(2.31b)
$$
$$
\quad\mbox{for }0\le\alpha\le\beta\le\pi ,\;0<\beta ,\;\alpha <\pi ,
$$
\smallskip
$$
(na)^{-1}F^{(n)}(]-\infty,E]) \le \pi^{-1}p(E)\le
$$
$$
\le (na)^{-1}F^{(n)}(]-\infty,E])+3(na)^{-1},
\eqno(2.31c)
$$
$$
\quad\mbox{for }0<\beta <\alpha <\pi ,
$$
if $F^{(n)}(\Sigma)$ is the distribution function for $(1.1)$, with
$(1.12a)$ or $(1.12b)$ then, for $E\in\Bbb R$,
$$
\lim_{n\to\infty}(na)^{-1}F^{(n)}(]-\infty,E])=\pi^{-1}p(E),
\eqno(2.32a)
$$
$$
(na)^{-1}F^{(n)}(]-\infty,E])-(na)^{-1}\le \pi^{-1}p(E)\le
$$
$$
\le (na)^{-1}F^{(n)}(]-\infty,E])+(na)^{-1},
\eqno(2.32b)
$$
where $p(E)$ is the real part of the global quasimomentum
for $(1.13)$.}
The formulas
$(2.31a),\; (2.32a)$ for the case of smooth potential are a particular case
of results of \cite{Sh} about density of states of multidimensional
selfadjoint elliptic operators with almost periodic coefficients.
Consider now the one-dimensional Schr\"odinger equation
$$
-\psi^{\prime\prime} + q(x)\psi = E\psi,\ \ x\in{\Bbb R},
\eqno(2.33)
$$
with a potential consisting of $n$ not necessarily identical cells.
More precisely, we suppose that
$$
{\Bbb R}=\bigcup\limits_{j=1}^nI_j,\ \ n\in{\Bbb N},
\eqno(2.34a)
$$
$$
\begin{array}{l}
I_1=] -\infty, x_1],\ I_j=[x_{j-1}, x_j]\ \ {\rm for}\ \ 1<j<n,\
I_n=[x_{n-1}, +\infty [,\\
\mathstrut\\
-\infty< x_{j-1}< x_j < +\infty\ \ {\rm for}\ \ 1< j< n,
\end{array}
\eqno(2.34b)
$$
$$
q(x)=\sum_{j=1}^nq_j(x),
\eqno(2.34c)
$$
$$
\begin{array}{l}
q_j\in L^1({\Bbb R}),\ \ q_j={\bar q}_j,\ \ supp\,q_j\subseteq I_j\ \
{\rm for}\ \ 1\le j\le n,\\
\mathstrut\\
\int_{\Bbb R}(1+|x|)|q_1(x)|dx < \infty,\ \
\int_{\Bbb R}(1+|x|)|q_n(x)|dx < \infty.
\end{array}
\eqno(2.34d)
$$
In the present paper we obtain, in particular, the following result.
{\bf Theorem 3.}
{\it Under assumptions (2.34), the following estimate holds:
$$
|F(] -\infty, E[) - \sum_{j=1}^nF_j(] -\infty, E[)|\le n-1\ \
{\rm for}\ \ E\le 0,
\eqno(2.35)
$$
where $F(] -\infty, E[)\ \ (F_j(] -\infty, E[)$, respectively) denotes
the distribution function of discrete spectrum for the scattering problem
for (2.33) (for the one-dimensional Schr\"odinger equation with the
potential $q_j$, respectively) on the whole line.}
In addition, for $E=0$ there is the following estimate
$$1-n+\sum_{j=1}^nF_j(] -\infty, 0[)\le F(] -\infty, 0[)\le
\sum_{j=1}^nF_j(] -\infty, 0[) \eqno(2.36)$$
given earlier in \cite{AKM2} as a development of results of \cite{K}
and \cite{SV}.
The estimate (2.36) is more precise than (2.35) for $E=0$. However,
for fixed $E<0$ the estimate (2.35) is, in general, the best possible,
at least for $n=2$.
The proof of Theorem 3 is given in Section 4.
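A small numerical illustration of $(2.35)$ for $n=2$ (illustrative parameters; grid diagonalization in a large Dirichlet box, assuming NumPy and SciPy): for two well-separated square wells, each carrying two bound states, the counts simply add up, consistent with $|F-(F_1+F_2)|\le n-1$:

```python
import numpy as np
from scipy.linalg import eigvalsh_tridiagonal

def count_bound_states(wells, xmax=15.0, npts=3000):
    """Count negative eigenvalues of -psi'' + q psi on a uniform grid (Dirichlet box)."""
    x, h = np.linspace(-xmax, xmax, npts, retstep=True)
    q = np.zeros_like(x)
    for depth, lo, hi in wells:                 # each well: -depth on [lo, hi]
        q[(x >= lo) & (x <= hi)] -= depth
    ev = eigvalsh_tridiagonal(2.0 / h**2 + q, -np.ones(npts - 1) / h**2)
    return int(np.sum(ev < 0))

w1, w2 = (5.0, -4.0, -2.0), (5.0, 2.0, 4.0)     # two separated square wells (n = 2)
F1, F2 = count_bound_states([w1]), count_bound_states([w2])
F = count_bound_states([w1, w2])
assert F1 == F2 == 2
assert abs(F - (F1 + F2)) <= 2 - 1              # estimate (2.35) with n = 2
```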
\section*{3. Auxiliary results.}
To prove {\bf Theorems 1, 2, 3} we use auxiliary results given below
separated into five parts.
\medskip
{\large\Roman{one}.} We consider the Schr\"odinger equation
$$
-\Psi''+v(x)\Psi=E\Psi,\;\;x\in\Bbb R,
\eqno(3.1)
$$
where
$$
v=\bar v,\quad v\in L_{{\it loc}}^1({\Bbb R}),\quad E\in\Bbb R.
\eqno(3.2)
$$
Under assumptions $(3.2)$, the following formulas hold:
$$
\Bigl|\arg\bigl(\Psi (x)+\bigg.i\Psi'(x)\bigr)\biggr|_0^y-
\arg\bigl(\varphi (x)+\biggl.i\varphi'(x)\bigr)\biggr|_0^y\:\Bigr|<\pi ,
\eqno(3.3)
$$
$$
0<\frac{\pi}{2}-\arctan\,\frac{\varphi'(0)}{\varphi(0)}-
\arg\bigl(\varphi (x)+\biggl.i\varphi'(x)\bigr)\biggr|_0^y\: -
\pi N(\varphi,]0,y[)\le\pi\quad\mbox{if }\varphi (0)\ne 0,
$$
$$
0<-\arg\bigl(\varphi (x)+\biggl.i\varphi'(x)\bigr)\biggr|_0^y\: -
\pi N(\varphi,]0,y[)\le\pi\quad\mbox{if }\varphi (0)=0,
\eqno(3.4)
$$
for any non-zero real-valued solutions $\Psi $ and $\varphi$ of
$(3.1)$, where $\arg f(x)$ denotes an arbitrary continuously dependent
on $x$ branch of the argument of $f(x),\;\arctan r\in ]-\frac{\pi }{2},\;
\frac{\pi }{2}[,$ for any $r\in{\Bbb R}$, $N(\varphi,]0,y[)$ denotes
the number of zeroes of $\varphi (x)$ in $]0,y[,\;y>0$.
For the case of bounded potential these results were used, actually,
in Chapter 8 of \cite{CL} and in Section 4 of \cite{JM}.
\medskip
{\large\Roman{two}.} Under assumptions $(1.2),\,(1.3),\,(1.14)$,
the following formulas hold:
$$
\Bigl|\arg\bigl(\varphi (x,E)+\biggl.i\varphi'(x,E)\bigr)\biggr|_0^{na}
\:+nap(E)\Bigr| <\pi ,\quad\mbox{for}\;\; E\in\bar\Lambda^f ,
\eqno(3.5a)
$$
$$
0\le -\pi^{-1}\arg\bigl(\varphi (x,E)+\biggl.i\varphi'(x,E)\bigr)
\biggr|_0^{na}
\:-[\pi^{-1}nap(E)]<1,\quad\mbox{for}\;\; E\in {\Bbb R}\setminus\bar\Lambda^f ,
\eqno(3.5b)
$$
for any non-zero real-valued solutions $\varphi$ of $(1.1)$, where
one takes an arbitrary continuously dependent on $x$ branch of the
argument, $p(E)$ is the real part of the global quasimomentum
and $\bar\Lambda^f$ is the closure of the forbidden energy set
for $(1.13)$, $[r]$ is the integer part of $r\ge 0$.
The estimate $(3.5a)$ follows from $(3.3)$ and the formula
$$
-\arg\,(\psi(x,E)+i\psi^{\prime}(x,E))\big|_0^{na}=nap(E)
$$
for $E\in {\bar\Lambda}^f$ and any non-zero Bloch solution $\psi(x,E)$ of
$(1.13)$. We obtain $(3.5b)$ using
(1) the left-hand side of inequalities $(3.4)$,
(2) Theorem 1.2 of Chapter 8 of \cite{CL},
(3) the representation of the monodromy operator $M(E)$ for $E\in{\Bbb R}
\backslash {\bar\Lambda}^f$ as the matrix of clockwise rotation by the
angle $ap(E)$ in an appropriate basis of the space of solutions
(identified with the space of the Cauchy data at $x=0$) to $(1.13)$ at
fixed $E$,
(4) the fact that the integer part $[-\pi^{-1}\arg\,(\chi_1(\varphi(s),
\varphi^{\prime}(s))+i\chi_2(\varphi(s),\varphi^{\prime}(s)))\big|_0^y]$
(where
$(\chi_1(\varphi(s),\varphi^{\prime}(s)),\chi_2(\varphi(s),
\varphi^{\prime}(s)))$ are the coordinates of the Cauchy data
$(\varphi(s),\varphi^{\prime}(s))$ at $x=s$ of a
non-zero real-valued solution $\varphi$ of $(1.13)$ at fixed
$E$ with respect to a fixed (independent of $s$) basis (in the
space of the Cauchy data at $x=s$) for which the change of variables
$(\varphi,\varphi^{\prime})\to (\chi_1,\chi_2)$
has a positive determinant) is independent of the basis.
\medskip
{\large\Roman{three}.} Under assumptions $(1.5)$, the following formula
holds:
$$
F_{sc}(]-\infty,E[)=N(\varphi_\pm(\bullet ,E),]-\infty ,\infty [),\quad
E\le 0,
\eqno(3.6)
$$
where $F_{sc}(]-\infty,E[)$ is the number of bound states with energies
in $]-\infty,E[$ for the equation $(1.4),\;\;\varphi_\pm (x,E)$ are
solutions of $(1.4)$ such that
$$
\begin{array}{l}
\varphi_+(x,E)=e^{-\kappa x}(1+o(1)),\ \kappa=i\sqrt{E}\ge 0,\ \ {\rm as}\ \
x\to +\infty\cr
\varphi_-(x,E)=e^{\kappa x}(1+o(1)),\ \kappa=i\sqrt{E}\ge 0,\ \ {\rm as}\ \
x\to -\infty,
\end{array}
$$
$N(\varphi(\bullet ,E),]-\infty ,\infty [)$ is the number of zeroes
of $\varphi (x,E)$ in $]-\infty ,\infty [$ (with respect to $x$).
If, in addition to (1.5), $v(x)\equiv 0$ for $x<x_1$, then
$$
0\le N(\psi_-(\bullet,E), ] -\infty, \infty [) - N(\varphi_-(\bullet,E),
] -\infty, \infty [)\le 1,\ \ E\le 0,
\eqno(3.7)
$$
where $\psi_-(x,E)$ is the solution of (1.4) such that
$$
\psi_-(x,E)=e^{-\kappa x},\ \ \kappa=i\sqrt{E}\ge 0, \ \ {\rm for}\ \
x<x_1.
$$
One can obtain $(3.6)$ by generalizing the proof of Theorem 2.1
of Chapter 8 of \cite{CL} and using properties of $\varphi (x,E)$ given
in Lemma 1 of Section 2 of \cite{DT}.
The same arguments that prove (3.3), (3.4) also prove (3.7) (taking into
account that
$$
{\pi\over 2} - \arctan{\psi_-^{\prime}(x_1,E)\over \psi_-(x_1,E)}\ge
{\pi\over 2} - \arctan{\varphi_-^{\prime}(x_1,E)\over \varphi_-(x_1,E)},
$$
where $\arctan\,r\in ] -\pi/2, \ \pi/2 [$ for $r\in{\Bbb R}$).
Remark.
For the case when $E$ is a bound state energy and, as a corollary,
$\varphi_{\pm}(x,E)$ is a bound state, the formula (3.6) was mentioned, for
example, in $\S 1$ of Chapter 1 of \cite{NMPZ}. Completing the present paper
we have found that the statement of the formula (3.6) in the general case
was given in Proposition 10.3 of \cite{AKM1}.
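As a concrete illustration of $(3.6)$ (the square well, its depth and width, is an assumption of this example, not taken from the text): for $v=-V_0$ on $]0,1[$ and $v=0$ elsewhere, at $E=0$ the solution $\varphi_-(x,0)$ equals $1$ for $x\le 0$, is $\cos(\sqrt{V_0}\,x)$ inside the well, and is linear for $x\ge 1$, where rule $(3.11)$ of part IV below decides whether one more zero occurs. The zero count then matches the standard textbook count of bound states of the well:

```python
import math

# Square well: v = -V0 on ]0, a[, v = 0 elsewhere; E = 0 (assumed values)
V0, a = 12.0, 1.0
k = math.sqrt(V0)                 # inside the well phi'' = -V0 * phi

# phi_-(x, 0) = 1 for x <= 0 (kappa = 0), so phi = cos(k x) on [0, a]
phi1, dphi1 = math.cos(k * a), -k * math.sin(k * a)   # Cauchy data at x = a

# zeros in ]0, a[: solutions of k x = pi/2 + m pi
zeros_inside = sum(1 for m in range(100)
                   if 0 < (math.pi/2 + m*math.pi) / k < a)

# for x >= a the solution is linear (v = 0, E = 0): apply rule (3.11)
zeros_tail = 1 if (phi1 >= 0 and dphi1 < 0) or (phi1 <= 0 and dphi1 > 0) else 0

total_zeros = zeros_inside + zeros_tail   # phi = 1 > 0 for x <= 0
# textbook count of bound states of this well: 1 + floor(a sqrt(V0) / pi)
bound_states = 1 + int(a * k / math.pi)
```

With $V_0=12$ one finds one zero inside the well, one more on the linear tail (here $\varphi(1)<0$, $\varphi'(1)>0$), and indeed two bound states, so $F_{sc}(]-\infty,0[)=N(\varphi_-(\bullet,0),]-\infty,\infty[)=2$.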
\medskip
{\large\Roman{four}.} Let
$$
\varphi (x,E)=ae^{\kappa x}+be^{-\kappa x}\quad\mbox{for }x\ge y,
\eqno(3.8)
$$
where $a,b\in {\Bbb R},\;a^2+b^2\ne 0,\;\kappa >0,\;y>0$.
Then
$$
\varphi (x)\ne 0\;\mbox{for }x\ge y\;\mbox{if }\varphi (y)>0,\;\;
\kappa\varphi (y)+\varphi'(y)\ge 0;
\eqno(3.9a)
$$
$$
\varphi (x)\;\mbox{ has a single zero for }x\ge y\;\mbox{if }
\varphi (y)\ge 0,\;\;\kappa\varphi (y)+\varphi'(y)<0;
\eqno(3.9b)
$$
$$
\varphi (x)\ne 0\;\mbox{for }x\ge y\;\mbox{if }\varphi (y)<0,\;\;
\kappa\varphi (y)+\varphi'(y)\le 0;
\eqno(3.9c)
$$
$$
\varphi (x)\;\mbox{ has a single zero for }x\ge y\;\mbox{if }
\varphi (y)\le 0,\;\;\kappa\varphi (y)+\varphi'(y)>0.
\eqno(3.9d)
$$
Let
$$
\varphi (x)=a+bx\;\;\mbox{for }x\ge y,
\eqno(3.10)
$$
where $a,b\in {\Bbb R},\;a^2+b^2\ne 0,\;y>0$.
Then
$$
\varphi (x)\ne 0\;\mbox{for }x\ge y\;\mbox{if }\varphi (y)>0,\;\;
\varphi'(y)\ge 0;
\eqno(3.11a)
$$
$$
\varphi (x)\;\mbox{ has a single zero for }x\ge y\;\mbox{if }
\varphi (y)\ge 0,\;\;\varphi'(y)<0;
\eqno(3.11b)
$$
$$
\varphi (x)\ne 0\;\mbox{for }x\ge y\;\mbox{if }\varphi (y)<0,\;\;
\varphi'(y)\le 0;
\eqno(3.11c)
$$
$$
\varphi (x)\;\mbox{ has a single zero for }x\ge y\;\mbox{if }
\varphi (y)\le 0,\;\;\varphi'(y)>0.
\eqno(3.11d)
$$
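The case analysis $(3.9)$ is mechanical and can be cross-checked against the explicit zero $e^{2\kappa x}=-b/a$ of $\varphi(x)=a e^{\kappa x}+b e^{-\kappa x}$. The function names in the following sketch are ours, for illustration only:

```python
import math

def zeros_on_ray(a, b, kappa, y):
    """Zero count of phi(x) = a e^{kappa x} + b e^{-kappa x} on [y, oo[
    by the sign rules (3.9a)-(3.9d)."""
    phi  = a*math.exp(kappa*y) + b*math.exp(-kappa*y)
    dphi = kappa*(a*math.exp(kappa*y) - b*math.exp(-kappa*y))
    s = kappa*phi + dphi                 # = 2 kappa a e^{kappa y}
    if phi > 0 and s >= 0:
        return 0                         # (3.9a)
    if phi >= 0 and s < 0:
        return 1                         # (3.9b)
    if phi < 0 and s <= 0:
        return 0                         # (3.9c)
    return 1                             # (3.9d)

def zeros_direct(a, b, kappa, y):
    """Direct count: phi vanishes iff e^{2 kappa x} = -b/a, at most once."""
    if a == 0:
        return 0                         # b e^{-kappa x} never vanishes
    r = -b / a
    if r <= 0:
        return 0
    return 1 if math.log(r) / (2*kappa) >= y else 0
```

Looping `zeros_on_ray` against `zeros_direct` over a grid of $(a,b)$ with $a^2+b^2\ne 0$ confirms the four cases; the check for the linear case $(3.10)$--$(3.11)$ is identical in spirit.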
\medskip
{\large\Roman{five}.} We consider the Schr\"odinger equation
$$
-\Psi''+v(x)\Psi=E\Psi,\;\;x\in [0,y],
\eqno(3.12)
$$
where
$$
v\in L^1([0,y]),\quad v=\bar v,\;\;y>0,
\eqno(3.13)
$$
with boundary conditions
$$
\begin{array}{l}
\Psi (0)\cos\alpha -\Psi'(0)\sin\alpha =0,\\
\\
\Psi (y)\cos\beta -\Psi'(y)\sin\beta =0,\;\;\alpha\in\Bbb R,\;\;
\beta\in\Bbb R.
\end{array}
\eqno(3.14)
$$
Without loss of generality we may assume
$$
0\le\alpha <\pi,\quad 0<\beta\le\pi.
\eqno(3.15)
$$
Consider the eigenvalues $E_j$ and the distribution function
$$
F(\Sigma)=\,\#\{E_j\in\Sigma\}\quad\mbox{(the number of eigenvalues
in an interval }\Sigma\subset\Bbb R)
\eqno(3.16)
$$
for the spectral problem $(3.12),\;(3.14)$.
Consider the solution $\varphi (x,E)$ of $(3.12)$ such that
$$
\varphi (0,E)=\sin\alpha,\quad\varphi'(0,E)=\cos\alpha .
\eqno(3.17)
$$
Under assumptions $(3.13),\,(3.15)$, the following formulas hold:
$$
F(]-\infty ,E])=\bigl[-\pi^{-1} \biggl. \arg(\varphi +i\varphi')
\biggr|_0^{y}\bigr]\quad\mbox{for }\alpha =0,\;\beta =\pi ,
\eqno(3.18)
$$
$$
\bigl[-\pi^{-1} \biggl. \arg(\varphi +i\varphi')\biggr|_0^{y}\bigr]\le
F(]-\infty ,E])\le \bigl[-\pi^{-1} \biggl. \arg(\varphi +i\varphi')
\biggr|_0^{y}\bigr]+1\quad\mbox{for }\alpha <\beta ,
\eqno(3.19)
$$
$$
\bigl[-\pi^{-1} \biggl. \arg(\varphi +i\varphi')\biggr|_0^{y}\bigr]+1\le
F(]-\infty ,E])\le \bigl[-\pi^{-1} \biggl. \arg(\varphi +i\varphi')
\biggr|_0^{y}\bigr]+2\quad\mbox{for }\alpha >\beta ,
\eqno(3.20)
$$
$$
F(]-\infty ,E])=\bigl[-\pi^{-1} \biggl. \arg(\varphi +i\varphi')
\biggr|_0^{y}\bigr]+1\quad\mbox{for }\alpha =\beta ,
\eqno(3.21)
$$
where $[r]$ is defined by $(4.6)$.
For the case of bounded potential one can obtain these results using
the proof of Theorem 2.1 of Chapter 8 of \cite{CL}.
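Formula $(3.18)$ can also be verified numerically. Assuming, for this illustration only, $v\equiv 0$ on $[0,1]$ with $\alpha=0$, $\beta=\pi$ (Dirichlet at both ends), the eigenvalues are $E_j=(j\pi)^2$ and $(3.17)$ gives $\varphi(x,E)=\sin(\sqrt{E}\,x)/\sqrt{E}$; the bracket of the argument increment, with $[r]$ as in $(4.6)$ below, reproduces the direct eigenvalue count:

```python
import math

def bracket(r):
    """[r] as in (4.6): the integer part for r >= 0, and -1 for -1 < r < 0."""
    return math.floor(r) if r >= 0 else -1

E, y = 50.0, 1.0
k = math.sqrt(E)
# alpha = 0 in (3.17): phi(0,E) = 0, phi'(0,E) = 1, so phi = sin(k x)/k
n = 20000
prev, total = math.atan2(1.0, 0.0), 0.0
for i in range(1, n + 1):
    x = i * y / n
    cur = math.atan2(math.cos(k*x), math.sin(k*x)/k)
    d = cur - prev                      # unwrap the continuous branch
    if d > math.pi:
        d -= 2*math.pi
    elif d <= -math.pi:
        d += 2*math.pi
    total += d
    prev = cur

F_formula = bracket(-total / math.pi)   # right-hand side of (3.18)
F_direct  = sum(1 for j in range(1, 1000) if (j*math.pi/y)**2 <= E)
```

For $E=50$ both counts give $2$: the eigenvalues $\pi^2$ and $4\pi^2$ lie below $E$, while $9\pi^2$ does not.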
\section*{4. Proofs of Theorems 1, 2, 3 and Corollaries 1, 2.}
{\bf Proof of Theorem 1.} Consider the solution $\varphi (x,E)$
of $(1.1)$ such that
$$
\varphi (x,E)=e^{\kappa x},\quad\kappa =i\sqrt{E}\ge 0,
\quad\mbox{for }x\le 0.
\eqno(4.1)
$$
Note that
$$
\varphi (0,E)=1,\quad\varphi'(0,E)=\kappa .
\eqno(4.2)
$$
Due to $(3.4),\;(4.2)$ the following formulas hold:
$$
\biggl. \arg(\varphi +i\varphi')\biggr|_0^{y}<\frac{\pi }{2}-\arctan\,\kappa ,
\eqno(4.3)
$$
$$
\begin{array}{l}
N(\varphi ,]0,y[)=\bigl[-\pi^{-1} \biggl. \arg(\varphi +i\varphi')
\biggr|_0^{y}\bigr]\\
\\
\mbox{if }\quad -\frac{\pi }{2}\le \arctan\,\frac{\varphi'(y)}{\varphi (y)}
\le \arctan\,\kappa ,
\end{array}
\eqno(4.4)
$$
$$
\begin{array}{l}
N(\varphi ,]0,y[)=\bigl[-\pi^{-1} \biggl. \arg(\varphi +i\varphi')
\biggr|_0^{y}\bigr]+1\\
\\
\mbox{if }\quad \arctan\,\kappa < \arctan\,\frac{\varphi'(y)}{\varphi (y)}
< \frac{\pi }{2} ,
\end{array}
\eqno(4.5)
$$
where $y>0$,
$$
\begin{array}{l}
[r]\ \ \hbox{ is the integer part of }\ r \ \hbox{ for }\ r\ge 0,\\
\mathstrut \\
\lbrack r \rbrack = -1\ \ \ \ \hbox{ for }\ \ \; -1< r < 0,
\end{array}
\eqno(4.6)
$$
$\arctan\,(\varphi'(y)/\varphi (y))=-\pi /2$ means that $\varphi (y)=0$.
Due to $(1.20),\;(3.5)$ the following formulas hold:
$$
[\pi^{-1}nap(E)]-1\le \bigl[-\pi^{-1} \biggl. \arg(\varphi +i\varphi')
\biggr|_0^{na}\bigr]\le [\pi^{-1}nap(E)]\quad\mbox{for }E\in\bar\Lambda^f,
\eqno(4.7)
$$
$$
\bigl[-\pi^{-1} \biggl. \arg(\varphi +i\varphi')\biggr|_0^{na}\bigr]
=[\pi^{-1}nap(E)]\quad\mbox{for }E\in\Bbb R\setminus\bar\Lambda^f,
\eqno(4.8)
$$
where $[r]$ is defined by $(4.6)$ (we recall that $p(E)\ge 0$ for
$E\in\Bbb R$).
Due to $(4.4),\;(4.5),\;(4.7),\;(4.8)$ the following formulas hold:
$$
[\pi^{-1}nap(E)]-1\le N(\varphi ,]0,na[)\le [\pi^{-1}nap(E)]+1
\quad\mbox{for }E\in\bar\Lambda^f,
\eqno(4.9)
$$
$$
[\pi^{-1}nap(E)]\le N(\varphi ,]0,na[)\le [\pi^{-1}nap(E)]+1
\quad\mbox{for }E\in\Bbb R\setminus\bar\Lambda^f,
\eqno(4.10)
$$
and, in addition,
$$
\begin{array}{l}
\mbox{if }N(\varphi ,]0,na[)=[\pi^{-1}nap(E)]+1,\\
\\
\mbox{then }\arctan\,\kappa < \arctan\,\frac{\varphi'(na)}{\varphi (na)}
< \frac{\pi }{2} .
\end{array}
\eqno(4.11)
$$
The function $\varphi (x,E)$ is of the form $(4.1)$ for $x\le 0$, of
the form $(3.8)$ for $x\ge na,\ E<0$, and of the form $(3.10)$ for
$x\ge na,\ E=0$. Thus, the function $\varphi (x,E)$ has no zeroes for
$x\le 0$ and has at most one zero for $x\ge na$. Hence,
$$
N(\varphi ,]-\infty ,na[)=N(\varphi ,]0,na[),
\eqno(4.12a)
$$
$$
0\le N(\varphi ,]-\infty ,\infty [)-N(\varphi ,]0,na[)\le 1.
\eqno(4.12b)
$$
{From} $(4.11),\;(3.9),\;(3.11),\;(4.12)$ it follows that
$$
\begin{array}{l}
\mbox{if }\;\;N(\varphi ,]0,na[)=[\pi^{-1}nap(E)]+1,\\
\\
\mbox{then }\;\;N(\varphi ,]-\infty ,\infty [)=N(\varphi ,]0,na[).
\end{array}
\eqno(4.13)
$$
The formulas $(2.6),\;(2.7)$ follow from $(3.6),\;(4.9),\;(4.10),\;
(4.12b),\;(4.13)$.
{\bf Proof of Corollary 1.} Consider the energies $z_i,\;
i=-1,0,\dots ,2(\# J-1)$, such that
$$
z_{-1}=-\infty ,\quad\Lambda_j^f=]z_{2j-3},z_{2j-2}[,\quad j\in J,
\eqno(4.14)
$$
where $\# J$ is the number of forbidden zones. Due to properties of $p(E)$,
for any $j\in J$ and $n\in\Bbb N$ there is $\delta^{(n)}>0$ ($\delta^{(n)}$
depends also on $p(E)$ and $a$) such that
$$
\left[\frac{nap(E)}{\pi}\right]=n{\it l}_j,\quad {\it l}_j\in\Bbb N\cup\{0\},
\quad\mbox{for }E\in\bar\Lambda_j^f\cup [z_{2j-3},z_{2j-3}+\delta^{(n)}[.
\eqno(4.15)
$$
Due to $(2.6),\;(4.15)$
$$
\begin{array}{l}
F_{sc}^{(n)}(]-\infty ,E[)\in\{n{\it l}_j-1,\,n{\it l}_j,\;n{\it l}_j+1\},
\quad {\it l}_j\ge 1,\\
\\
F_{sc}^{(n)}(]-\infty ,E[)\in\{n{\it l}_j,\;n{\it l}_j+1\},
\quad {\it l}_j=0,\\
\\
\mbox{for }E\in \bigl(\bar\Lambda_j^f\cup [z_{2j-3},z_{2j-3}+
\delta^{(n)}[\,\bigr)\,\cap\, ]-\infty ,0].
\end{array}
\eqno(4.16)
$$
The formula $(2.8)$ follows from $(1.20),\;(4.16)$ and the fact that
$E_j^{(n)}<0$.
{\bf Proof of Theorem 2.} Consider the solution $\varphi (x,E)$ of $(1.1)$
such that
$$
\varphi (0,E)=\sin\alpha,\quad\varphi'(0,E)=\cos\alpha .
\eqno(4.17)
$$
Due to $(1.20),\;(3.5)$ the following formulas hold:
$$
[\pi^{-1}nap(E)]-1\le \bigl[-\pi^{-1} \biggl. \arg(\varphi +i\varphi')
\biggr|_0^{na}\bigr]\le [\pi^{-1}nap(E)]\quad\mbox{for }E\in\bar\Lambda^f,
\eqno(4.18)
$$
$$
\bigl[-\pi^{-1} \biggl. \arg(\varphi +i\varphi')\biggr|_0^{na}\bigr]
=[\pi^{-1}nap(E)]\quad\mbox{for }E\in\Bbb R\setminus\bar\Lambda^f,
\eqno(4.19)
$$
where $[r]$ is defined by $(4.6)$. The formulas $(2.10)$--$(2.13)$
follow from $(3.19)$--$(3.21)$, $(4.18)$, $(4.19)$.
{\bf Proof of Corollary 2.} Due to properties of $p(E)$,
for any $j\in J\setminus 1$ and $n\in\Bbb N$ there is $\varepsilon^{(n)}>0$
($\varepsilon^{(n)}$ depends also on $p(E)$ and $a$) such that
$$
\left[\frac{nap(z_{2j-2})}{\pi}\right]-\left[\frac{nap(z_{2j-3}-
\varepsilon )}{\pi}\right]=1
\eqno(4.20)
$$
for $0<\varepsilon\le\varepsilon^{(n)}$, where $z_i$ are the same as
in the proof of {\bf Corollary 1}.
The formula $(2.14b)$ follows from $(2.10),\;(4.20)$. The formula
$(2.14a)$ follows from $(2.10a)$ and $(1.20)$ with $j=1$.
Due to $(2.11),\;(4.20),\;(1.20)$, for $\alpha <\beta$,
$$
\begin{array}{l}
F^{(n)}(]-\infty ,E[)\in\{n{\it l}_j-1,\,n{\it l}_j,\;n{\it l}_j+1\},
\quad\mbox{for }j\in J\setminus 1,\\
\\
E\in ]z_{2j-3}-\varepsilon^{(n)},z_{2j-3}[\,\cup\,\bar\Lambda_j^f,\\
\\
F^{(n)}(]-\infty ,E[)\in\{n{\it l}_j,\;n{\it l}_j+1\},\quad
\mbox{for }j=1,\;E\in\bar\Lambda_1^f.
\end{array}
\eqno(4.21)
$$
The formula $(2.15)$ follows from $(4.21)$.
The deduction of the other formulas of {\bf Corollary 2} is similar.
{\bf Proof of Theorem 3.} Suppose, first, that $n=2$. Consider the solution
$\varphi_+(x,E)$ of (2.33) such that
$$
\varphi_+(x,E)=e^{-\kappa x}(1+o(1))\ \ {\rm as}\ \ x\to +\infty,
$$
where (here and below in this proof) $\kappa=i\sqrt{E}\ge 0$.
Note that
$$
\varphi_+(x,E)=\varphi_{+,2}(x,E)\ \ {\rm for}\ \ x\ge x_1,
\eqno(4.22)
$$
where (here and below in this proof) $\varphi_{\pm,j}$,\ $j=1,2$, denotes
the solution of (1.4) with $v=q_j$ such that
$$
\begin{array}{l}
\varphi_{+,j}(x,E)=e^{-\kappa x}(1+o(1))\ \ {\rm as}\ \ x\to +\infty,\\
\mathstrut\\
\varphi_{-,j}(x,E)=e^{\kappa x}(1+o(1))\ \ {\rm as}\ \ x\to -\infty.
\end{array}
$$
Using (3.6) for $v=q_j$ and (4.22) we obtain that
$$
N(\varphi_+(\cdot,E), [x_1, +\infty [)\le F_2(] -\infty, E[),
\eqno(4.23)
$$
$$
N(\varphi_{-,1}(\cdot,E), ] -\infty, x_1[)\le F_1(] -\infty, E[),
\eqno(4.24)
$$
where (here and below in this proof) $N(\varphi(\cdot,E),I)$ denotes the
number of zeros of $\varphi(x,E)$ in an interval $I$ (with respect to $x$).
Using the interlacing property of zeros of solutions to (1.4) (see
$\S 1$ of Chapter 8 of \cite{CL}) we obtain that
$$
N(\varphi_+(\cdot,E), ] -\infty, x_1[)\le N(\varphi_{-,1}(\cdot,E), ]
-\infty, x_1[)+1.
\eqno(4.25)
$$
{From} (4.23)-(4.25) it follows that
$$
N(\varphi_+(\cdot,E), ] -\infty, +\infty [)\le F_1(] -\infty, E[) +
F_2(] -\infty, E[) + 1.
\eqno(4.26)
$$
Consider now the solution $\varphi_{x_1}(x,E)$ of (2.33) such that
$$\varphi_{x_1}(x_1,E)=e^{-\kappa x_1},\ \ \varphi_{x_1}^{\prime}(x_1,E)=
-\kappa e^{-\kappa x_1}.$$
Note that
$$
\begin{array}{l}
\varphi_{x_1}(x,E)=\varphi_{+,1}(x,E)\ \ {\rm for}\ \ x\le x_1,\\
\mathstrut\\
\varphi_{x_1}(x,E)=\psi_{-,2}(x,E)\ \ {\rm for}\ \ x\ge x_1,
\end{array}
\eqno(4.27)
$$
where $\psi_{-,2}(x,E)$ is the solution of (1.4) with $v=q_2$ such that
$$
\psi_{-,2}(x,E)=e^{-\kappa x}\ \ {\rm for}\ \ x\le x_1.
$$
Using (3.6) for $v=q_j$ and (3.7) for $v=q_2$ we obtain that
$$
\begin{array}{l}
N(\varphi_{+,1}(\cdot,E), ] -\infty, x_1[)=F_1(] -\infty, E[),\\
\mathstrut\\
N(\psi_{-,2}(\cdot,E), ]x_1, +\infty [)\ge F_2(] -\infty, E[).
\end{array}
\eqno(4.28)
$$
{From} (4.27), (4.28) it follows that
$$N(\varphi_{x_1}(\cdot,E), ] -\infty, +\infty [)\ge F_1(] -\infty, E[) +
F_2(] -\infty, E[).\eqno(4.29)$$
Using the interlacing property of zeros of solutions to (1.4) we
obtain that
$$
N(\varphi_+(\cdot,E), ] -\infty, +\infty [)
\ge N(\varphi_{x_1}(\cdot,E), ] -\infty,
+\infty [) -1.
\eqno(4.30)
$$
{From} (4.29), (4.30) it follows that
$$
F_1(] -\infty, E[) + F_2(] -\infty, E[) -1\le
N(\varphi_+(\cdot,E),] -\infty, +\infty[).
\eqno(4.31)
$$
{From} (3.6), (4.26), (4.31) it follows that
$$
|F(] -\infty, E[) - \sum_{j=1}^2F_j(] -\infty, E[)|\le 1.
$$
Thus, (2.35) is proved for $n=2$.
We obtain (2.35) for the general case by induction.
The proof of Theorem 3 is completed.
Remark.
The main idea of the proof of Theorem 3 is similar to the main idea of the
short proof of (2.36) presented in \cite{AKM2} (with a reference to a referee
of \cite{AKM2}).
\section{Introduction}
Fundamental stellar parameters such as masses and radii of well
detached double-lined spectroscopic eclipsing binaries can be
determined very accurately (Andersen 1991). Therefore, from accurate
(1-2\%) stellar mass and radius determinations of such objects, one
can compute surface gravities to very high and otherwise inaccessible
confidence levels. Indeed, Henry \& McCarthy (1993) have discussed
the data available for visual binaries of solar mass and below. Only
$\alpha$ Cen B (G2V,0.90 M$_{\odot}$) has a mass known with an
accuracy comparable to that for favorable eclipsing binaries. This
point shows the importance of choosing such double-lined eclipsing
binaries in order to obtain surface gravities with the highest
possible accuracy. Moreover, these binaries are of great interest to
perform accurate tests of stellar evolutionary models (see e.g.
Lastennet {\al} 1996, Pols {\al} 1997, Lastennet \& Valls-Gabaud 1998)
used to derive cluster ages. The knowledge of all possible stellar
parameters for such single stars is the basis of the modelling of the
global physical properties and evolution of star clusters or galaxies.
Nevertheless, while masses and radii are accurately known, the
effective temperatures -- and consequently, the luminosities of these
stars -- strongly depend upon the calibration used to relate
photometric indices with {\teff}. As a matter of fact, for such
binaries the temperatures given in the literature come from various
calibration procedures and are indeed highly inhomogeneous.
Furthermore, due to the lack of empirical calibrations at different
metallicities, solar metallicity is often assumed for photometric
estimations of {\teff}. In this regard, synthetic photometry
performed from large grids of stellar atmosphere models, calibrated in
{\teff}, {\lg}, and {\feh}, provides a powerful tool of investigation.
In this paper, we explore simultaneous solutions of {\teff} and
{\feh}, and we address the question of the reliability of
metallicity-independent effective temperature determinations.
We have selected 20 binary systems (40 stars) for which we have $uvby$
Str{\"o}mgren photometry with estimated errors (see Table 1).
For this sample, previous estimates of effective temperature are not
homogeneous, originating from various calibrations established for
different spectral-type domains: Morton \& Adams (1968), Relyea \&
Kurucz (1978), Osmer \& Peterson (1974), Grosb{\o}l (1978), Davis \&
Shobbrook (1977), Popper (1980), Moon \& Dworetsky (1985), Saxner \&
Hammarb\"ack (1985), Jakobsen (1986), Magain (1987), Napiwotzki {\al}
(1993), and Edvardsson {\al} (1993). Moreover, all these studies are
of course historically not fully independent. As an example, the
{\teff} of Moon \& Dworetsky (1985) is estimated using the {\teff},
(B$-$V)$_0$ calibration of Hayes (1978) and the {\teff}, c$_0$
calibration of Davis \& Shobbrook (1977). However, this does not
mean that these calibrations allow one to derive very similar
temperatures. As highlighted by Andersen \& Clausen (1989) concerning
the O-type components of EM Carinae, the temperature calibrations of
Davis \& Shobbrook (1977), Jakobsen (1986), and Popper (1980) do not
agree particularly well. A similar comparison of these three
calibrations made by Clausen \& Gim\'enez (1991) with the massive
B-type components of CW Cephei leads to $\Delta${\teff}$\sim$5500~K!
Thus, a new {\em and} homogeneous determination of effective
temperature is of prime importance for such well-known objects. In
order to re-derive in a homogeneous way the {\teff} of these stars, we
have used the {\em Ba}sel {\em S}tellar {\em L}ibrary (hereafter {\em
``BaSeL''}) which provides empirically calibrated model spectra over a
large range of stellar parameters (Lejeune {\al} 1997, 1998a).
In Section \ref{sect:modellib}, we will describe the models used to
perform our calculation of {\teff} from $uvby$ Str{\"o}mgren photometry.
Sect. \ref{sect:effectemp} will be devoted to the description of the
method and the presentation of the results.
\section{Model colours}
\label{sect:modellib}
The BaSeL models cover a large range of fundamental parameters: 2000 K
$\leq$ {\teff} $\leq$ 50,000 K, $-$1.02 $\leq$ {\lg} $\leq$ 5.5, and
$-$5.0 $\leq$ {\mh} $\leq$ +1.0. This library combines theoretical
stellar energy distributions which are based on several original grids
of blanketed model atmospheres, and which have been corrected in such
a way as to provide synthetic colours consistent with extant empirical
calibrations at all wavelengths from the near-UV through the far-IR
(see Lejeune {\al} 1997, 1998a). For our purpose, we have used the
new version of the BaSeL models for which the correction procedure of
the theoretical spectra has been extended to higher temperatures
({\teff} $\ge$ 12,000 K), using the {\teff}--(B$-$V) calibration of
Flower (1996), and to shorter wavelengths (Lejeune {\al} 1998b).
Because the correction procedure implies modulations of the
(pseudo-)continuum which are smooth between the calibration
wavelengths, the final grid provides colour-calibrated flux
distributions (9.1 $\leq \lambda \leq$ 160,000 nm, with a mean
resolution of 1 $\sim$ 2 nm from the UV to the visible) which are also
suitable for calculating medium-band synthetic photometry, such as
Str\"omgren colours. Thus, synthetic Str{\"o}mgren photometry was
performed using the passband response functions ($u, v, b, y$) given
in Schmidt-Kaler (1982). Theoretical (u$-$b), (\by), {\mun} $=$
(v$-$b)$-$(\by), and {\cun} $=$ (u$-$v)$-$(v$-$b) indices have been
computed, where the zero-points were defined by matching the observed
colours (u$-$b $=$ 1.411, \by $=$ 0.004, {\mun} $=$ 0.157,
{\cun} $=$ 1.089; Hauck \& Mermilliod 1980) of Vega with those
predicted by the corresponding Kurucz (1991) model for {\teff} $=$
9400 K, {\lg} $=$ 3.90, {\mh} $=$ $-$0.50.
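The synthetic-photometry step described above reduces to integrating the model flux through each passband and fixing the zero point on Vega. A minimal sketch (the function and variable names are ours, not BaSeL's):

```python
import math

def synth_index(flux, band1, band2, wl, zeropoint):
    """Synthetic colour index m1 - m2 = -2.5 log10(F1/F2) + zero point,
    where F = integral of flux * passband response (trapezoidal rule)."""
    def integrate(resp):
        F = 0.0
        for i in range(len(wl) - 1):
            dw = wl[i+1] - wl[i]
            F += 0.5 * dw * (flux[i]*resp[i] + flux[i+1]*resp[i+1])
        return F
    return -2.5 * math.log10(integrate(band1) / integrate(band2)) + zeropoint
```

The zero point is the constant fixed, as described above, by forcing the Vega model to reproduce its observed colours (e.g. {\mun} $=$ 0.157).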
\section{Effective temperature determination}
\label{sect:effectemp}
\subsection{Str{\"o}mgren data}
Among the approximately sixty SB2 systems gathered in Lastennet
(1998), only 20 have both individual $uvby$ Str{\"o}mgren photometric
indices and uncertainties for each component. Uncertainties are a key
point in the calculation presented later (Sect. \ref{sect:method}).
The photometry used for the 20 systems of our working sample (see
Table 1)
is from the recent Table 5 of Jordi {\al} (1997), who have taken the
individual indices directly from the literature but have also added
their own results for three systems (YZ Cas, WX Cep, and IQ Per).
\begin{table*}[h]
\begin{center}
\caption[]{Str{\"o}mgren photometry for the sample (after Table 5 of Jordi {\al} 1997).
Some useful notes about reddening are given in the last column.}
\begin{flushleft}
\begin{tabular}{lccccll}
\hline\noalign{\smallskip}
System & (b$-$y) & m$_1$ & c$_1$ & {\lg} & E(b$-$y)$^\dag$ & E(b$-$y) \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
BW Aqr & 0.345$\pm$0.015 & 0.15$\pm$0.03 & 0.45$\pm$0.03 & 3.981$\pm$0.020 & 0.03 & 0.03 $^{(a)}$ \\
& 0.325$\pm$0.015 & 0.16$\pm$0.03 & 0.45$\pm$ 0.03 & 4.075$\pm$0.022 & 0.03 & 0.03 $^{(a)}$ \\
AR Aur & -0.043$\pm$0.010 & 0.142$\pm$0.012 & 0.857$\pm$ 0.015 & 4.331$\pm$0.025 & 0. & 0. $^{(z)}$ \\
& -0.021$\pm$0.010 & 0.162$\pm$0.012 & 0.892$\pm$0.015 & 4.280$\pm$0.025 & 0. & 0. $^{(z)}$ \\
$\beta$ Aur & -0.003$\pm$0.026 & 0.162$\pm$0.053 & 1.124$\pm$0.057 & 3.930$\pm$0.010 & 0. & 0. $^{(z)}$ \\
& 0.005$\pm$0.026 & 0.206$\pm$0.053 & 1.121$\pm$0.057 & 3.962$\pm$0.010 & 0. & 0. $^{(z)}$ \\
GZ CMa & 0.077$\pm$0.010 & 0.193$\pm$0.020 & 1.066$\pm$0.025 & 3.989$\pm$0.012 & 0.047 & 0.047$\pm$0.02 $^{(b)}$ \\
& 0.091$\pm$0.010 & 0.216$\pm$0.020 & 1.002$\pm$0.025 & 4.083$\pm$0.016 & 0.047 & 0.047$\pm$0.02 $^{(b)}$ \\
EM Car & 0.310$\pm$0.010 & -0.038$\pm$0.010 & -0.089$\pm$0.010 & 3.857$\pm$0.017 & 0.44 & 0.44 $^{(c)}$ \\
& 0.310$\pm$0.010 & -0.047$\pm$0.010 & -0.076$\pm$0.010 & 3.928$\pm$0.016 & 0.44 & 0.44 $^{(c)}$ \\
YZ Cas & 0.004$\pm$0.006 & 0.186$\pm$0.009 & 1.106$\pm$0.011 & 3.995$\pm$0.011 & 0. & 0. $^{(z)}$ \\
& 0.248$\pm$0.081 & 0.196$\pm$0.166 & 0.309$\pm$0.238 & 4.309$\pm$0.010 & 0. & 0. $^{(z)}$ \\
WX Cep & 0.330$\pm$0.007 & 0.105$\pm$0.012 & 1.182$\pm$0.023 & 3.640$\pm$0.011 & 0.3 & 0. $^{(z)}$ \\
& 0.271$\pm$0.022 & 0.080$\pm$0.036 & 1.190$\pm$0.060 & 3.939$\pm$0.011 & 0.3 & 0. $^{(z)}$ \\
CW Cep & 0.333$\pm$0.010 & -0.071$\pm$0.015 & 0.037$\pm$0.015 & 4.059$\pm$0.024 & 0.46 & 0.46 $^{(d)}$ \\
& 0.339$\pm$0.010 & -0.064$\pm$0.015 & 0.045$\pm$0.015 & 4.092$\pm$0.024 & 0.46 & 0.46 $^{(d)}$ \\
RZ Cha & 0.314$\pm$0.016 & 0.149$\pm$0.027 & 0.480$\pm$0.027 & 3.909$\pm$0.009 & 0.003 & 0. $^{(z)}$ \\
& 0.304$\pm$0.017 & 0.165$\pm$0.029 & 0.468$\pm$0.029 & 3.907$\pm$0.010 & 0.003 & 0. $^{(z)}$ \\
KW Hya & 0.105$\pm$0.005 & 0.243$\pm$0.007 & 0.919$\pm$0.005 & 4.079$\pm$0.013 & 0.01 & 0. $^{(e),(z)}$ \\
& 0.244$\pm$0.011 & 0.210$\pm$0.007 & 0.490$\pm$0.047 & 4.270$\pm$0.010 & 0.01 & 0. $^{(e),(z)}$ \\
GG Lup & -0.049$\pm$0.007 & 0.097$\pm$0.011 & 0.450$\pm$0.012 & 4.301$\pm$0.012 & 0.020 & 0.020 $^{(f)}$ \\
& -0.019$\pm$0.019 & 0.141$\pm$0.032 & 0.811$\pm$0.036 & 4.364$\pm$0.010 & 0.020 & 0.020 $^{(f)}$ \\
TZ Men & -0.025$\pm$0.007 & 0.140$\pm$0.010 & 0.941$\pm$0.010 & 4.225$\pm$0.011 & 0. & 0. $^{(z)}$ \\
& 0.185$\pm$0.007 & 0.176$\pm$0.015 & 0.689$\pm$0.015 & 4.303$\pm$0.009 & 0. & 0. $^{(z)}$ \\
V451 Oph & 0.084$\pm$0.010 & 0.083$\pm$0.020 & 0.940$\pm$0.020 & 4.038$\pm$0.015 & 0.115 & 0.115 $^{(g)}$ \\
& 0.103$\pm$0.010 & 0.109$\pm$0.020 & 0.992$\pm$0.020 & 4.196$\pm$0.015 & 0.115 & 0.115 $^{(g)}$ \\
V1031 Ori & 0.10$\pm$0.01 & 0.17$\pm$0.02 & 1.13$\pm$0.03 & 3.560$\pm$0.008 & 0.05 & 0. $^{(h)}$ \\
& 0.05$\pm$0.01 & 0.16$\pm$0.02 & 1.13$\pm$0.03 & 3.850$\pm$0.019 & 0.05 & 0. $^{(h)}$ \\
IQ Per & 0.056$\pm$0.004 & 0.079$\pm$0.005 & 0.635$\pm$0.011 & 4.208$\pm$0.019 & 0.11 & 0.10$\pm$0.01 $^{(i)}$ \\
& 0.165$\pm$0.049 & 0.089$\pm$0.103 & 0.819$\pm$0.186 & 4.323$\pm$0.013 & 0.11 & 0.10$\pm$0.01 $^{(i)}$ \\
AI Phe & 0.528$\pm$0.010 & 0.308$\pm$0.010 & 0.379$\pm$0.010 & 3.593$\pm$0.003 & 0.015 & 0.015$\pm$0.02 $^{(j)}$ \\
& 0.316$\pm$0.010 & 0.172$\pm$0.010 & 0.421$\pm$0.010 & 4.021$\pm$0.004 & 0.015 & 0.015$\pm$0.02 $^{(j)}$ \\
$\zeta$ Phe & -0.07$\pm$0.02 & 0.13$\pm$0.03 & 0.49$\pm$0.03 & 4.122$\pm$0.009 & 0. & 0. $^{(z)}$ \\
& -0.01$\pm$0.02 & 0.11$\pm$0.03 & 0.77$\pm$0.03 & 4.309$\pm$0.012 & 0. & 0. $^{(z)}$ \\
PV Pup & 0.201$\pm$0.024 & 0.171$\pm$0.041 & 0.628$\pm$0.041 & 4.257$\pm$0.010 & 0.06 & 0. $^{(z)}$ \\
& 0.201$\pm$0.025 & 0.159$\pm$0.043 & 0.640$\pm$0.043 & 4.278$\pm$0.011 & 0.06 & 0. $^{(z)}$ \\
VV Pyx & 0.016$\pm$0.006 & 0.156$\pm$0.010 & 1.028$\pm$0.010 & 4.089$\pm$0.009 & 0.016 & 0.016 $^{(k)}$ \\
& 0.016$\pm$0.006 & 0.156$\pm$0.010 & 1.028$\pm$0.010 & 4.088$\pm$0.009 & 0.016 & 0.016 $^{(k)}$ \\
DM Vir & 0.317$\pm$0.007 & 0.171$\pm$0.010 & 0.480$\pm$0.012 & 4.108$\pm$0.009 & 0.017 & 0.017 $^{(l)}$ \\
& 0.317$\pm$0.007 & 0.171$\pm$0.010 & 0.480$\pm$0.012 & 4.106$\pm$0.009 & 0.017 & 0.017 $^{(l)}$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{flushleft}
\end{center}
\small $^\dag$ this work (cf. Sect \ref{sect:red}) \\ $^{(a)}$ Clausen
(1991); $^{(b)}$ Popper {\al} (1985); $^{(c)}$ Andersen \& Clausen
(1989); $^{(d)}$ Clausen \& Gim\'enez (1991); $^{(e)}$ Our value is
consistent with E(b$-$y)$=$0.009$\pm$0.008 for A-stars (Crawford,
1979); $^{(f)}$ Andersen {\al} (1993) using the (b$-$y)$_0$$-$c$_0$
relation of Crawford (1978); $^{(g)}$ Clausen {\al} (1986) determined
the reddening from the [u$-$b]$-$(b$-$y)$_0$ relation for early-type
stars of Str{\"o}mgren \& Olsen (unpublished) and the
c$_0$$-$(b$-$y)$_0$ relation of Crawford (1973), which give nearly
identical results; $^{(h)}$ Andersen {\al} (1990) used E(b$-$y)$=$ 0.0
but quote E(b$-$y)$=$0.025 as a possible value; $^{(i)}$
E(B$-$V)$=$0.14$\pm$0.01 (Lacy \& Frueh 1985); $^{(j)}$
E(B$-$V)$=$0.02$\pm$0.02 (Hrivnak \& Milone 1984); $^{(k)}$ Andersen
{\al} (1984) using the calibrations of Grosb{\o}l (1978); $^{(l)}$
Moon \& Dworetsky (1985); $^{(z)}$ To the best of our knowledge,
systems for which interstellar reddening has been neglected or
considered as insignificant in the literature.\\ Note: we assume
E(b$-$y)$=$0.73$\times$E(B$-$V) after Crawford (1975). \normalsize\\
\end{table*}
\subsection{Methodology}
\label{sect:method}
To compute synthetic colours from the BaSeL models, we need effective
temperature (\teff), surface gravity ({\lg}), and metallicity
({\feh}). Consequently, given the observed colours (namely, b$-$y,
m$_1$, and c$_1$), we are able to derive {\teff}, {\lg}, and {\feh}
from a comparison with model colours. As the surface gravities can be
derived very accurately from the masses and radii of the stars in our
working sample, only two physical parameters have to be derived
({\teff} and {\feh}).
This has been done by minimizing the $\chi^2$-functional, defined as
\beqa
\chi^2 (T_{\rm eff}, [Fe/H]) = \sum_{i=1}^{n} \left[ \left(\frac{\rm
colour(i)_{\rm syn} - colour(i)}{\sigma(\rm colour(i))}\right)^2
\right],
\eeqa
where $n$ is the number of comparison data, colour(1)= (\by)$_0$,
colour(2)= m$_0$, and colour(3) = c$_0$. The best $\chi^2$ is
obtained when the synthetic colour, colour(i)$_{\rm syn}$, is equal to
the observed one.
Reddening has been taken into account following Crawford (1975):
(b$-$y)$_0$ = (b$-$y) $-$ E(b$-$y), m$_0$ = m$_1$ + 0.3 $\times$
E(b$-$y), c$_0$ = c$_1$ $-$ 0.2 $\times$ E(b$-$y), in order to derive
the intrinsic colours from the observed ones. With $n=3$
observational data (\by, m$_1$, c$_1$) and $p=2$ free parameters
({\teff} and {\feh}), we expect to find a $\chi^2$-distribution with
$q=n-p=1$ degree of freedom. Finding the central minimum value
$\chi^{2}_{\rm min}$, we form the $\chi^2$-grid in the ({\teff},
{\feh})-plane and compute the boundaries corresponding to 1 $\sigma$,
2 $\sigma$, and 3 $\sigma$ respectively. As our sample contains only
stars belonging to the Galactic disk,
we have explored a restricted range of metallicity, $-$1.0 $\leq$
{\feh} $\leq$ +0.5.
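The dereddening relations and the $\chi^2$ functional of this section are straightforward to state in code (a sketch; the synthetic colours would come from the BaSeL grid, which is not reproduced here):

```python
def deredden(by, m1, c1, Eby):
    """Crawford (1975) corrections quoted above."""
    return by - Eby, m1 + 0.3*Eby, c1 - 0.2*Eby

def chi2(syn, obs, sig):
    """The chi^2 functional over the n colour indices."""
    return sum(((s - o) / e)**2 for s, o, e in zip(syn, obs, sig))

# BW Aqr A (Table 1): (b-y, m1, c1) with E(b-y) = 0.03
by0, m0, c0 = deredden(0.345, 0.15, 0.45, 0.03)
```

In the actual computation, the synthetic tuple `syn` is evaluated at each trial point of the ({\teff}, {\feh}) grid, the point with the smallest `chi2` is retained, and the 1-, 2-, and 3-$\sigma$ contours are drawn around it.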
\begin{figure}[!hb]
\centerline{\psfig{file=8144.01.f1,width=\columnwidth,height=8.2truecm,rheight=8.5truecm,angle=-90.}}
\caption{Simultaneous solutions of {\teff} and {\feh} for BW
Aqr A (assuming {\lg} = 3.981): matching (b$-$y) ({\it upper
left}), m$_1$ ({\it upper central}), c$_1$ ({\it upper
right}), (b$-$y), and c$_1$ ({\it lower left}), (b$-$y), and
m$_1$ ({\it lower central}), (b$-$y), m$_1$, and c$_1$ ({\it
lower right}). Best fit ({\it black dot}) and 1-$\sigma$ ({\it
solid line}), 2-$\sigma$ ({\it dashed line}), and
3-$\sigma$({\it dot-dashed line}) confidence levels are also
shown. Previous estimates of {\teff} from Clausen (1991) are
indicated as vertical dotted lines in all panels.}
\label{fig:example}
\end{figure}
Figure \ref{fig:example} illustrates the different steps of the
method, here for BW Aqr A. All possible combinations of observational
data (as indicated on the top of each panel) are explored, hence
varying the number of degrees of freedom for minimizing the $\chi^2$.
The top panels show the results obtained for matching uniquely one
colour index ({\by}, {\mun}, or {\cun}). In these cases, $q = n - p
= -1$, which simply means that it is impossible to fix both {\teff}
and {\feh} with only one observational quantity, as indeed is
illustrated by the three top panels of Fig.\ref{fig:example}. From
(\by) only, the effective temperature boundaries appear to be very
similar across the whole metallicity range, highlighting the fact that
this index is traditionally used to derive {\teff}. Alternatively,
the {\mun} index only provides constraints on the metallicity of the
star. Used together (lower central panel), these two indices outline
restricted ``islands'' of solutions in the ({\teff}, {\feh})-plane,
and hence offer a good combination to estimate these parameters. The
{\cun} index was originally designed to estimate the surface
gravity, but it also appears to be a good indicator of temperature in
the parameter range explored for BW Aqr A (upper right panel). On
the lower right panel, {\em all} the available observational
information ({\by}, {\mun}, {\cun}, and {\lg}) can be exploited. The
range of {\teff} values that we then derive for BW Aqr A agree well
with previous estimates (as indicated by the vertical dotted lines),
and the same is true for its metallicity, which is compatible with the
Galactic disk stars. Finally, in order to take full advantage of all
the observational information available for the stars in our sample,
we choose to estimate {\teff} and {\feh} from a $\chi^2$ minimization
performed on the three colour indices.
\subsection{Surface gravity accuracy and influence of reddening}
\subsubsection{Surface gravity}
As our results depend not only on the accuracy of the photometric
data, but also on that of the surface gravity determination, we
analysed the effect of a variation of {\lg} upon the predicted {\teff}
and {\feh} values. We investigated this ``{\lg} effect'' for the AR
Aur system, for which the known value of {\lg} has the largest
uncertainties in our working sample: for instance, the surface gravity
of the coolest component of AR Aur (AR Aur B) is {\lg} $=$
4.280$\pm$0.025.
For {\lg} $=$ 4.280, the central {\teff} value predicted is about
10,500~K. If we consider {\lg}$-$0.025~dex (left panel in
Fig.~\ref{fig:araur}) or {\lg}$+$0.025~dex (right panel in
Fig.~\ref{fig:araur}), neither the central {\teff} value nor the
pattern of contours changes significantly.
\begin{figure}[ht]
\centerline{
\psfig{file=8144.01.f2,width=4.2cm,height=3.6truecm,rheight=3.9truecm,angle=-90.}
\psfig{file=8144.02.f2,width=4.2cm,height=3.6truecm,rheight=3.9truecm,angle=-90.}
}
\caption{Influence of {\lg} on the simultaneous solution of
{\teff} and {\feh} for AR Aur B. Two different {\lg} values
are considered: {\lg} $=$ 4.255 ({\it left panel}), {\lg} $=$
4.305 ({\it right panel}). These values are 0.025 dex lower
and higher, respectively, than the adopted {\lg} (4.280).} \label{fig:araur}
\end{figure}
This example shows that our results for ({\teff},{\feh}) are
insensitive to variations of the surface gravity within the errors
listed in Table 1.
\subsubsection{Interstellar reddening}
\label{sect:red}
Interstellar reddening is of prime importance for the determination of
both {\teff} and {\feh}. A great deal of attention was therefore
devoted to the E(b$-$y) values available in the literature, for each
star of our sample. We explored different reddening values (as
described in Sect.~\ref{sect:method}) and compared the resulting
$\chi^2$-scores. For the following systems, we adopted the published
values, in perfect agreement with our results: BW Aqr, AR Aur, $\beta$
Aur, GZ Cma, EM Car, CW Cep, GG Lup, TZ Men, V451 Oph, AI Phe, $\zeta$
Phe, VV Pyx, and DM Vir. As we did not find any indication about the
interstellar reddening of YZ Cas, we kept E(b$-$y) $=$ 0 as a
reasonable hypothesis. We found no data on the interstellar reddening
of the WX Cephei system either; however, the hypothesis of no
significant reddening for WX Cep is ruled out by the very high
$\chi^2$-value obtained when simultaneously reproducing the quadruplet
(b$-$y, m$_1$, c$_1$, \lg) of the observed data. From the different
reddening values explored in Table 2, we find that E(b$-$y) $=$ 0.32
for WX Cep A and E(b$-$y) $=$ 0.28 for WX Cep B provide the best
solutions.
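This reddening selection amounts to re-running the fit for each trial
E(b$-$y) and keeping the value with the lowest $\chi^2_{\rm min}$; with the
WX Cep A scores of Table 2 it reads, schematically:

```python
# chi^2_min scores for WX Cep A at each trial E(b-y), taken from Table 2.
trial_eby = [0.00, 0.26, 0.28, 0.30, 0.32, 0.34]
chi2_min = [912.890, 29.039, 10.974, 4.549, 4.009, 4.322]

# Keep the reddening value that yields the lowest chi^2_min.
best_chi2, best_eby = min(zip(chi2_min, trial_eby))
print(best_eby, best_chi2)  # 0.32 4.009
```

The no-reddening hypothesis (E(b$-$y) $=$ 0) is rejected at a glance, its
$\chi^2$ being two orders of magnitude above the best solution.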
\begin{table}[h]
\begin{center}
\caption[]{Influence of reddening on the $\chi^2$ of the
components of the WX Cephei system. The best $\chi^2$ values
are shown in bold. The hypothesis of no reddening is
definitively ruled out.}
\label{tab:WXCEP}
\begin{tabular}{crr}
\hline\noalign{\smallskip}
 & WX Cep A & WX Cep B \\
E(b$-$y) & $\chi^2$-values & $\chi^2$-values \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
0.00 & 912.890 & 115.290 \\
0.26 & 29.039 & 0.933 \\
0.28 & 10.974 & {\bf 0.682} \\
0.30 & 4.549 & 1.210 \\
0.32 & {\bf 4.009} & 2.651 \\
0.34 & 4.322 & 4.972 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table}
The influence of reddening variations is illustrated in
Fig.~\ref{fig:wxcep}. While for the system WX Cep AB an average value
E(\by) $=$ 0.30 appears justified from the results of Table 2 (and
will indeed be adopted in the remainder of this paper), the figure
shows how, for the individual component WX Cep A, small changes
$\Delta$E(\by) $= \pm$0.02 away from its own optimum value E(\by) $=$
0.32 induce significant changes in the possible solutions of the
({\teff}, {\feh})-couples. In particular, going to E(\by) $=$ 0.30
(upper right panel) implies a dramatic jump in the predicted {\feh},
from a plausible metal-normal composition (lower left) to a rather
unlikely metal-poor composition at or near the lower limit
({\feh} $= -1$) of the exploration range.
\begin{figure}[ht]
\centerline{\psfig{file=8144.01.f3,width=4.2cm,height=3.6truecm,rheight=3.9truecm,angle=-90.}
\psfig{file=8144.02.f3,width=4.2cm,height=3.6truecm,rheight=3.9truecm,angle=-90.}}
\centerline{\psfig{file=8144.03.f3,width=4.2cm,height=3.6truecm,rheight=3.9truecm,angle=-90.}
\psfig{file=8144.04.f3,width=4.2cm,height=3.6truecm,rheight=3.9truecm,angle=-90.}}
\caption{Influence of reddening on the simultaneous solutions
of {\teff} and {\feh} for WX Cep A. Different reddening
values are considered: E(b$-$y) $=$ 0.00 ({\it upper left}),
E(b$-$y) $=$ 0.30 ({\it upper right}), E(b$-$y) $=$ 0.32 ({\it
lower left}), and E(b$-$y) $=$ 0.34 ({\it lower right}).
Previous determination of {\teff} from Popper (1987) (using
Popper's 1980 calibrations) is also shown for comparison ({\it
vertical dotted lines}).} \label{fig:wxcep} \end{figure}
For the other four systems for which interstellar reddening has also
been previously neglected in the literature, we found small, but not
significant, E(b$-$y) values: RZ Cha (0.003), KW Hya (0.01), V1031 Ori
(0.05), and PV Pup (0.06).
E(b$-$y) $=$ 0.11 was adopted for IQ Per by comparing
different $\chi^2_{\rm min}$ solutions. This value is consistent
with E(b$-$y) $=$ 0.10$\pm$0.01, estimated from the published value of
E(B$-$V) $=$ 0.14$\pm$0.01 (Lacy \& Frueh 1985), assuming E(b$-$y) $=$
0.73$\times$E(B$-$V) after Crawford (1975). The adopted reddening
values for the stars of our sample are listed in Table 1.
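As a quick arithmetic check of the Crawford (1975) conversion applied to the
Lacy \& Frueh (1985) value:

```python
# E(b-y) = 0.73 * E(B-V), with the error scaled by the same factor.
ebv, sig_ebv = 0.14, 0.01            # E(B-V) from Lacy & Frueh (1985)
eby = 0.73 * ebv
sig_eby = 0.73 * sig_ebv
print(round(eby, 2), round(sig_eby, 2))  # 0.1 0.01
```

This reproduces the quoted E(b$-$y) $=$ 0.10$\pm$0.01, consistent with the
adopted 0.11.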
\subsection{General results and discussion}
In Figures \ref{fig:all1} and \ref{fig:all2} we show the full results
obtained (from {\by}, {\mun}, and {\cun}) for all the stars of the
sample in ({\teff}, {\feh}) planes. All the ({\teff},{\feh})-solutions
inside the contours allow us to reproduce, at different confidence
levels, both the observed Str{\"o}mgren colours ({\by}, {\mun}, and
{\cun}) and the surface gravity with the BaSeL models. As a general
trend, it is important to notice that our {\teff} ranges do not
provide estimates systematically different from previous ones
(vertical dotted lines). Furthermore, the 3-$\sigma$ confidence
regions show that the uncertainties of most previous {\teff} estimates are optimistic,
except for some stars (e.g., GG Lup A, TZ Men A, and V451 Oph A) for
which our method gives better constraints on the estimated effective
temperature. At a 1-$\sigma$ confidence level (68.3\%), our method
often provides better constraints for {\teff} determination. However,
it is worth noting that for a few stars the match is quite poor (see
the $\chi^{2}_{\rm min}$-values labelled directly on Figs. \ref{fig:all1}
and \ref{fig:all2}). As already mentioned, with three observational data
points (\by, m$_1$, c$_1$) and two free parameters ({\teff} and {\feh}), we
expect a $\chi^2$-distribution with 3$-$2$=$1 degree of
freedom and a typical $\chi^{2}_{\rm min}$-value of about 1. For some
stars (e.g. VV Pyx, DM Vir, and KW Hya A), $\chi^{2}_{\rm min}$ is
greater than 10, too high a value to be acceptable, since the
probability of obtaining a minimum $\chi^{2}$ greater than 10 is
less than 0.2{\%}. For this reason, the
results given for a particular star should not be used without carefully
considering the $\chi^{2}_{\rm min}$-value.
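The quoted tail probability can be verified directly: for one degree of
freedom, $P(\chi^2 > x) = {\rm erfc}(\sqrt{x/2})$.

```python
import math

# Tail probability of a chi^2 variable with 1 degree of freedom at x = 10.
p = math.erfc(math.sqrt(10.0 / 2.0))
print(p < 0.002)  # True: P(chi^2 > 10) ~ 0.16%, below the quoted 0.2%
```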
One of the most striking features appearing in nearly all panels of
Figs.
\ref{fig:all1} and \ref{fig:all2} is the considerable range of {\feh}
accepted inside the confidence levels. This is particularly true for
stars hotter than $\sim$ 10,000 K (as, for instance, EM Car A \& B and
GG Lup A \& B), for which optical photometry is quite insensitive to
the stellar metal content. For these stars, a large range in {\feh}
gives very similar $\chi^2$ values. In contrast, for the coolest stars
in our sample, our method provides tight constraints on their
metallicity. Indeed, when observational metallicity indications are
available ($\beta$ Aur, YZ Cas, RZ Cha, AI Phe, and PV Pup), the
contour solutions are in good agreement with the previously estimated
{\feh} ranges (labelled as horizontal lines in Figs.
\ref{fig:all1} and \ref{fig:all2}).
The effective temperatures derived from our minimization procedure
cannot be easily presented in a simple table format, as they are
intrinsically related to metallicity. We nonetheless provide in Table
3, as an indication of the estimated stellar parameters for all the
stars in our sample, the best ($\chi^2_{\rm min}$) simultaneous
solutions ({\teff},{\feh}) for the three following cases: using
{\by} and {\mun} (Case 1), {\by} and {\cun} (Case 2), and
{\by}, {\mun}, and {\cun} (Case 3). In Case 1 and Case 2, a typical
$\chi^{2}_{\rm min}$-value close to zero is theoretically expected,
and in Case 3, as previously mentioned, one expects a typical
$\chi^{2}_{\rm min}$-value of about 1. There are quite a few stars for
which $\chi^2_{\rm min}$ increases dramatically between Case 1 or
Case 2 and Case 3, to a clearly unacceptable value (most notably AI Phe A
between Case 1 and Case 3). This means that although a good fit
is obtained with two photometric indices, no acceptable
$\chi^2_{\rm min}$-value is obtained when one more index is added in
Case 3. Consequently, the Case 1 or Case 2 solutions have to be chosen in
such cases. For these stars, even if the $\chi^2_{\rm min}$
solutions shown in Figs. \ref{fig:all1} and \ref{fig:all2} are not
reliable, it is interesting to note that the derived contours are
nevertheless still in agreement with previous work.
The surprising result in Table 3 is that many solutions are very
metal-poor. This in fact means that the $\chi^{2}_{\rm min}$
solutions are not {\em necessarily} the most realistic ones. We must,
therefore, emphasize that the values presented in Table 3 should not
be used without carefully considering the confidence level contours
shown in Figs. \ref{fig:all1} and \ref{fig:all2}. For most stars in
our sample, {\teff} and {\feh} do not appear strongly correlated (i.e.
the confidence regions do not exhibit oblique shapes), but there are a
few cases for which the assumed metallicity leads to a different range
in the derived effective temperature (EM Car B, CW Cep A \& B, GG Lup
A, $\zeta$ Phe A). These results show that the classical
derivation of {\teff} from calibrations, without exploring all {\feh}
values, is not always reliable, even for hot stars.
\begin{figure*}[ht]
\centerline{\psfig{file=8144.01.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.02.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.03.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.04.f4,width=4.truecm,height=4.5truecm,angle=-90.}}
\centerline{\psfig{file=8144.05.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.06.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.07.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.08.f4,width=4.truecm,height=4.5truecm,angle=-90.}}
\centerline{\psfig{file=8144.09.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.10.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.11.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.12.f4,width=4.truecm,height=4.5truecm,angle=-90.}}
\centerline{\psfig{file=8144.13.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.14.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.15.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.16.f4,width=4.truecm,height=4.5truecm,angle=-90.}}
\centerline{\psfig{file=8144.17.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.18.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.19.f4,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.20.f4,width=4.truecm,height=4.5truecm,angle=-90.}}
\caption{Simultaneous solution of {\teff} and {\feh} matching
(b$-$y)$_0$, m$_0$, c$_0$, and {\lg}. The name of the star and
the $\chi^2_{\mathrm min}$ are labelled directly in each
panel. When available, effective temperature determinations
from previous studies ({\it vertical lines}) and observational
indications of metallicity ({\it horizontal lines}) are also
shown. }
\label{fig:all1}
\end{figure*}
\begin{figure*}[ht]
\centerline{\psfig{file=8144.01.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.02.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.03.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.04.f5,width=4.truecm,height=4.5truecm,angle=-90.}}
\centerline{\psfig{file=8144.05.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.06.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.07.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.08.f5,width=4.truecm,height=4.5truecm,angle=-90.}}
\centerline{\psfig{file=8144.09.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.10.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.11.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.12.f5,width=4.truecm,height=4.5truecm,angle=-90.}}
\centerline{\psfig{file=8144.13.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.14.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.15.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.16.f5,width=4.truecm,height=4.5truecm,angle=-90.}}
\centerline{\psfig{file=8144.17.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.18.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.19.f5,width=4.truecm,height=4.5truecm,angle=-90.}
\psfig{file=8144.20.f5,width=4.truecm,height=4.5truecm,angle=-90.}}
\caption{Same as Fig. \ref{fig:all1}.}
\label{fig:all2}
\end{figure*}
\begin{table*}[h]
\begin{center}
\caption[]{Best simultaneous ({\teff},{\feh}) solutions using
(b$-$y) and m$_1$ (Case 1), (b$-$y) and \\ c$_1$ (Case 2) or
(b$-$y), m$_1$, and c$_1$ (Case 3).}
\begin{flushleft}
\begin{tabular}{lrrrrrrrrr}
\hline\noalign{\smallskip}
& \multicolumn{3}{c}{Case 1} & \multicolumn{3}{c}{Case 2} & \multicolumn{3}{c}{Case 3} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Name & {\teff} & [Fe/H] & $\chi^{2}_{\rm min}$ & {\teff} & [Fe/H] & $\chi^{2}_{\rm min}$
& {\teff} & [Fe/H] & $\chi^{2}_{\rm min}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
BW Aqr & 6220 & -0.5 & 0.01 & 6400 & 0.5 & 0.75 & 6400 & -0.2 & 4.73 \\
& 6400 & -0.4 & 0.00 & 6540 & 0.5 & 0.27 & 6520 & -0.2 & 2.59 \\
AR Aur & 11240 & 0.5 & 0.00 & 10760 & 0.5 & 0.66 & 10800 & 0.4 & 1.26 \\
& 10352 & -0.2 & 0.00 & 10568 & -0.6 & 0.00 & 10544 & -1.0 & 0.58 \\
$\beta$ Aur & 9260 & -0.8 & 0.13 & 9140 & -0.9 & 0.18 & 9140 & -0.9 & 0.32 \\
& 8900 & 0.2 & 0.00 & 9020 & -1.0 & 0.19 & 8960 & -0.7 & 0.35 \\
GZ Cma & 8480 & -1.0 & 0.06 & 8480 & -0.9 & 0.02 & 8480 & -1.0 & 0.08 \\
& 8340 & -0.6 & 0.00 & 8380 & -1.0 & 0.01 & 8360 & -0.5 & 0.04 \\
EM Car & 30640 & -0.9 & 2.72 & 35920 & 0.5 & 0.01 & 34800 & -1.0 & 5.97 \\
& 30000 & -1.0 & 0.50 & 32240 & 0.2 & 0.01 & 34000 & -0.9 & 1.94 \\
YZ Cas & 9080 & -1.0 & 0.00 & 9000 & -1.0 & 2.93 & 9000 & -1.0 & 2.94 \\
& 6820 & -0.2 & 0.00 & 6660 & -1.0 & 0.08 & 6660 & -0.7 & 0.16 \\
WX Cep & 8380 & -1.0 & 1.57 & 8180 & 0.2 & 0.00 & 8280 & -1.0 & 4.55 \\
& 9960 & 0.4 & 0.00 & 9480 & -0.2 & 1.10 & 9480 & -0.4 & 1.21 \\
CW Cep & 31000 & 0.4 & 0.00 & 26800 & -0.3 & 0.01 & 26800 & -0.3 & 1.30 \\
& 29400 & 0.5 & 0.25 & 24000 & 0.5 & 0.00 & 26200 & -0.2 & 0.89 \\
RZ Cha & 6240 & -0.6 & 0.02 & 6500 & 0.5 & 2.06 & 6500 & -0.3 & 7.50 \\
& 6340 & -0.4 & 0.01 & 6520 & 0.5 & 0.40 & 6520 & -0.2 & 2.90 \\
KW Hya & 7860 & -0.8 & 0.06 & 8120 & -1.0 & 34.60 & 8100 & -0.6 & 35.93 \\
& 6900 & 0.0 & 0.03 & 6940 & 0.4 & 0.02 & 6920 & 0.0 & 0.15 \\
GG Lup & 14320 & -0.5 & 1.70 & 14320 & -0.5 & 0.08 & 14320 & -0.5 & 1.71 \\
& 11000 & 0.5 & 0.00 & 11080 & 0.5 & 0.00 & 11080 & 0.5 & 0.01 \\
TZ Men & 10780 & -0.4 & 0.00 & 10255 & 0.2 & 0.01 & 10352 & -0.2 & 5.00 \\
& 7220 & -1.0 & 1.51 & 7440 & 0.4 & 43.51 & 7460 & -0.9 & 71.99 \\
V451 Oph & 11160 & -0.4 & 0.05 & 10620 & 0.3 & 0.05 & 10680 & 0.1 & 0.76 \\
& 10480 & -1.0 & 0.63 & 10080 & -0.8 & 0.25 & 10160 & -0.4 & 2.37 \\
V1031 Ori & 8080 & -1.0 & 5.62 & 7990 & -1.0 & 0.06 & 8050 & -1.0 & 5.91 \\
& 9120 & -0.8 & 0.02 & 9120 & -0.8 & 0.00 & 9120 & -0.8 & 0.02 \\
IQ Per & 13600 & -0.9 & 7.62 & 12760 & -0.7 & 0.10 & 12720 & -0.6 & 9.14 \\
& 8600 & -1.0 & 0.88 & 8120 & 0.5 & 0.09 & 8280 & -1.0 & 1.11 \\
AI Phe & 4860 & -0.9 & 0.11 & 4860 & -0.1 & 42.05 & 5400 & 0.2 & 63.46 \\
& 6360 & -0.3 & 0.03 & 6420 & 0.3 & 0.01 & 6480 & -0.3 & 3.37 \\
$\zeta$ Phe & 12820 & 0.5 & 0.00 & 13620 & 0.2 & 0.01 & 13460 & 0.5 & 0.15 \\
& 11000 & -1.0 & 1.50 & 11400 & -1.0 & 1.34 & 11400 & -1.0 & 2.10 \\
PV Pup & 7440 & -1.0 & 1.21 & 7520 & -1.0 & 0.09 & 7480 & -1.0 & 1.25 \\
& 7440 & -1.0 & 1.71 & 7520 & -0.5 & 0.01 & 7520 & -1.0 & 1.88 \\
VV Pyx & 9260 & -0.9 & 8.55 & 9020 & 0.5 & 0.18 & 9860 & -0.9 & 11.57 \\
& 9260 & -0.9 & 8.55 & 9020 & 0.5 & 0.18 & 9860 & -0.9 & 11.57 \\
DM Vir & 6360 & -0.3 & 0.15 & 6600 & 0.5 & 15.41 & 6620 & -0.2 & 38.85 \\
& 6360 & -0.3 & 0.15 & 6600 & 0.5 & 15.41 & 6600 & -0.2 & 38.31 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{flushleft}
\end{center}
\label{tab:res}
\end{table*}
\subsection{Comparison with Hipparcos parallax}
Very recently, Ribas {\al} (1998) have computed the effective
temperatures of 19 eclipsing binaries included in the Hipparcos
catalogue from their radii, Hipparcos trigonometric parallaxes, and
apparent visual magnitudes corrected for absorption. They used
Flower's (1996) calibration to derive bolometric corrections. Only 8
systems are in common with our working sample. The comparison with our
results is made in Table 4.
Since {\teff} is intrinsically related to metallicity, a direct
comparison is not straightforward: unlike the Hipparcos-derived data,
our results are not given as temperatures with error bars, but as
ranges of {\teff} compatible with a given {\feh}. Thus, the ranges
reported in Tab. 4
are given assuming three different hypotheses: {\feh}$=-$0.2, {\feh}
$=$ 0, and {\feh} $=$ 0.2. The overall agreement is quite
satisfactory, as illustrated in Fig. \ref{fig:hipp}. The
disagreement for the temperatures of CW Cephei can be explained by the
large error of the Hipparcos parallax ($\sigma_{\pi}/\pi \simeq 70$\%).
For such large errors, the Lutz-Kelker correction
(Lutz \& Kelker 1973) cannot be neglected: the average distance is
certainly underestimated and, as a consequence, the {\teff} is also
underestimated in Ribas {\al}'s (1998) calculation. Thus, the
agreement with the results obtained from the BaSeL models is certainly
better than it would appear in Fig. \ref{fig:hipp} and Tab. 4.
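The sense of this bias can be made explicit. For a star whose radius is fixed
by the eclipsing-binary solution, the effective temperature derived from the
bolometric flux and the parallax distance scales as (a standard relation, not
specific to the Ribas et al. analysis):

```latex
f_{\rm bol} = \sigma_{\rm SB}\, T_{\rm eff}^{4}
\left(\frac{R}{d}\right)^{2}
\quad\Longrightarrow\quad
T_{\rm eff} \propto f_{\rm bol}^{1/4}\, R^{-1/2}\, d^{1/2},
```

so an underestimated mean distance $d$ translates directly into an
underestimated {\teff}.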
Similar corrections, of slightly lesser extent, are probably also
indicated for the {\teff} of RZ Cha and GG Lup, which have
$\sigma_{\pi}/\pi > 10${\%} (11.6\% and 11.4\%, respectively). Finally, it
is worth noting that the system with the smallest relative error in
Tab. 4, $\beta$ Aur, shows excellent agreement between {\teff}
(Hipparcos) and {\teff} (BaSeL), which underlines the validity of
the BaSeL models.
\section{Conclusion}
Comprehensive knowledge of the fundamental parameters of single stars
is the basis for modelling star clusters and galaxies. Most
fundamental stellar parameters of the individual components in SB2
eclipsing binaries are known with very high accuracy. Unfortunately,
while masses and radii are well determined, the temperatures strongly
depend on photometric calibrations. In this paper, we have used an
empirically-calibrated grid of theoretical stellar spectra (BaSeL
models) for simultaneously deriving homogeneous effective temperatures
and metallicities from observed data. Although a few stars show an
incompatibility between the observed and synthetic $uvby$ colours if
we try to match the three Str{\"o}mgren indices (\by),
\mun, and \cun, the overall determinations are satisfactory. Moreover,
an acceptable solution is always possible when only considering two
photometric indices, as in Case 1 or Case 2 (see Table 3). The large
range of {\feh} associated with acceptable confidence levels makes it
evident that the classical method to derive {\teff} from
metallicity-independent calibrations should be considered with
caution. We found that, even for hot stars for which we expect
optical photometry to be nearly insensitive to the stellar
metal-content, a change in the assumed metallicity can lead to a
significant change in the predicted effective temperature range.
Furthermore, for cool stars, both {\teff} and {\feh} can be estimated
with good accuracy from the photometric method. The effects of
surface gravity and interstellar reddening have also been carefully
studied. In particular, an apparently minor error in the reddening can
dramatically change the shape of the confidence contours and,
therefore, the derived parameter values. By exploring the best
$\chi^2$-fits to the photometric data, we have derived new
reddening values for some stars (see Table 1). Finally, comparisons
for 16 stars with Hipparcos-based {\teff} determinations show good
agreement with the temperatures derived from the BaSeL models. The
agreement is even excellent for the star having the most reliable
Hipparcos data in the sample studied in this paper. These comparisons
also demonstrate that, while originally calibrated in order to
reproduce the broad-band (UBVRIJHKL) colours, the BaSeL models also
provide reliable results for medium-band photometry such as the
Str{\"o}mgren photometry. This lends significant weight to
the validity of the BaSeL library for synthetic photometry
applications in general.
\begin{table*}[h]
\begin{center}
\caption[]{Effective temperatures from Hipparcos (after Ribas
{\al} 1998) and from BaSeL models \\matching (\by)$_0$, m$_0$,
c$_0$, and {\lg} for the three following metallicities:
{\feh}$=-$0.2, {\feh} $=$ 0
\\and {\feh} $=$ 0.2.}
\begin{flushleft}
\begin{tabular}{lrrcrcrc}
\hline\noalign{\smallskip}
Name & & \multicolumn{2}{c}{{\feh}$=-$0.2} &
\multicolumn{2}{c}{{\feh} $=$ 0.} & \multicolumn{2}{c}{{\feh} $=$ 0.2} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& {\teff}(Hipp.) [K] & {\teff}(BaSeL) [K] & $\sigma$ &
{\teff}(BaSeL) [K] & $\sigma$ & {\teff}(BaSeL) [K] & $\sigma$
\\
\noalign{\smallskip}
\hline\noalign{\smallskip}
$\beta$ Aur & 9230$\pm$150 & [8780,9620] & 1 & [8780,9560] & 1 & [8900,9500] & 1 \\
& 9186$\pm$145 & [8540,9500] & 1 & [8600,9440] & 1 & [8660,9320] & 1 \\
YZ Cas & 8624$\pm$290 & [9000,9120] & 2 & [8920,9240] & 3 & no solution & \\
& 6528$\pm$155 & [6100,7140] & 1 & [6180,7060] & 1 & [6260,7060] & 1 \\
CW Cep & 23804 & [26000,27200] & 1 & [25600,26600] & 1 & [24600,26600] & 2 \\
 & 23272 & [25600,26800] & 1 & [25200,26200] & 1 & [24800,25400] & 1 \\
RZ Cha & 6681$\pm$400 & [6440,6560] & 1 & [6380,6600] & 2 & [6340,6640] & 3 \\
& 6513$\pm$385 & [6420,6580] & 1 & [6460,6540] & 1 & [6420,6580] & 2 \\
KW Hya & 7826$\pm$340 & [8080,8100] & 3 & no solution & & no solution & \\
& 6626$\pm$230 & [6780,7120] & 3 & [6860,6980] & 1 & [6860,7000] & 3 \\
GG Lup & 16128$\pm$2080 & [14080,14260] & 1 & [14020,14140] & 1 & [13780,14140] & 2 \\
& 12129$\pm$1960 & [10920,11320] & 1 & [10920,11320] & 1 & [10920,11320] & 1 \\
TZ Men & 9489$\pm$490 & [10300,10420] & 1 & [10300,10380] & 1 & [10260,10460] & 2 \\
& 6880$\pm$190 & [7340,7460] & 3 & no solution & & no solution & \\
$\zeta$ Phe & 14631$\pm$1150 & [13540,14020] & 1 & [13460,13860] & 1 & [13380,13860] & 1 \\
& 12249$\pm$1100 & [11240,11560] & 1 & [11280,11480] & 1 & [11040,11680] & 2 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{flushleft}
\end{center}
\label{tab:Hipp}
\end{table*}
\begin{figure*}[ht]
\centerline{\psfig{file=8144.01.f6,width=16.cm,height=5.3truecm,rheight=5.5truecm,angle=-90.}}
\caption{Hipparcos- versus BaSeL-derived effective
temperatures for $\beta$ Aur, YZ Cas, CW Cep, RZ Cha, KW Hya,
GG Lup, TZ Men, and $\zeta$ Phe. The errors are not shown on
the Hipparcos axis for CW Cephei (the hottest binary in these
figures). See text for explanation.}
\label{fig:hipp}
\end{figure*}
\begin{acknowledgements}
E. L. gratefully thanks the Swiss National Science Foundation for
financial support and, in particular, Professor R. Buser and the
Astronomisches Institut der Universit\"at Basel for their
hospitality. We acknowledge the referee, Dr Pols, for helpful
comments which have improved the clarity of this paper. This
research has made use of the Simbad database operated at CDS,
Strasbourg, France, and was supported by the Swiss National Science
Foundation.
\end{acknowledgements}
\section{Introduction}
As our nearest galactic neighbors, the Magellanic Clouds offer a unique opportunity
to study the effects of different galactic environments on dust properties.
Their importance has increased with the recent discovery that the dust in
starburst galaxies appears to be
similar to that in the star forming bar of
the Small Magellanic Cloud (SMC) (Calzetti et al. 1994;
Gordon, Calzetti \& Witt 1997; Gordon \& Clayton 1998, hereafter GC).
Understanding the dust extinction properties
of nearby galaxies is a useful tool for interpreting and modeling
observations of a wide range of extragalactic systems.
Previous studies of the LMC extinction have all arrived at similar conclusions,
e.g. the average LMC extinction curve is characterized by a weaker 2175 \AA\ bump
and a stronger far--UV rise than the average Galactic extinction curve.
Two early studies (Nandy et al. 1981;
Koornneef \& Code 1981) found little spatial variation in the LMC extinction
and computed an average LMC extinction curve. However, both samples were dominated
by stars near the 30 Doradus star forming region and it was thus unclear whether
their average curves applied to the LMC as a whole.
A study by Clayton \& Martin (1985) expanded the sample to include a larger number
of non--30 Dor stars and reported tentative evidence for differences between the extinction curves
observed in the 30 Dor region and the rest of the LMC.
Fitzpatrick (1985, hereafter F85) expanded the number of available
reddened stars to 19 including 7 outside of the 30 Dor region, allowing a more
detailed analysis of regional variations. F85 found a significant difference
between the UV extinction characteristic of the 30 Dor region and that outside
the 30 Dor region.
The average 30 Dor UV extinction curve was found to have a lower
bump strength and stronger far--UV rise
($\sim$2 units at $\lambda ^{-1} = 7 \mu$m$^{-1}$) than the non--30 Dor stars.
Fitzpatrick (1986, hereafter F86) expanded the sample by 8 lightly reddened stars
located outside
the 30 Dor region and confirmed the results of F85.
Clayton et al. (1996) measured the extinction toward two LMC stars,
one in 30 Dor and one outside 30 Dor, down to $\sim$ 1000 \AA.
Both far-UV extensions appear to be consistent with extrapolations of
the IUE extinction curves to shorter wavelengths.
As part of a program to quantify the range of extinction behavior in the Local
Group, we have reanalyzed the extinction in the Magellanic Clouds.
In particular, no analysis has been done since the discovery that the
UV extinction along most Galactic sightlines could be described by one parameter,
the ratio of total--to--selective extinction, $R_V=A_V/E(B-V)$
(Cardelli, Clayton, \& Mathis 1989, hereafter CCM).
It is of great interest whether such a relation exists for the Magellanic Clouds.
In this paper, we discuss the results for the LMC. An analysis
of the SMC extinction appears in GC.
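For reference, the CCM relation expresses the normalized extinction as
$A(\lambda)/A(V) = a(x) + b(x)/R_V$ with $x = 1/\lambda$. A minimal sketch of
the optical/near-IR branch follows (coefficients from the published CCM 1989
fit; the UV branch, most relevant for IUE data, has a separate
parameterization not reproduced here):

```python
def ccm_optical(x, r_v):
    """A(lambda)/A(V) for 1.1 <= x = 1/lambda[micron] <= 3.3 (CCM 1989)."""
    y = x - 1.82
    a = (1.0 + 0.17699*y - 0.50447*y**2 - 0.02427*y**3 + 0.72085*y**4
         + 0.01979*y**5 - 0.77530*y**6 + 0.32999*y**7)
    b = (1.41338*y + 2.28305*y**2 + 1.07233*y**3 - 5.38434*y**4
         - 0.62251*y**5 + 5.30260*y**6 - 2.09002*y**7)
    return a + b / r_v

# At the V band (x ~ 1.82 per micron) the curve is normalized to 1 for any R_V.
print(ccm_optical(1.82, 3.1))
print(ccm_optical(1.82, 5.0))
```

The single parameter $R_V$ thus controls the entire curve shape, which is the
Galactic behavior whose applicability to the Magellanic Clouds is tested here.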
\section{The Data and the Computation of Extinction Curves}
\subsection{The Sample}
Our initial sample of reddened stars consisted of that defined by
F85. In an effort to expand the sample we searched the
updated electronic catalog of Rousseau et al. (1978), available via
the SIMBAD database, which consists of $\sim 1800$ LMC stars. Two initial
cuts of the catalog were made: (1) stars with spectral types later
than B4 were discarded and (2) we required $B-V\ge 0$. The first criterion
limits the effects of spectral type mismatches in the resulting
extinction curves, which can be quite large for spectral types later
than about B3 (e.g. F85).
The second criterion removes unreddened or lightly reddened stars from
consideration. We note that all the F85 stars were included in the resulting
sample of $\sim 250$ stars, while none of the F86 stars were,
as they all had $B-V < 0$. We then eliminated emission--line
stars and composite--spectrum objects from our list. The remaining stars
were checked against the IUE database, and all of those for which both
long- and short-wavelength low-dispersion spectra existed (54 stars) were
examined in more detail. Only five stars from this sample were found to be
both significantly reddened and to have high-S/N IUE spectra.
These stars were added to our sample and their
IUE spectra are listed in Table 1.
We selected 67 unreddened comparison stars from the sample of LMC supergiants
in Fitzpatrick (1988) for use in constructing extinction curves with the pair
method (Massa, Savage \& Fitzpatrick 1983).
Approximate UV spectral types for all of our reddened stars and their
respective comparison stars (see below for a discussion of the selection of
extinction pairs) were estimated
from a visual comparison of the IUE spectra to the grid of LMC stars with UV spectral
types given in Neubig \& Bruhweiler (1998). The estimated UV spectral types
are reported in Table 2.
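A schematic of the pair method used to construct these curves (all magnitudes
below are synthetic, purely for illustration): the normalized extinction
follows from differential magnitudes between a reddened star and an unreddened
comparison of matching spectral type.

```python
def extinction_curve(m_red, m_comp, ebv):
    """k(lambda-V) = [m_red(lam) - m_comp(lam) - (V_red - V_comp)] / E(B-V)."""
    dv = m_red["V"] - m_comp["V"]
    return {band: (m_red[band] - m_comp[band] - dv) / ebv
            for band in m_red if band != "V"}

m_red = {"V": 12.70, "B": 12.90, "UV": 14.80}   # synthetic reddened star
m_comp = {"V": 11.72, "B": 11.64, "UV": 12.40}  # synthetic comparison star

# Colour excess from the differential B-V colour of the pair.
ebv = (m_red["B"] - m_red["V"]) - (m_comp["B"] - m_comp["V"])

curve = extinction_curve(m_red, m_comp, ebv)
print(ebv, curve)  # at B the curve equals 1 by construction
```

Spectral-type mismatches between the pair members propagate directly into the
derived curve, which is why the cuts described above matter.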
\begin{deluxetable}{lcc}
\tablewidth{0pt}
\footnotesize
\tablecaption{``New'' Reddened LMC Stars}
\tablehead{
& \colhead{SWP} & \colhead{LWP/LWR} \\
\colhead{SK} & \colhead{Images} & \colhead{Images}
}
\startdata
$-$66 88 & 39129,45383,45384 & LWP18165,23730 \nl
$-$68 23 & 39155 & LWP18198 \nl
$-$69 206 & 36552,39832 & LWP15751 \nl
$-$69 210 & 23270 & LWR17442 \nl
$-$69 279 & 08924 & LWR07672 \nl
\enddata
\end{deluxetable}
\begin{deluxetable}{lcccccccllc}
\tablecaption{Reddened/Unreddened Pairs: Properties}
\tablewidth{0pt}
\footnotesize
\tablehead{
& & \multicolumn{6}{c}{Photometry\tablenotemark{a}} & \multicolumn{2}{c}{Spectral Type\tablenotemark{b}} & \\
\colhead{SK} & \colhead{E(B$-$V)$_{Gal}$\tablenotemark{c}} & \colhead{V} & \colhead{B$-$V} & \colhead{U$-$V} & \colhead{J$-$V} & \colhead{H$-$V} & \colhead{K$-$V} & \colhead{Optical} & \colhead{UV} & \colhead{Key\tablenotemark{d}}
}
\startdata
$-$66 19 & 0.09: & 12.79 & 0.12 & $-$0.66 & $-$0.35 & $-$0.45 & -- & B4 I & B0 Ia & 1 \nl
$-$66 169 & 0.03 & 11.56 & $-$0.13 & $-$1.13 & -- & -- & -- & O9.7 Ia & O9 Ia & \nl
& & & & & & & & & & \nl
$-$66 88 & 0.06 & 12.70 & 0.20 & $-$0.45 & -- & -- & -- & B2: & B3 Ia & 2 \nl
$-$66 106 & 0.07 & 11.72 & $-$0.08 & $-$0.99 & -- & -- & -- & B2 Ia & B3 Ia & \nl
& & & & & & & & & & \nl
$-$67 2 & 0.06 & 11.26 & 0.08 & $-$0.69 & $-$0.18 & $-$0.21 & $-$0.28 & B1.5 Ia & B2 Ia & 3 \nl
$-$66 35 & 0.07: & 11.55 & $-$0.07 & $-$0.95 & -- & -- & -- & B1 Ia & B2 Ia & \nl
& & & & & & & & & & \nl
$-$68 23 & 0.06 & 12.81 & 0.22 & $-$0.39 & -- & -- & -- & OB & B4 Ia & 4 \nl
$-$67 36 & 0.07 & 12.01 & $-$0.08 & $-$0.89 & -- & -- & -- & B2.5 Ia & B3 Ia & \nl
& & & & & & & & & & \nl
$-$68 26 & 0.04 & 11.67 & 0.13 & $-$0.62 & -- & -- & -- & B8: I & B3 Ia & 5 \nl
$-$66 35 & 0.07: & 11.55 & $-$0.07 & $-$0.95 & -- & -- & -- & B1 Ia & B2 Ia & \nl
& & & & & & & & & & \nl
$-$69 108 & 0.08 & 12.10 & 0.27 & $-$0.22 & $-$0.57 & $-$0.67 & $-$0.75 & B3 I & B3 Ia & 6 \nl
$-$67 78 & 0.05 & 11.26 & $-$0.04 & $-$0.77 & -- & -- & -- & B3 Ia & B3 Ia & \nl
& & & & & & & & & & \nl
$-$70 116 & 0.05 & 12.05 & 0.11 & $-$0.61 & $-$0.37 & $-$0.43 & $-$0.57 & B2 Ia & B3 Ia & 7 \nl
$-$67 256 & 0.07 & 11.90 & $-$0.08 & $-$0.97 & -- & -- & -- & B1 Ia & B3 Ia & \nl
& & & & & & & & & & \nl
& & & & & & & & & & \nl
$-$68 129 & 0.07 & 12.77 & 0.03 & $-$0.81 & -- & -- & -- & B0.5 & O9 Ia & 8 \nl
$-$68 41 & 0.05 & 12.0 & $-$0.14 & $-$1.10 & -- & -- & -- & B0.5 Ia & B0 Ia & \nl
& & & & & & & & & & \nl
$-$68 140 & 0.04 & 12.72 & 0.06 & $-$0.77 & $-$0.26 & $-$0.31 & $-$0.37 & B0: & B0 Ia & 9 \nl
$-$68 41 & 0.05 & 12.0 & $-$0.14 & $-$1.10 & -- & -- & -- & B0.5 Ia & B0 Ia & \nl
& & & & & & & & & & \nl
$-$68 155 & 0.02 & 12.72 & 0.03 & $-$0.79 & -- & -- & -- & B0.5 & O8 Ia & 10 \nl
$-$67 168 & 0.03 & 12.08 & $-$0.17 & $-$1.17 & -- & -- & -- & O8 Iaf & O8 Ia & \nl
& & & & & & & & & & \nl
$-$69 206 & 0.08 & 12.84 & 0.14 & $-$0.62 & -- & -- & -- & B2: & O9 Ia & 11 \nl
$-$67 5 & 0.06 & 11.34 & $-$0.12 & $-$1.07 & -- & -- & -- & O9.7 Ib & B0 Ia & \nl
& & & & & & & & & & \nl
$-$69 210 & 0.07 & 12.59 & 0.36 & $-$0.23 & -- & -- & -- & B1.5: & B1 Ia & 12 \nl
$-$66 118 & 0.08 & 11.81 & $-$0.05 & $-$0.91 & -- & -- & -- & B2 Ia & B3 Ia & \nl
& & & & & & & & & & \nl
$-$69 213 & 0.08 & 11.97 & 0.10 & $-$0.65 & $-$0.26 & $-$0.29 & $-$0.33 & B1 & B1 Ia & 13 \nl
$-$70 120 & 0.06 & 11.59 & $-$0.06 & $-$0.94 & 0.21 & 0.25 & 0.14 & B1 Ia & B1.5 Ia & \nl
& & & & & & & & & & \nl
$-$69 228 & 0.06 & 12.12 & 0.07 & $-$0.69 & $-$0.10 & $-$0.14 & $-$0.14 & OB & B2 Ia & 14 \nl
$-$65 15 & 0.12 & 12.14 & $-$0.10 & $-$1.02 & -- & -- & -- & B1 Ia & B1 Ia & \nl
& & & & & & & & & & \nl
$-$69 256 & 0.07 & 12.61 & 0.03 & $-$0.80 & 0.03 & 0.04 & $-$0.02 & B0.5 & B1 Ia & 15 \nl
$-$68 41 & 0.05 & 12.0 & $-$0.14 & $-$1.10 & -- & -- & -- & B0.5 Ia & B0 Ia & \nl
& & & & & & & & & & \nl
$-$69 265 & 0.06 & 11.88 & 0.12 & $-$0.51 & -- & -- & -- & B3 I & B3 Ia & 16 \nl
$-$68 40 & 0.05 & 11.71 & $-$0.07 & $-$0.86 & -- & -- & -- & B2.5 Ia & B3 Ia & \nl
& & & & & & & & & & \nl
$-$69 270 & 0.05 & 11.27 & 0.14 & $-$0.52 & $-$0.32 & $-$0.40 & $-$0.46 & B3 Ia & B2 Ia & 17 \nl
$-$67 228 & 0.03 & 11.49 & $-$0.05 & $-$0.87 & -- & -- & -- & B2 Ia & B2 Ia & \nl
& & & & & & & & & & \nl
$-$69 279 & 0.02 & 12.79 & 0.05 & $-$0.79 & $-$0.19 & $-$0.28 & $-$0.34 & OB0 & O9 Ia & 18 \nl
$-$65 63 & 0.03 & 12.56 & $-$0.16 & $-$1.18 & -- & -- & -- & O9.7 I: & O9 Ia & \nl
& & & & & & & & & & \nl
$-$69 280 & 0.05 & 12.66 & 0.09 & $-$0.65 & $-$0.22 & $-$0.22 & $-$0.33 & B1 & B1.5 Ia & 19 \nl
$-$67 100 & 0.05 & 11.95 & $-$0.09 & $-$0.95 & -- & -- & -- & B1 Ia & B1 Ia & \nl
& & & & & & & & & & \nl
\enddata
\tablenotetext{a}{Optical photometry from Rousseau et al. (1978), F85 and Fitzpatrick (1988). IR photometry from Morgan \& Nandy (1982)
and Clayton \& Martin (1985).}
\tablenotetext{b}{Optical spectral types from Rousseau et al. (1978), F85 and Fitzpatrick (1988). UV spectral types estimated by comparison with LMC UV spectral types of Neubig \& Bruhweiler (1998).}
\tablenotetext{c}{Galactic foreground reddening from Oestreicher et al. (1995); a colon designates an uncertain value.}
\tablenotetext{d}{Key to position in Figure~\ref{fig_ha_map}.}
\end{deluxetable}
An implicit assumption of the pair method
is that the Galactic
foreground reddening is the same
for both the program and comparison stars and, hence, cancels out of the
resulting LMC extinction curve. As pointed out by several
authors (e.g. Schwering \& Israel 1991; Oestreicher, Gochermann, \& Schmidt--Kaler 1995), the Galactic
foreground towards the LMC is quite variable, ranging from $E(B-V)_{Gal}=0.00$ to 0.17.
Schwering \& Israel (1991) constructed a foreground reddening map towards the LMC using HI data
and a relationship between $E(B-V)$ and the HI column density. They examined the
F85 and F86 stars at a spatial resolution of 48\arcmin\ (the resolution
of the HI data) and found systematically higher
Galactic foreground reddening associated with the comparison stars than the reddened stars. Accounting for
this systematic effect reduced the difference between the 30 Dor and non--30 Dor
extinction curves.
Oestreicher et al. (1995) used reddenings to $\sim1400$ LMC foreground stars to construct
a Galactic foreground reddening map with a resolution of $\sim$10\arcmin.
We have quantified the differences in Galactic foreground
reddening for our sample using the higher resolution
map of Oestreicher et al. (1995). For all but one of our pairs in the 30 Dor
sample, the difference in the Galactic foreground reddening between the
reddened and comparison stars is $|\Delta$E(B$-$V)$_{Gal}| \le 0.02$,
while for the non--30 Dor sample,
$|\Delta$E(B$-$V)$_{Gal}| \le 0.03$ for all but one pair as well. There is
no systematic difference in the foreground reddening between program and comparison
stars in either sub--sample with the average
$\Delta$E(B$-$V)$_{Gal}$ being near 0 for both samples. The values for the
Galactic foreground component of the reddening for each star used in the analysis
are given in Table 2.
For the two pairs with large
foreground differences (SK $-$66 19/SK $-$66 169, SK $-$69 228/SK $-$65 15) we
have estimated the maximum effect on the extinction curve
to be less than the photometric uncertainties.
Therefore, we have not corrected the individual
curves for the differences in the Galactic foreground.
\subsection{The Extinction Curves}
Extinction curves were constructed using the standard pair method (e.g. Massa, Savage
\& Fitzpatrick 1983).
Short and long wavelength $IUE$ spectra were extracted using the
$IUE$ NEWSIPS reduction, co--added,
binned to the instrumental resolution of $\sim$5~\AA~ and merged at
the maximum wavelength in the short wavelength spectrum.
Uncertainties in the extinction curve contain terms that depend both
on the broadband photometric uncertainties as well as uncertainties in the
$IUE$ fluxes. The flux uncertainties are now calculated
directly in the NEWSIPS reduction.
For details of our error analysis, the reader is referred
to GC.
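The normalization at the heart of the pair method can be sketched in a few lines. This is an illustrative outline only, with hypothetical input arrays, not the actual reduction pipeline:

```python
import numpy as np

def pair_method_curve(flux_red, flux_comp, V_red, V_comp, dEBV):
    """Standard pair method: the normalized extinction curve
    E(lambda-V)/E(B-V) from the spectra of a reddened star and an
    (assumed intrinsically identical) unreddened comparison star.
    All inputs here are hypothetical example quantities."""
    # Magnitude difference between the two stars at each wavelength
    dm = -2.5 * np.log10(flux_red / flux_comp)
    # Subtract the V-band magnitude difference, normalize by Delta E(B-V)
    return (dm - (V_red - V_comp)) / dEBV
```

With identical comparison-star fluxes, the result reduces to the true extinction $A_\lambda - A_V$ divided by $\Delta E(B-V)$, which is what the curves in this section plot.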
Previous studies suffered
from systematic temperature and luminosity mismatches between the unreddened/reddened
star pairs. These mismatches were evident in the imperfect line cancellations seen
in the extinction curves, especially the Fe~III blend near 5.1~$\micron^{-1}$.
This study minimizes
mismatches by using a larger sample of comparison stars
than was available to previous studies.
Comparison stars for each reddened star were selected to satisfy the three Fitzpatrick
criteria (F85); in addition, we required $\Delta (B-V) \ge 0.15$ between the
reddened and comparison stars to minimize the uncertainties in the extinction curve.
The first
criterion requires that $\Delta (U-B)/\Delta (B-V)$ be appropriate to dust reddening.
The average value of $\Delta (U-B)/\Delta (B-V)$ for the LMC
is $0.83\pm0.1$ (F85). Stars with $0.63 \le \Delta (U-B)/\Delta (B-V) \le 1.03$ were selected.
The second criterion requires that the difference in intrinsic $V$
magnitudes between the comparison and reddened stars be ``small'' ($|\Delta V| < 0.8$).
The $V$ magnitudes of our program stars were dereddened
assuming $R_V = 3.1$.
As all LMC stars are at roughly the same distance, this criterion amounts to assuring comparable
absolute magnitudes between the comparison and reddened stars thus minimizing luminosity
mismatches. The third criterion
requires that the comparison and reddened star UV spectra
be well--matched. This minimizes residual features in the extinction curve
not due to extinction.
This procedure resulted in 3--10
potential comparison stars for each reddened star. Each potential comparison star
was used to compute an extinction curve. The reddened/comparison star pair
which resulted in a curve with the smallest line residuals was adopted.
Five stars from the F85 sample had $\Delta (B-V) < 0.15$ and were discarded,
leaving a total of 19 reddened stars in our study.
The discarded stars comprised three 30 Dor stars (SK
$-$68 126, $-$69 199 and $-$69 282) and two non--30 Dor stars (SK $-$68 107 and $-$71 52).
In addition, five stars have been added, three to the 30 Dor sample (SK $-$69 206,
$-$69 210 and
$-$69 279) and two to the non--30 Dor sample (SK $-$66 88 and $-$68 23).
We have indicated the positions of all of our stars
on an H$\alpha$ map of the LMC in Figure~\ref{fig_ha_map}. A key to the numbering of
the stellar positions
in Figure~\ref{fig_ha_map} is given in Table 2.
\begin{figure}[tbh]
\begin{center}
\plotone{lmc_Ha_map.eps}
\caption{Positions of reddened stars plotted on an H$\alpha$ image. A key to the numbering is
provided in Table 2. \label{fig_ha_map} }
\end{center}
\end{figure}
The final extinction
curves computed for each pair are shown in Figure~\ref{fig_ext_curves} and the
star pairs are listed in Table 2. The extinction curves have been
fit using the Fitzpatrick \& Massa (1990, hereafter FM) parameterization.
The FM fit is a six--parameter fit including a linear background, a Drude profile representing
the 2175 \AA\ bump, and a far--UV curvature term. We emphasize that this parameterization is
empirical and the individual functions describing the extinction curve probably have
limited physical significance (Mathis \& Cardelli 1992).
The FM fits to individual extinction curves are plotted
in Figure~\ref{fig_ext_curves} and the best fit parameters for each curve are
given in Table 3; the functional form of the parameterization is given as a footnote to
Table 3.
In determining the uncertainties on the individual fit parameters we have considered the effects
of two sources of uncertainty, photometric and spectral mismatch. The photometric uncertainties
include those in the broad band optical photometry as well as those in the IUE
fluxes (for a detailed discussion of these uncertainties, see GC).
We estimate their effect on the FM parameters by shifting the extinction
curves upward by $1\sigma$ and downward by $1\sigma$ point--by--point. FM fits were
made to both of the shifted extinction curves and the error in each individual parameter
is taken as one--half the absolute value of the difference in the fit parameters
between the two curves. The photometric uncertainties contribute most significantly to
errors in the FM parameters $C_1, C_2, C_3$, and $C_4$; they have little effect on the
bump parameters $x_0$ and $\gamma$.
The effects of mismatch errors on the FM parameters were taken from Cardelli et al.
(1992).
By varying the spectral type of the comparison star and fitting the resulting extinction
curve, they were able to estimate the uncertainties introduced in the FM fit parameters (Table
6 of Cardelli et al. 1992). We adopt the quadrature sum of these two sources of uncertainty
as our estimate of the uncertainties in the individual FM fit parameters (Table 3).
For weak features (i.e., the weak--bump lines of sight in our sample),
the uncertainties introduced by spectral mismatches may be underestimated by the
adopted uncertainties.
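For reference, the FM parameterization quoted in the footnote to Table 3 is straightforward to evaluate numerically. The sketch below reproduces the linear, Drude, and far--UV curvature terms; the parameter values used in any example call are hypothetical:

```python
import numpy as np

def fm_curve(x, c1, c2, c3, c4, x0, gamma):
    """Fitzpatrick & Massa (1990) parameterization of UV extinction,
    E(lambda-V)/E(B-V), with x = 1/lambda in inverse microns."""
    # Drude profile for the 2175 A bump; height above the linear
    # background at x = x0 is C3/gamma^2 (the bump strength A_bump)
    drude = x**2 / ((x**2 - x0**2)**2 + (x * gamma)**2)
    # Far-UV curvature term, nonzero only for x > 5.9 inverse microns
    fuv = np.where(x > 5.9,
                   0.5329 * (x - 5.9)**2 + 0.05644 * (x - 5.9)**3,
                   0.0)
    return c1 + c2 * x + c3 * drude + c4 * fuv
```

Evaluating at $x = x_0$ shows directly why $C_3/\gamma^2$, tabulated in the last column of Table 3, measures the bump height above the linear background.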
\begin{deluxetable}{lcccccccc}
\tablewidth{0pt}
\scriptsize
\tablecaption{FM Fit Parameters}
\tablehead{
& & \multicolumn{6}{c}{FM Fit Parameters\tablenotemark{a}} & \\
\colhead{SK} & \colhead{$\Delta$(B$-$V)} & \colhead{$x_{0}$} & \colhead{$\gamma$} & \colhead{$C_1$} & \colhead{$C_2$} & \colhead{$C_3$} & \colhead{$C_4$} & \colhead{$C_3/\gamma^2$}
}
\startdata
& \multicolumn{7}{c}{LMC--Average Sample} & \\
\cline{1-9}
& & & & & & & & \nl
$-$66 19 & 0.25 & 4.653$\pm$0.010 & 0.97$\pm$0.07 & $+$0.09$\pm$0.44 & 0.75$\pm$0.11 & 2.34$\pm$0.42 & 0.91$\pm$0.12 & 2.49$\pm$0.57 \nl
$-$66 88 & 0.28 & 4.579$\pm$0.019 & 1.03$\pm$0.06 & $-$0.88$\pm$0.38 & 1.00$\pm$0.13 & 2.77$\pm$0.46 & 0.48$\pm$0.10 & 2.61$\pm$0.53 \nl
$-$67 2 & 0.15 & 4.625$\pm$0.010 & 1.08$\pm$0.07 & $-$3.59$\pm$0.40 & 1.67$\pm$0.26 & 3.71$\pm$0.46 & 0.91$\pm$0.20 & 3.18$\pm$0.57 \nl
$-$68 23 & 0.30 & 4.513$\pm$0.037 & 1.05$\pm$0.06 & $+$0.11$\pm$0.42 & 0.65$\pm$0.10 & 4.28$\pm$0.84 & 0.71$\pm$0.14 & 3.88$\pm$0.88 \nl
$-$68 26 & 0.20 & 4.671$\pm$0.012 & 1.10$\pm$0.06 & $-$0.64$\pm$0.43 & 0.90$\pm$0.13 & 3.76$\pm$0.44 & 0.43$\pm$0.11 & 3.11$\pm$0.50 \nl
$-$68 129 & 0.17 & 4.587$\pm$0.011 & 0.73$\pm$0.06 & $-$1.48$\pm$0.39 & 1.26$\pm$0.19 & 1.50$\pm$0.42 & 0.72$\pm$0.16 & 2.81$\pm$0.91 \nl
$-$69 108 & 0.31 & 4.574$\pm$0.011 & 1.04$\pm$0.06 & $-$1.25$\pm$0.39 & 0.98$\pm$0.11 & 4.31$\pm$0.44 & 0.54$\pm$0.10 & 3.98$\pm$0.61 \nl
$-$69 206 & 0.26 & 4.519$\pm$0.034 & 0.65$\pm$0.05 & $-$1.40$\pm$0.38 & 1.23$\pm$0.14 & 1.08$\pm$0.43 & 0.38$\pm$0.11 & 2.56$\pm$1.09 \nl
$-$69 210 & 0.41 & 4.669$\pm$0.011 & 0.67$\pm$0.06 & $-$1.15$\pm$0.37 & 1.12$\pm$0.11 & 1.42$\pm$0.41 & 0.52$\pm$0.11 & 3.16$\pm$1.07 \nl
$-$69 213 & 0.16 & 4.570$\pm$0.017 & 0.77$\pm$0.05 & $-$2.62$\pm$0.37 & 1.56$\pm$0.24 & 2.08$\pm$0.46 & 0.83$\pm$0.21 & 3.51$\pm$0.90 \nl
& & & & & & & & \nl
Average\tablenotemark{b} & 0.25 & 4.596$\pm$0.017 & 0.91$\pm$0.05 & $-$1.28$\pm$0.34 & 1.11$\pm$0.10 & 2.73$\pm$0.37 & 0.64$\pm$0.06 & 3.13$\pm$0.16 \nl
& & & & & & & & \nl
& \multicolumn{7}{c}{LMC 2 Sample} & \nl
\cline{1-9}
& & & & & & & & \nl
$-$68 140 & 0.20 & 4.559$\pm$0.022 & 1.07$\pm$0.09 & $-$1.02$\pm$0.40 & 1.13$\pm$0.17 & 1.62$\pm$0.41 & 0.77$\pm$0.14 & 1.41$\pm$0.43 \nl
$-$68 155 & 0.20 & 4.663$\pm$0.011 & 0.91$\pm$0.07 & $-$4.38$\pm$0.42 & 1.82$\pm$0.23 & 2.06$\pm$0.41 & 0.30$\pm$0.11 & 2.49$\pm$0.62 \nl
$-$69 228 & 0.17 & 4.658$\pm$0.016 & 1.26$\pm$0.13 & $-$2.33$\pm$0.38 & 1.20$\pm$0.18 & 2.30$\pm$0.42 & 0.17$\pm$0.09 & 1.45$\pm$0.40 \nl
$-$69 256 & 0.17 & 4.622$\pm$0.038 & 1.21$\pm$0.05 & $-$2.50$\pm$0.37 & 1.30$\pm$0.19 & 2.15$\pm$0.51 & 0.30$\pm$0.11 & 1.47$\pm$0.37 \nl
$-$69 265 & 0.19 & 4.627$\pm$0.018 & 0.92$\pm$0.10 & $-$2.47$\pm$0.39 & 1.37$\pm$0.18 & 0.88$\pm$0.41 & 0.18$\pm$0.11 & 1.04$\pm$0.54 \nl
$-$69 270 & 0.19 & 4.651$\pm$0.011 & 1.12$\pm$0.09 & $-$2.26$\pm$0.37 & 1.53$\pm$0.21 & 2.66$\pm$0.45 & 0.74$\pm$0.15 & 2.12$\pm$0.49 \nl
$-$69 279 & 0.21 & 4.603$\pm$0.016 & 0.84$\pm$0.06 & $-$2.73$\pm$0.37 & 1.36$\pm$0.16 & 1.33$\pm$0.42 & 0.17$\pm$0.10 & 1.88$\pm$0.65 \nl
$-$69 280 & 0.18 & 4.618$\pm$0.016 & 0.74$\pm$0.06 & $-$0.51$\pm$0.49 & 0.96$\pm$0.14 & 1.15$\pm$0.41 & 0.64$\pm$0.15 & 2.10$\pm$0.82 \nl
$-$70 116 & 0.19 & 4.637$\pm$0.024 & 1.42$\pm$0.10 & $-$1.22$\pm$0.45 & 1.09$\pm$0.15 & 3.13$\pm$0.41 & 0.54$\pm$0.13 & 1.55$\pm$0.30 \nl
& & & & & & & & \nl
Average\tablenotemark{b} & 0.19 & 4.626$\pm$0.010 & 1.05$\pm$0.07 & $-$2.16$\pm$0.36 & 1.31$\pm$0.08 & 1.92$\pm$0.23 & 0.42$\pm$0.08 & 1.72$\pm$0.14 \nl
& & & & & & & & \nl
& \multicolumn{7}{c}{Milky Way average\tablenotemark{c}} & \nl
\cline{1-9}
& & & & & & & & \nl
& -- & 4.596$\pm$0.002 & 0.96$\pm$0.01 & 0.12$\pm$0.11 & 0.63$\pm$0.04 & 3.26$\pm$0.11 & 0.41$\pm$0.02 & 3.49$\pm$0.07 \nl
\enddata
\tablenotetext{a}{Analytic fit to extinction curve following FM: \nl
\begin{center}
$\frac{\Delta(\lambda-V)}{\Delta(B-V)}=C_1+C_2x+C_3D(x)+C_4F(x),$
\end{center}
where $x=\lambda^{-1}$, and \nl
\begin{center}
$D(x) = \frac{x^2}{(x^2-x_0^2)^2+x^2\gamma^2}.$
\end{center}
\begin{center}
$F(x) = 0.5329(x-5.9)^2+0.05644(x-5.9)^3~~~(x > 5.9)$
\end{center}
and $F(x) = 0$ otherwise.
}
\tablenotetext{b}{Uncertainties in the averages quoted as the standard deviation of the sample mean for the
respective samples, e.g., $\sigma_{i}/\sqrt{N}$.}
\tablenotetext{c}{From the Galactic data of FM. Errors are the standard deviation of the sample mean.}
\end{deluxetable}
\begin{figure}[tbp]
\begin{center}
\plotfour{SK19-66_ext.eps}{SK88-66_ext.eps}
{SK2-67_ext.eps}{SK23-68_ext.eps}
{SK26-68_ext.eps}{SK108-69_ext.eps}
{SK116-70_ext.eps}{SK129-68_ext.eps}
\end{center}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\plotfour{SK140-68_ext.eps}{SK155-68_ext.eps}
{SK206-69_ext.eps}{SK210-69_ext.eps}
{SK213-69_ext.eps}{SK228-69_ext.eps}
{SK256-69_ext.eps}{SK265-69_ext.eps}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\plotthree{SK270-69_ext.eps}{SK279-69_ext.eps}
{SK280-69_ext.eps}
\caption{Individual LMC extinction curves. Optical data are included and,
when available, IR data. We have plotted the FM fits and CCM curves
offset by 4 and 9 units, respectively, along with the re--binned
extinction curve. Where measured values of R$_V$ are available, CCM
curves for the measured R$_V$ (solid line) and R$_V\pm \sigma_{R_V}$
are plotted (dotted line). When no measured value of R$_V$ was
available, the ``best fit'' CCM curve is plotted. If no single
value of R$_V$ provided an adequate fit, a CCM curve with
R$_V = 3.1$ is plotted. \label{fig_ext_curves} }
\end{center}
\end{figure}
We determined $R_V$ values for all of the reddened stars in our sample which
had R, I, J, H, and/or K observations. Eleven reddened stars had measurements
in at least three of these bands (Morgan \& Nandy 1982; Clayton \& Martin 1985).
Intrinsic colors were taken from Johnson (1966) and Winkler (1997) assuming the
reddened stars' UV spectral types. The $R_V$ values were determined by assuming
all extinction laws take the form of Rieke \& Lebofsky (1985) (CCM). The
uncertainties were calculated from the range of $R_V$ values within the 67\%
confidence interval of the reduced $\chi^2$ statistic (Taylor 1982). The $R_V$ values
and uncertainties are given in Table~4.
We do not include $-$69 256 in Table~4 or in any of the subsequent analysis using
measured $R_V$ values as its value of $R_V$ is very uncertain (1.55$\pm$1.18).
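The logic of such a $\chi^2$ determination of $R_V$ can be illustrated with a simple grid search. The sketch below assumes the CCM infrared form $A_\lambda/A_V = a(x) + b(x)/R_V$ with $a(x)=0.574x^{1.61}$ and $b(x)=-0.527x^{1.61}$; the data arrays, uncertainties, and grid are hypothetical examples, not our actual fitting code:

```python
import numpy as np

def ccm_ir(x, rv):
    """CCM (1989) infrared extinction A(lambda)/A(V), valid for
    x = 1/lambda between about 0.3 and 1.1 inverse microns."""
    a = 0.574 * x**1.61
    b = -0.527 * x**1.61
    return a + b / rv

def fit_rv(x, elv_ebv, sigma, rv_grid=np.arange(1.5, 6.01, 0.01)):
    """Grid-search R_V minimizing chi^2 between observed color-excess
    ratios E(lambda-V)/E(B-V) and the CCM prediction
    R_V * (A_lambda/A_V - 1)."""
    chi2 = [np.sum(((elv_ebv - rv * (ccm_ir(x, rv) - 1.0)) / sigma)**2)
            for rv in rv_grid]
    return rv_grid[int(np.argmin(chi2))]
```

The $1\sigma$ uncertainty then follows from the range of grid values whose $\chi^2$ stays within the 67\% confidence bound, as described in the text.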
\begin{deluxetable}{lcc}
\tablewidth{0pt}
\footnotesize
\tablecaption{Measured R$_V$ Values.}
\tablehead{
\colhead{SK} & \colhead{R$_V$} & \colhead{ $\sigma _{R_V}$ }
}
\startdata
$-$66 ~~19 & 2.46 & 0.25 \nl
$-$67 ~~~2 & 2.31 & 0.44 \nl
$-$69 108 & 2.61 & 0.15 \nl
$-$70 116 & 3.31 & 0.20 \nl
$-$68 140 & 2.76 & 0.35 \nl
$-$69 213 & 2.16 & 0.30 \nl
$-$69 228 & 2.23 & 0.74 \nl
$-$69 270 & 2.71 & 0.11 \nl
$-$69 279 & 2.43 & 0.31 \nl
$-$69 280 & 2.56 & 0.39 \nl
\enddata
\end{deluxetable}
\section{Discussion}
\subsection{Average Curves}
\subsubsection{30 Dor/Non--30 Dor}
A very important result from previous work on the LMC was the apparent difference
between UV extinction properties in the 30 Dor region and other sightlines in the
LMC (Clayton \& Martin 1985; F85, F86).
Reddened stars were assigned to the non--30 Dor
($d_{proj} \ge 1~kpc$, 7 objects) and 30 Dor
($d_{proj} < 1~kpc$, 12 objects)
samples based
on their projected distance from R~136 as in previous studies.
We have calculated average extinction curves
for our new 30 Dor and
non--30 Dor samples, weighting the individual curves by their uncertainties.
The FM parameters
of the average curves were calculated as the sample mean and the uncertainties
for the average FM parameters are the standard deviation of the mean
for the respective samples, e.g., $\sigma_{i}/\sqrt{N}$.
Formal FM fits to the average curves yielded identical parameters within the uncertainties.
In Figure~\ref{fig_30dor_n30dor}, the new average extinction curves for 30 Dor and non--30 Dor are
shown with the results of F86 plotted for comparison. The extinction curves of
F85 and F86 are virtually the same but their uncertainty estimates are quite different.
At 7.0 $\micron^{-1}$, the difference between the Fitzpatrick 30 Dor and non--30 Dor
curves is 1.86 $\pm$ 0.41 (F85). In F86, the uncertainties are estimated to be about
twice as large, making the difference about
2$\sigma$. Our results are similar to F86, but the 30 Dor curve is slightly lower and the
non--30 Dor curve slightly higher in our averages. We find the difference between
the average curves at 7.0 $\micron^{-1}$ to be 0.89 $\pm$ 0.53. Thus the significance of the
difference in far--UV extinction between the 30 Dor and non--30 Dor samples is reduced,
to only slightly more than 1.5$\sigma$.
The difference in bump strength between our 30 Dor and non--30 Dor
samples is slightly more significant.
Our average non--30 Dor
bump strength ($A_{bump} = C_3/\gamma ^2$ = 2.97$\pm$0.30) is slightly larger
than that of F86 ($A_{bump} = 2.58$).
This is not unexpected as
we have included two new lines of sight
with strong bumps in our non--30 Dor average. In addition, the improvements
realized by using IUE spectra reduced with NEWSIPS are
most apparent near the bump.
Our average 30 Dor bump strength ($A_{bump}=2.12\pm0.20$) is only
slightly larger
than that found by F86 ($A_{bump}=1.86$).
The difference in bump strength between our 30 Dor and non--30 Dor
samples is $\Delta A_{bump}=0.85 \pm 0.36$, slightly greater than $2\sigma$.
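As a quick arithmetic check, the quoted uncertainty on the bump-strength difference follows from adding the two standard errors in quadrature:

```python
import math

# Mean bump strengths and standard errors quoted in the text
a_non30, s_non30 = 2.97, 0.30   # non--30 Dor sample
a_30, s_30 = 2.12, 0.20         # 30 Dor sample

diff = a_non30 - a_30                    # difference in A_bump
sigma = math.sqrt(s_non30**2 + s_30**2)  # quadrature sum of errors
print(f"{diff:.2f} +/- {sigma:.2f}")     # prints "0.85 +/- 0.36"
```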
\begin{figure}[tbp]
\begin{center}
\plotone{F85_30n30_comp.eps}
\caption{Comparison of average 30 Dor (dashed line this study, upper dotted
line F86) and non--30 Dor (solid line this study, lower dotted line F86)
extinction curves. 1$\sigma$ error bars for the new 30 Dor and non--30 Dor average curves
have been plotted at various wavelengths. \label{fig_30dor_n30dor}}
\end{center}
\end{figure}
\subsubsection{LMC~2/LMC--general}
The conclusion drawn by F86 that there are significant intrinsic variations
between extinction curves within each of the 30 Dor and non--30 Dor
samples is strengthened by the additional lines of sight included in this study.
In the non--30 Dor sample, for instance, SK $-$68 23 has a strong bump and SK $-$70 116
has almost no bump. Similar differences are seen in the 30 Dor sample.
To isolate a sample of sightlines with weak bumps, we have plotted bump strength
versus $\Delta$(B$-$V) in Figure~\ref{fig_bs_ebv}. We discovered a group of stars
with similar reddenings (0.17 $\leq \Delta(B-V) \leq$ 0.21) and similar bump strengths that also
lie close together spatially in the LMC. These
stars lie in or near the region occupied by the supergiant shell LMC 2 on the southeast
side of 30 Dor (Meaburn 1980; see Figure~\ref{fig_ha_map}).
This structure, which is 475 pc in radius,
was formed by the combined stellar winds and supernova explosions from the stellar
association within (Caulet \& Newell 1996).
There are nine stars in the LMC 2 group, eight of which are from the 30 Dor sample
and one (SK $-$70 116) from the non--30 Dor sample.
Four 30 Dor stars (SK $-$68 129, $-$69 206, $-$69 210 and $-$69 213) are excluded from
our new LMC~2 sample.
These four stars lie in or near
a prominent dust lane separating
the 30 Dor star formation region from the LH~89 and NGC~2042 stellar associations;
SK $-$69 206 is on the south--eastern edge of the dust lane near the
stellar association LH~90 while SK $-$69 210 is in the middle of the
dust lane, coincident with CO clouds 7 \& 8 of Johansson et al. (1998).
While located in the traditional 30 Dor region, these sightlines
have strong bumps, typical of the non--30 Dor dust.
\begin{figure}
\begin{center}
\plotone{bsvsebv.eps}
\caption{Plot of the bump strength normalized to $\Delta$(B$-$V) vs. $\Delta$
(B$-$V). Symbols
represent the samples discussed in the text. \label{fig_bs_ebv}}
\end{center}
\end{figure}
An average extinction curve has been calculated for the LMC~2 stars and also for the
remaining ten stars, which we will call LMC--general. FM parameters and their respective
uncertainties were calculated as above and are reported in Table 3. The parameters for the
average Galactic curve
as derived from FM are also shown for comparison.
The average curves for LMC 2 and LMC--general samples
are plotted in Figure~\ref{fig_lmc2_lmcave}.
These two curves
show a very significant difference in bump strength ($\Delta A_{bump}$ = 1.41 $\pm$ 0.21)
but the far--UV curves lie within one sigma of each other. It is worth noting that the
average Galactic bump strength is very similar to that of the LMC--general sample.
In Figure~\ref{fig_dispersion}
we have over--plotted individual curves within each sample.
The dispersion about the mean
bump strength is significantly less for both the LMC--general sample compared to
the non--30 Dor sample (0.50 and 0.78, respectively; Figure~\ref{fig_dispersion}a)
and for the LMC 2 sample
compared to the
30 Dor sample (0.43 and 0.72, respectively; Figure~\ref{fig_dispersion}b).
\begin{figure}[tbp]
\begin{center}
\plotone{LMC2gen_30n30_comp.eps}
\caption{Comparison of 30 Dor/non--30 Dor average curves from this study
(dashed line and solid line, respectively) with the LMC average and LMC 2
average curves discussed in the text (dotted line and dot-dash line,
respectively). 1$\sigma$ error bars for the new 30 Dor and non--30 Dor average curves
have been plotted at various wavelengths. \label{fig_lmc2_lmcave}}
\end{center}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\plottwo{disp_n30gen.eps}{disp_30lmc2.eps}
\caption{(a) Comparison of the individual curves within the non--30 Dor sample
(lower curves) and LMC--general sample (upper curves). The LMC--general curves
have been offset 6 units for clarity. (b) Same as (a) for the 30 Dor sample
(lower curves) and LMC 2 sample (upper curves). The LMC 2 curves have been offset
by 6 units for clarity. \label{fig_dispersion}}
\end{center}
\end{figure}
\subsection{Variations Within the Samples}
The form of the UV extinction, as parameterized by FM,
along a given line of sight is potentially
a powerful diagnostic of the dust grains responsible for the extinction, but
the physical interpretation of variations and correlations among the FM
parameters is unclear.
However, to the degree that they represent underlying physical processes, it
is useful to examine them within our two LMC samples.
The coefficients of the linear component of the UV extinction ($C_1 + C_2x$) are
not independent in the Galaxy (Fitzpatrick \& Massa 1988) but are in fact themselves
linearly related. Fitzpatrick \& Massa (1988) interpret the relationship as arising from either
a single grain population modified by evolutionary processes, a varying mixture of several
grain populations with different UV extinction slopes, or a combination of both.
While $C_1$ and $C_2$ have similar values between the two LMC samples, both
LMC samples exhibit systematically smaller values of $C_1$ and systematically larger
values of $C_2$ relative to the Galaxy. In the SMC, the values of $C_1$ and $C_2$
are even more extreme than in the LMC (GC).
However, the values of $C_1$
and $C_2$ for all these galaxies follow the same
linear relationship (see Figure~\ref{fig_FMparam}a).
Hence, whatever underlying physical processes
or dust components
are responsible for the variations in the linear part of the UV extinction must
operate similarly in the Galaxy, the LMC and the SMC.
Fitzpatrick \& Massa (1988) suggested a possible correlation between the
FM parameters $C_4$, which measures the far UV curvature, and $\gamma$, the bump
width. In Figure~\ref{fig_FMparam}b we plot $C_4$ against $\gamma$ for the Galaxy,
the LMC, and the SMC. Only one SMC sightline (AzV~456) is included since
the remaining three sightlines have no bump and $\gamma$ is undefined (GC).
There is no correlation between these parameters in the LMC extinction data.
The physical significance of $C_4$ is unclear; the far UV extinction
is a combination of the linear term and the $C_4$ polynomial term and the separation
is mathematical rather than physical (CCM).
However, such a correlation may arise if $C_4$ and $\gamma$ are due to different
grain populations provided that the different populations
respond to environmental factors in a similar way (Fitzpatrick \& Massa 1988).
This is consistent with the conclusion
of CCM that the processes producing changes in extinction must be efficient over a range
of particle sizes and compositions. In this case, the absence of correlation in the LMC
would suggest that environmental processes are affecting the different grain populations
differently.
\begin{figure}[tbp]
\begin{center}
\plottwo{c1_vs_c2.eps}{c4_vs_gam.eps}
\caption{(a) Plot of $C_1$ vs. $C_2$ for the LMC--general sample,
LMC~2 sample, the SMC, and the Galaxy. The dashed line is the least--squares fit to the Galactic data
given by Fitzpatrick \& Massa (1988). (b) Plot of $C_4$ vs. $\gamma$. Symbols are as in
Figure 7a. In both figures, Galactic data from FM, SMC data from GC. \label{fig_FMparam}}
\end{center}
\end{figure}
The FM parameters $C_1$, $C_2$, $C_3$, and $C_4$
depend on $R_V$ and so interpreting
relations among them in the absence of $R_V$ information is difficult.
CCM found that the general shapes of the UV extinction curves in the Galaxy,
expressed as $A_{\lambda}/A_{V}$,
are well represented by a one parameter family of curves characterized by the
value of $R_V$.
It is of interest to determine whether the UV extinction in the LMC follows the
relation of CCM and whether the deviations from CCM in the LMC can be
related to deviations seen in the Galaxy.
We will discuss the FM bump parameters ($x_0$ and $\gamma$) separately in \S 3.2.2.
\subsubsection{CCM and the LMC}
There is an average Galactic extinction relation, $A_{\lambda}/A_{V}$, over
the wavelength range 0.125 $\micron$ to 3.5 $\micron$, which is applicable
to a wide range of interstellar dust environments, including lines of
sight through diffuse dust, dark cloud dust, as well as that
associated with star formation (CCM;
Cardelli \& Clayton 1991; Mathis \& Cardelli 1992). The
existence of this relation, valid over a large wavelength interval,
suggests that the environmental processes which modify the grains are
efficient and affect all grains. The CCM
relation depends on only one parameter,
$R_V$, which is a
crude measure of the size distribution of interstellar dust.
Only eleven LMC sightlines in our sample have measured values of $R_V$. Seven of these
are in the LMC 2 sample. The CCM curves for these eleven stars are plotted in
Figure~\ref{fig_ext_curves}.
The LMC 2 curves cannot be fit by a CCM curve with
any value of $R_V$ because of their weak bumps.
The average LMC--general curve is very similar to a Galactic CCM extinction curve with $R_V=2.4$.
However, only four stars in this sample have measured $R_V$ values so it is not clear how well
their extinction curves follow CCM.
SK $-$67 2 and $-$69 213 have stronger far--UV extinction than their respective CCM curves
while SK $-$66 19 appears too weak in the bump. Only SK $-$69 108 clearly follows the
CCM relationship.
In Figures~\ref{fig_bs_rv} and \ref{fig_a1300_rv} we plot bump strength and $A_{1300}/A_V$ versus
$R_V^{-1}$ for ten stars with measured $R_V$'s; SK $-$69 256 is excluded due to its very
uncertain $R_V$ value. Figure~\ref{fig_bs_rv} shows that bump
strength is consistent with CCM for the LMC--general sample while the LMC 2 sample
has bumps which fall below the typical CCM values.
Very little can be said about the relationship with $R_V$ in the far UV
as seen in Figure~\ref{fig_a1300_rv}.
The uncertainties are quite large and though both the LMC 2 and LMC--general sample
appear to be consistent with CCM in the UV, they are also consistent with no $R_V$
dependence. More accurate values of $R_V$ along more sightlines must be obtained before
it can be determined whether a CCM--like relationship may hold in the LMC.
According to CCM, $C_3$ and therefore $A_{bump}$ are proportional
to $R_V$ (Mathis \& Cardelli 1992).
However, since $C_3$(LMC--general)/$C_3$(LMC 2) = 2.25, that would imply that the average value
of $R_V$ should be more than twice as large in the LMC--general sample if a CCM--like
relationship exists. There is
no indication from the available data that this is true. In fact, the sightlines in both
samples appear to have low values of $R_V$ relative to the Galaxy (Table 4). This may indicate that
dust grains in the LMC are systematically smaller than in the Galaxy.
\begin{figure}
\begin{center}
\plotone{bsvsrv_1.eps}
\caption{Bump strength normalized to A$_V$ plotted vs. R$_V^{-1}$ for LMC stars with measured
values of R$_V$. The solid line represents the mean CCM relationship and the dotted lines the
approximate dispersion around the mean for the CCM sample of Galactic stars.
Symbols represent the samples
discussed in the text. For comparison, several
Galactic stars with ``unusual'' extinction curves are plotted. Bump strengths for the Galactic
stars were taken from Cardelli \& Savage (1988) (HD 29647) and Welty \& Fowler (1992) (HD 62542
\& HD 210121). R$_V$ values are from Messinger et. al. (1997) (HD 29647),
Whittet et. al. (1993) (HD 62542) and
Larson et. al. (1996) (HD 210121). \label{fig_bs_rv}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\plotone{A1300_Av_Rv.eps}
\caption{Plot of the extinction ratio $A_{1300}/A_V$ vs. $R_V^{-1}$ where
$A_{1300}$ is the extinction at $\lambda=1300$~\AA\ plotted as in Figure~\ref{fig_bs_rv}.
\label{fig_a1300_rv}}
\end{center}
\end{figure}
Although the general shape of the UV extinction in the Galaxy is well represented
by the $R_V$ parameterization of CCM, significant deviations are seen,
both in the far UV and the bump (Cardelli \& Clayton 1991; Mathis \& Cardelli 1992;
Fitzpatrick 1998).
There are well known Galactic sightlines which deviate from CCM in much the
same way that the LMC 2 sample does. Three deviant Galactic stars are plotted
in Figures~\ref{fig_bs_rv} and \ref{fig_a1300_rv} for comparison.
HD 29647, 62542, and 210121 all show
weak bumps and strong far--UV extinction for their measured values of R$_V$
(3.62, 3.24 \& 2.1, respectively; Messinger et al. 1997; Whittet et al. 1993; Larson et al. 1996).
The bumps seen toward HD 29647 and HD 62542 are not only weak but
also very broad and shifted to the blue (Cardelli \& Savage 1988). The unusual
extinction curve characteristics along these lines of sight have been attributed
to their dust environments which are quite diverse.
The dust toward HD 62542 has been swept up by bubbles blown by
two nearby O stars and has been subject to shocks while
the HD 29647 sightline passes through a very dense, quiescent
environment (Cardelli \& Savage 1988). HD 210121 lies behind a single
cloud in the halo. There is no present activity near this cloud although
it was ejected into the halo at some time in the past (Welty \& Fowler 1992;
Larson et al. 1996).
These deviations from CCM in the Galaxy indicate that something
other than the size distribution of dust grains as measured by $R_V$
must be important in determining extinction properties along
a given line of sight.
Evidently, the same is true in the LMC.
Even though all the LMC sightlines have similar, low values of $R_V$, they
exhibit a variety of extinction curves.
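For reference, the $R_V$ parameterization of CCM discussed above expresses the mean extinction curve as a one-parameter family; this is the standard CCM form, quoted here for orientation rather than refit to the present data:

```latex
% CCM (1989): mean extinction as a one-parameter family in R_V,
% with fixed wavelength-dependent coefficients a(x) and b(x)
\begin{equation}
  \left\langle \frac{A(\lambda)}{A(V)} \right\rangle
    = a(x) + \frac{b(x)}{R_V},
  \qquad x \equiv 1/\lambda .
\end{equation}
```

Deviations from CCM at fixed $R_V$, such as those discussed in this section, are departures from this one-parameter family.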
\subsubsection{The Bump}
While the physical significance of the linear and far UV functions in the FM
parameterization is unclear,
the Drude profile fitting function for the 2175 \AA\ absorption bump
which is part of the FM parameterization
does have some physical
significance as the expression of the absorption cross section of a damped
harmonic oscillator (CCM; FM; Mathis \& Cardelli 1992). Further, neither
$x_0$ nor $\gamma$ depends on $R_V$, and so variations in these parameters are
directly tied to variations in the grains responsible for the bump feature.
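For orientation, the Drude fitting function referred to here has the standard FM form (quoted, not re-derived); the bump enters the FM parameterization scaled by an amplitude coefficient:

```latex
% Drude profile: absorption cross section of a damped harmonic
% oscillator with central position x_0 and width gamma
\begin{equation}
  D(x; x_0, \gamma) = \frac{x^2}{\left(x^2 - x_0^2\right)^2 + x^2\gamma^2},
  \qquad x \equiv 1/\lambda .
\end{equation}
```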
There is no evidence for a systematic shift in the central position of the
bump in either LMC sample. The weakness of the bump in the LMC~2 sample means
that $x_0$ and $\gamma$ are not strongly constrained in that sample. In the
LMC--general sample, there are no systematic redward shifts of the bump but three
sightlines are significantly shifted to the blue.
The range of variation in the LMC is consistent with
that seen in the Galactic sample. Several Galactic lines of sight, e.g.,
HD 62542 and HD 29647, have bumps that are significantly shifted to shorter
wavelengths ($x_0 = 4.74$ and 4.70, respectively; Cardelli \& Savage 1992).
Several possibilities have been suggested to account for this including
mantling of the grains and hydrogenation (Cardelli \& Savage 1992; Mathis 1994).
As in the Galaxy, there is real variation in the width of the bump between
various lines of sight in the LMC (Table 3).
Five lines of sight in our sample have bump widths which nominally
fall below the narrowest Galactic bump ($\gamma=0.8$; FM). Of these five sightlines,
two (SK $-$69 213 and SK $-$69 280) are affected by spectral
mismatches in the bump region.
The true bump widths for these two sightlines are not
well constrained by the FM fitting procedure. The remaining three narrow-bump sightlines
(SK $-$68 129, SK $-$69 206, SK $-$69 210),
all in the LMC--general sample, interestingly all fall in or near the dust lane
on the northwest edge of 30 Dor. These bumps
are well defined, and their narrowness is real.
An expanded view of the SK $-$69 210 profile can be seen in Figure~\ref{fig_drude210}.
There is a strong relationship between environment and
$\gamma$ in the Galaxy. The narrowest bumps are associated with bright nebulosity
while wide bumps are associated with dark, dense clouds (Cardelli \& Clayton 1991,
Mathis 1994).
Therefore, it has been suggested that mantles form on the bump grains in dark clouds
resulting in broad bumps. In bright nebulae, there are no mantles and narrower bumps
result from the bare grains.
In this scenario, mantles are able to form in dense clouds shielded
from the interstellar radiation field while the mantles on grains near H~II regions
are removed by the stronger radiation field.
However, the three small $\gamma$ lines of sight in the LMC appear to be associated
with a dense environment even though they are near the 30 Dor star forming region.
Several stars in the LMC--general sample (e.g., SK $-$66 19 and SK $-$66 88) are associated
with bright H~II regions and yet have normal Galactic bump widths.
Accepting the explanation
for the narrow bumps based on the Galactic data, we would expect to find
narrow bumps in the LMC~2 sample. Contrary to this expectation,
the data presented in Table~3 indicate
that the LMC~2 bump widths lie comfortably within the average Galactic range.
It does not appear that the trend in $\gamma$ with environment seen in the Galaxy
holds in the LMC.
There are no exceptionally wide bumps in our sample save
SK $-$70 116 with $\gamma=1.4$; however, the bump is extremely weak and $\gamma$
is not strongly constrained.
\begin{figure}
\begin{center}
\plotone{SK210-69_Drude.eps}
\caption{Drude profile of SK $-$69 210 (filled circles) binned to $\sim$30\AA\ resolution
compared to the average Galactic Drude profile. \label{fig_drude210}}
\end{center}
\end{figure}
The weak bumps in the LMC~2 region are not unique. As discussed above, several
Galactic lines of sight also have extinction curves with very weak bumps (HD 29647,
HD 62542, HD 210121). However, the LMC~2 environment seems to have little in
common with these Galactic lines of sight which in turn seem to have little in
common with each other. Though the swept up, shocked environment near HD 62542
may be similar to the LMC~2 environment (but on a vastly reduced scale), the other
two Galactic sightlines sample relatively quiescent environments.
HD 210121 lies behind a single diffuse, translucent cloud about 150~pc from the
Galactic plane. The interstellar radiation
field is weaker than in the general interstellar medium and shocks do not appear
to be important (Welty \& Fowler 1992). Larson et al. (1996) suggest that the apparent
preponderance of small grains along the HD 210121 line of sight is due to a lack of
grain growth through coagulation, the result of little time spent in a dense environment.
It appears that
very diverse environmental conditions result in rather similar bump profiles.
It is not known whether
the bump grains are being modified in a similar fashion
in different environments or whether substantially different modifications of the bump
grains can result in similar UV extinction in the bump.
\section{Conclusions}
Evidently the relationship between the UV extinction, dust grain properties, and
environment is a complicated one. Similar variations
in the form of the UV extinction can arise in a variety of environments.
The environmental dependences
seen in the Galaxy do not seem to hold in the LMC.
Since large variations in UV extinction are seen within both the LMC and the Galaxy,
global parameters
such as metallicity cannot be directly responsible for the observed variations from
galaxy to galaxy as has been suggested (e.g., Clayton \& Martin 1985).
However, one effect of decreased metallicity in the LMC
is that the typical molecular cloud is
larger but more diffuse than those in the Galaxy (Pak et al. 1998).
Hence, dust grains in the LMC may not spend as much time in dense, shielded environments
as grains in the Galaxy. The lack of time in dense environments may contribute to
the apparent small size of the LMC grains as indicated by the low values of $R_V$ measured
in this study.
In addition, the weak and narrow bump lines of sight in the
LMC all lie near the 30 Dor star forming region which has no analog in the Galaxy.
The dust along these sightlines
has probably been affected by the proximity to the harsh environment
of the copious star formation associated with 30 Dor.
However, it must be pointed out that the most extreme UV extinction
curves, having virtually no bumps and very steep far-UV extinction, are found in the SMC.
The SMC dust lies near regions of star formation, but these are very modest compared
to 30 Dor. These SMC sightlines have optical depths
similar to those in LMC~2 (GC).
Due to the very low metallicity of the SMC, its molecular clouds are very diffuse (Pak et al. 1998).
One might expect the values of $R_V$ in the SMC to be even smaller than in the LMC; the
current observations, however, show no evidence for this (GC).
Even with the improved and expanded samples of extinction in the LMC and SMC,
the link between particular environments and dust characteristics is still unclear.
The combination of the Galactic and Magellanic cloud data show that the extinction
curve/environment links are not as simple as previously proposed.
However, the different times spent by grains in dense molecular environments may be a
significant factor, as suggested for the Galactic star HD 210121 (Larson et al. 1996).
The processing history of dust grains (i.e., coagulation and mantling in dense cloud
environments, and exposure to strong shocks and radiation fields outside of clouds)
is probably quite different in these three galaxies owing to the different
molecular cloud environments and the varying intensity of star formation.
The interplay between at least these two factors likely plays an important role in determining
the form of the UV extinction. The fact that starburst galaxies appear to have SMC--type
dust regardless of metallicity (Calzetti et al. 1994;
Gordon et al. 1997) implies that the star formation
history of a galaxy is also important in determining the extinction properties.
However, the complicated relationship between extinction properties in the UV and environment
implied by the Galactic and Magellanic Cloud data suggests that great care must be taken in
assuming the form of the UV extinction in external galaxies.
\acknowledgments
This research has made use of the SIMBAD database. $IUE$ spectra were downloaded
from the $IUE$ final archive at ESA.
This work has been partially supported through NASA ATP grant NAG5~3531 to GCC.
We thank M. Oestreicher for providing the source code and data files used for
generating the foreground reddening map and M. Bessel for supplying the H$\alpha$ image.
\section{Introduction}
The mean lifetime of the $\tau$ lepton is deduced from
geometrical reconstruction of $\tau$ daughter tracks
in $e^+e^-\rightarrow\tau^+\tau^-$ events.
We need to know the $\tau$ lifetime in order to make
certain tests of lepton universality.
Such tests are sensitive to the $\tau\nu_{\tau}{\rm W}$ coupling
and also to possible new physics~\cite{univ};
at present the precision is limited by
the experimental uncertainties on
the $\tau$ lifetime and branching fractions.
The $\tau$ lifetime is also useful for
evaluating the strong coupling constant
$\alpha_{\rm S}$~\cite{alphas}.
In this talk I use the conventional coordinate system
for $e^+e^-$ experiments,
i.e., with the $z$ (polar) axis along the direction of
the incident beams.
The impact parameter $d$ of a reconstructed charged track
is measured in the $xy$ projection
with respect to the nominal interaction point;
$d$ is signed according to the $z$ component of the track's
angular momentum about this point.
The azimuthal decay angle $\psi$ of a $\tau$ daughter track is defined
as the signed quantity $\phi_{\rm daughter} - \phi_{\tau}$
in the laboratory frame,
where $\psi$ lies between ${-}\pi$ and $\pi$.
At present, useful measurements of the $\tau$ lifetime can be obtained
by SLD, the four LEP experiments, and CLEO,
and the experimental conditions are quite different in these places.
As far as the lifetime measurement is concerned, the relevant
differences include the $\tau$ flight distance in the laboratory
(which varies as $\beta\gamma$ of the $\tau$ boost),
the opening angles of the $\tau$ decay products
(which vary roughly as $1/\gamma$),
the dimensions of the luminous region,
and the number of collected $\tau$ pairs.
On the other hand,
the impact parameters of the $\tau$ daughters are on the order
of $c\tau_{\tau} = 87\,\mu{\mathrm{m}}$ in all experiments,
as long as the $\tau$ has $\beta \sim 1$.
Comparisons of the experimental conditions are given in
Table~\ref{t:cond} and Fig.~\ref{f:cond}.
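The kinematic entries in Table~\ref{t:cond} follow from just two numbers quoted in the text, $c\tau_{\tau}$ and the $\tau$ energy; the sketch below (which assumes $m_{\tau} \approx 1.777$~GeV, a value not given in the table) reproduces the LEP/SLD and CLEO rows:

```python
# Cross-check of Table 1, assuming only m_tau ~ 1.777 GeV and
# c*tau_tau ~ 87 um (the latter quoted in the text).
M_TAU = 1.777   # GeV (assumed value, not quoted in the table)
C_TAU = 87.0    # um, mean proper decay length c*tau_tau

def flight(e_tau):
    """Return (gamma, mean lab flight distance in um) for tau energy e_tau [GeV]."""
    gamma = e_tau / M_TAU
    beta_gamma = (gamma**2 - 1.0) ** 0.5
    return gamma, beta_gamma * C_TAU

gamma_lep, l_lep = flight(45.6)    # SLD/LEP row: ~25.7, ~2200 um
gamma_cleo, l_cleo = flight(5.3)   # CLEO row:   ~3.0,  ~240 um
```

The ratio of flight distances (roughly a factor of nine) is the main reason the decay-length method is so much more powerful at the $Z$ than at CLEO energies.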
\section{Methods}
In this section I give a brief description of the $\tau$ lifetime
measurement methods that are currently in use.
\subsection{Decay length method}
The decay length (DL) method (or vertex method)
is applied to three-prong $\tau$ decays~\cite{markii}.
The procedure consists of reconstructing the $\tau$ decay vertex
in two or three dimensions.
In order to evaluate the $\tau$ decay length, an estimate of the
$\tau$ production point is also needed.
In an $e^+e^-\rightarrow\tau^+\tau^-$ event, the only available estimate is
the centroid of the luminous region.
Because the luminous region is huge
along the direction of the beam axis
(considerably larger than the typical $\tau$ flight distance),
we can effectively estimate only two of the three coordinates
of the $\tau$ production point,
and we end up with a measurement of the $\tau$ displacement $L_{xy}$
in the $xy$ projection.
An estimate of the polar angle $\theta_{\tau}$ of the $\tau$ direction,
taken for example from the event thrust axis,
is therefore needed in order to calculate the
$\tau$ flight distance $L$ in three dimensions.
The mean lifetime is deduced from the mean of the
$L$ distribution
and the mean value of $\beta\gamma$;
the latter is calculated from simulated $\tau^+\tau^-$ events,
including initial and final state radiation,
after the event selection criteria are applied.
\begin{table}[t]
\caption[]{Experimental conditions for $\tau$ lifetime measurements.
Here, $N_{\tau\tau}$ is the approximate number of produced
$e^+e^-\rightarrow\tau^+\tau^-$ events in the data sample,
$E_{\tau}$ is the $\tau$ energy in the laboratory
(neglecting radiative effects),
$\gamma$ refers to the boost of the $\tau$,
and $\beta\gamma c\tau_{\tau}$ is the mean $\tau$ flight distance
in the lab.}
\label{t:cond}
\begin{tabular}{@{}lrrr}\hline
& \makebox[13mm]{\hfill SLD}
& \makebox[13mm]{\hfill LEP}
& \makebox[13mm]{\hfill CLEO} \\ \hline
$N_{\tau\tau}$ & 20K & $4\times200$K & 5M\rlap{${}^{\rm a}$} \\
$E_{\tau}$ (GeV) & 45.6 & 45.6 & 5.3 \\
$\gamma$ & 25.7 & 25.7 & 3.0 \\
$\beta\gamma c\tau_{\tau}$ ($\mu$m) & 2200 & 2200 & 240 \\
\hline
\multicolumn{4}{@{}l}{\small ${}^{\rm a}$%
\parbox[t]{68mm}{\small Collected with the present
(CLEO 2.5) detector configuration.}} \\
\end{tabular}
\end{table}
\begin{figure}[t]
\begin{center}
\mbox{\epsfysize=94mm\epsffile{3prong.eps}}%
\end{center}\par\vspace{-10mm}
\caption{Experimental conditions at SLD, LEP, and CLEO.
All drawings show the same three-prong $\tau$ decay,
boosted to the appropriate energy for each experiment and
projected onto the $xy$ plane ($20{\times}$ actual size).
The ellipses representing the luminous region include the
typical uncertainty on the beam axis coordinates.}
\label{f:cond}
\end{figure}
Modern vertex detectors can measure precise impact parameters in
both the $r\phi$ and $rz$ views.
Because the luminous region is so large along $z$,
the possibility of measuring the $z$ coordinate of the $\tau$ decay
vertex is not particularly useful for the classical DL method.
However, it is important to realize that the tracking information
from the $rz$ view can also significantly improve the measurement of
the $(x,y)$ coordinates of the decay vertex,
from which the lifetime is extracted.
This point is illustrated in Fig.~\ref{f:rzview}.
\begin{figure}[t]
\begin{center}
\mbox{\epsfxsize=42mm\epsffile{rzview.eps}}%
\end{center}\par\vspace{-10mm}
\caption{A $\tau$ decay in which the three charged daughter tracks
emerge in the same azimuthal direction $\phi$.
The vertex cannot be reconstructed from the track measurements
in the $r\phi$ view alone,
but a full three-dimensional vertex reconstruction is possible when
the measurements in the $rz$ view are added.}
\label{f:rzview}
\end{figure}
At LEP and SLD, the size of the luminous region and the tracking
resolution are such that the statistical uncertainty on the mean
decay length is dominated by the natural width of the exponential $t$
distribution;
the relative uncertainty on the mean lifetime is not far from
its optimum value, $\Delta\tau_{\tau}/\tau_{\tau} = 100\% / \sqrt{N_{\tau}}$,
where $N_{\tau}$ is the number of selected $\tau$ decays.
The systematic errors in DL analyses at high energies
also tend to be fairly small.
In short, our detectors and our technique
for measuring the $\tau$ lifetime
from three-prong decays at LEP and SLD
are very effective.
At lower center-of-mass energies the $\tau$ decay length is shorter
and the size of the luminous region can significantly dilute the
precision of the measurement.
In such cases the decay length resolution can be improved by
considering the separation of the two decay vertices in
\mbox{3-3}\ topology events.
This procedure eliminates the smearing due to the size
of the luminous region,
but it does not dramatically improve the final results because
the statistics are low and the $\textsl{q}\bar{\textsl{q}}$ background is larger
in the \mbox{3-3}\ channel.
\subsection{One-prong decays}
While the DL method is entirely satisfactory for analyzing three-prong
$\tau$ decays,
we cannot apply such a straightforward technique to the one-prong decays.
Due to the unobserved neutrinos, the $\tau$ direction is unknown.
We cannot reconstruct the decay length of an individual $\tau$ decaying
into one prong.
We can nevertheless measure the mean decay length from a collection
of one-prong decays.
The relative statistical uncertainty will be considerably larger than
$100\%/\sqrt{N_{\tau}}$.
We will do best if we analyze both $\tau$ decays in an event together.
It is difficult to incorporate all of the event information into
a simple method.
We now use several methods to analyze one-prong decays.
Each method uses a different subset of the available information,
and none of the methods is vastly superior to the others.
We combine the results from the various methods,
taking into account the statistical and systematic correlations,
to utilize as much of the available information as possible.
I now proceed to describe these methods in more detail.
\subsection{Impact parameter method}
The impact parameter (IP) method
is applied to one-prong $\tau$ decays~\cite{mac}.
In this method, an estimate of the $\tau$ direction is taken,
for example, from the event thrust axis.
The lifetime-signed impact parameter of the daughter track is
then defined:
\[ D = \left\{ \begin{array}{ll}
d & \ \ \mbox{if $\psi > 0$;} \\
-d & \ \ \mbox{if $\psi < 0$.}
\end{array} \right. \]
The mean of the $D$ distribution is then roughly proportional
to $\tau_{\tau}$.
The dependence of the $D$ distribution on $\tau_{\tau}$ is determined
from Monte Carlo simulation.
Decays with $D<0$ result from
the impact parameter resolution,
the size of the luminous region,
and errors on the $\tau$ direction.
The last of these effects is probably the most dangerous because
it brings about a substantial change in the mean of $D$,
and we rely on the Monte Carlo to correctly describe the
$\tau$ direction errors.
Although we tend to think of the IP method as a one-$\tau$-at-a-time
method,
the $\tau$ in the opposite hemisphere contributes to the
thrust axis determination
and hence affects the $D$ distribution.
\subsection{Impact parameter difference method}
\label{ss:ipd}
The impact parameter difference (IPD) method
is applied to \mbox{1-1}\ topology events~\cite{alephipd}.
In this method the mean $\tau$ decay length is extracted
by considering the correlation between the difference of the
daughter track impact parameters $d$ and the
difference of their azimuthal angles $\phi$.
Specifically, we define
$Y = d_+ - d_-$ and
$X = \Delta\phi \sin\theta_{\tau}$,
where $\Delta\phi = \phi_{+} - \phi_{-} \pm \pi$
is the acoplanarity of the two daughter tracks.
If the $\tau^{+}$ and $\tau^{-}$ are back to back
in the $xy$ projection
and the decay angles $\psi$ are small,
such that $\sin\psi \cong \psi$,
we find, at a particular value of $X$, that
$\langle Y \rangle = \bar{L}X$,
where $\bar{L}$ is the mean $\tau$ decay length in the lab.
A fit to the $Y$~vs $X$ distribution is performed to
extract the slope $\bar{L}$.
The polar angle $\theta_{\tau}$ is taken from the
event thrust axis;
the resulting error on the $\tau$ direction has a negligible
effect on the fitted $\bar{L}$.
The main disadvantage of the IPD method is that
the uncertainty on the $\tau^+\tau^-$ production point due to the
size of the luminous region enters twice in the smearing on $Y$.
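A toy version of this slope fit can be sketched as follows; all numbers (the true mean decay length, the decay-angle spread, and the impact parameter resolution) are invented for illustration, and $\sin\theta_{\tau}=1$ is assumed:

```python
# Toy IPD fit: generate back-to-back 1-1 events in the small-angle
# approximation, then recover L_bar as the slope of <Y> vs X.
import random

random.seed(1)
L_BAR = 2200.0   # um, hypothetical true mean decay length in the lab
N = 200_000

sxy = sxx = 0.0
for _ in range(N):
    psi_p = random.gauss(0.0, 0.05)        # azimuthal decay angles (rad)
    psi_m = random.gauss(0.0, 0.05)
    l_p = random.expovariate(1.0 / L_BAR)  # individual decay lengths
    l_m = random.expovariate(1.0 / L_BAR)
    d_p = l_p * psi_p + random.gauss(0.0, 30.0)  # d ~ L*sin(psi) + resolution
    d_m = l_m * psi_m + random.gauss(0.0, 30.0)
    x = psi_p - psi_m                      # acoplanarity, sin(theta_tau)=1
    y = d_p - d_m
    sxy += x * y
    sxx += x * x

l_fit = sxy / sxx   # least-squares slope through the origin ~ L_BAR
```

Note that the symmetric resolution smearing on $d$ does not bias the slope; a common production-point offset, by contrast, enters $Y$ directly, which is the weakness noted above.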
\subsection{Impact parameter sum methods}
The miss distance (MD)~\cite{delphimd} and
momentum-dependent impact parameter sum (MIPS)~\cite{alephmips}
methods
are designed to give improved statistical precision
by virtually eliminating the smearing effects related
to the size of the luminous region.
In a \mbox{1-1}\ topology event we define the ``miss distance''
$\Delta = d_{+} + d_{-}$.
This sum of impact parameters is, roughly speaking,
the distance in the $xy$ projection between the two daughter tracks
at their closest approach to the beam axis.
This quantity is almost independent of the $\tau^+\tau^-$ production point.
The $\Delta$ distribution depends on $\tau_{\tau}$;
a Monte Carlo simulation is used to parametrize the true distribution,
in order to extract the lifetime from the data.
The main disadvantage of these methods is that
the results of the fit to the data are sensitive to the
assumed impact parameter resolution.
I refer to the simplest form of this analysis as the MD method.
The MIPS method is a refinement of MD in which the $\Delta$ distribution
is parametrized in terms of the momenta in the lab of the
two $\tau$ daughter tracks.
DELPHI's new results announced at this workshop feature the MIPS
refinement and one other: the $\Delta$ distribution is parametrized
separately for lepton and hadron daughters;
I refer to this method as MD++.
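The geometric point, that a common displacement of the assumed production point largely cancels in the impact-parameter sum but enters twice in the difference, can be sketched as follows (the coordinates, angle, and the 150 $\mu$m shift are all illustrative, and the sign convention is a simplified stand-in for the one defined in the Introduction):

```python
# Two back-to-back straight tracks in the xy plane: shifting the
# reference (production) point leaves d_+ + d_- essentially unchanged
# but moves d_+ - d_- by roughly twice the shift.
import math

def signed_d(vertex, direction, ref):
    """Signed xy impact parameter of a straight track w.r.t. ref."""
    rx, ry = vertex[0] - ref[0], vertex[1] - ref[1]
    return rx * direction[1] - ry * direction[0]

psi = 0.04                                  # small decay opening angle (rad)
v_p, u_p = (2200.0, 0.0), (math.cos(psi), math.sin(psi))
v_m, u_m = (-2200.0, 0.0), (-math.cos(psi), -math.sin(psi))

sums, diffs = [], []
for ref in [(0.0, 0.0), (0.0, 150.0)]:      # nominal vs shifted beam spot
    d_p = signed_d(v_p, u_p, ref)
    d_m = signed_d(v_m, u_m, ref)
    sums.append(d_p + d_m)                  # miss distance: stable
    diffs.append(d_p - d_m)                 # IPD variable: shifts by ~2*150 um
```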
\begin{figure}[b]
\begin{center}
\mbox{\epsfxsize=30mm\epsffile{3dip.eps}}
\end{center}\par\vspace{-10mm}
\caption{In the 3DIP method,
the event is projected along the direction
given by $\hat{\tau}_2 - \hat{\tau}_1$,
where $\hat{\tau}_{1,2}$ are the
two possible $\tau^{-}$ momentum directions.}
\label{f:threedip}
\end{figure}
\subsection{Three-dimensional impact parameter method}
The three-dimensional impact parameter\linebreak
(3DIP) method
makes use of more of the kinematic information in the events
to yield a higher sensitivity per event than the other
one-prong methods~\cite{alephthreedip}.
The main disadvantage is that the method can only be applied to
{\sl hadron\/} vs {\sl hadron\/} events
(42\% of all $\tau^+\tau^-$ events).
Because a $\tau\rightarrow{\sl hadron}$ decay yields only one
unobserved neutrino,
it is possible to reconstruct the $\tau$ direction
in {\sl hadron\/} vs {\sl hadron\/} events
up to a twofold ambiguity.
Let $\hat{\tau}_1$ and $\hat{\tau}_2$ denote the two
possible $\tau^{-}$ directions
reconstructed for a particular event (Fig.~\ref{f:threedip}).
If we then project the event along a direction
chosen such that $\hat{\tau}_1$ and $\hat{\tau}_2$
coincide,
we end up with no uncertainty on the $\tau$ direction
in that projection.
We then define a generalized impact parameter sum in that projection,
so that there is almost no smearing due to the size of the luminous
region.
A fitting procedure operates on this impact parameter sum and
on the two projected $\tau$ decay angles in order to
extract the mean lifetime.
The 3DIP method is the first to use impact parameter information
from the $rz$ view in the analysis of one-prong decays.
The method has the extremely important advantage that the
tracking resolution and the $\tau$ lifetime can be extracted
simultaneously from the $\tau^+\tau^-$ events.
\section{New lifetime results since TAU96}
There are four new developments to report:
{\mylists
\begin{itemize}
\item
At TAU98, L3 is reporting preliminary results from their
1994--95 data, analyzed with the DL and IP methods~\cite{ltrois}.
The new L3 average (1991--95 data) is
$\tau_{\tau} =
(291.7 \pm 2.0 \,[{\rm stat}] \pm 1.8 \,[{\rm syst}])\,{\mathrm{fs}}$.
\item
At TAU98, DELPHI is reporting preliminary results from their
1994--95 data, analyzed with the DL, IPD, and
MD++ methods~\cite{delphi}.
The new DELPHI average (1991--95 data) is
$\tau_{\tau} =
(291.9 \pm 1.6 \,[{\rm stat}] \pm 1.1 \,[{\rm syst}])\,{\mathrm{fs}}$.
\item
The thesis of Patrick Saull (ARGUS)~\cite{saull} describes the
vertex impact parameter (VIP) method,
which provides improved lifetime sensitivity
for \mbox{1-3}\ topology events
in cases where the size of the luminous region limits the
precision of the DL method.
The VIP method uses the impact parameter of the one-prong
track with respect to the three-prong vertex,
and the acoplanarity of the one- and three-prong jets.
(A similar approach is described in~\cite{opalvip}.)
\item
In 1997, ALEPH published results from
the 3DIP method (1992--94 data)~\cite{alephthreedip}
and from the MIPS, IPD, and DL methods (1994 data)~\cite{aleph},
preliminary versions of which had been shown at TAU96.
\end{itemize}}
\section{Summary of measurements}
In calculating the world average $\tau$ lifetime,
I follow the Particle Data Group~\cite{pdg} and
ignore early measurements with large uncertainties.
The measurements are listed in Table~\ref{t:summary}
and plotted in Fig.~\ref{f:summary}.
In most cases the results shown are themselves averages
of two or more measurements obtained by a given
experiment with different methods and/or data samples.
\begin{table}[t]
\setlength{\tabcolsep}{1.5pc}
\caption[]{Measurements of the $\tau$ lifetime.}
\label{t:summary}
\begin{tabular}{@{}lr@{$\,\pm\,$}c@{$\,\pm\,$}l}
\hline
Experiment &
\multicolumn{3}{c}{$\tau_{\tau}\pm\mbox{stat}\pm\mbox{syst}$ (fs)} \\ \hline
ALEPH~\cite{alephthreedip,aleph} & 290.1 & 1.5 & 1.1 \\
DELPHI~\cite{delphi}${}^{*}$ & 291.9 & 1.6 & 1.1 \\
L3~\cite{ltrois}${}^{*}$ & 291.7 & 2.0 & 1.8 \\
OPAL~\cite{opal} & 289.2 & 1.7 & 1.2 \\
CLEO~II~\cite{cleo} & 289.0 & 2.8 & 4.0 \\
SLD~\cite{sld}${}^{*}$ & 288.1 & 6.1 & 3.3 \\ \hline
\multicolumn{4}{@{}l}{${}^{*}$Preliminary} \\
\end{tabular}
\end{table}
The world average is
$\tau_{\tau} = 290.5 \pm 1.0 \,{\mathrm{fs}}$,
where the systematic errors in the various experiments are
assumed to be uncorrelated.
The $\chi^2$ describing the consistency of the measurements is
1.36 for 5 degrees of freedom,
corresponding to a confidence level of 0.929.
The four LEP experiments contribute 94\% of the total weight
in the average.
Since the beginning of the LEP era, the uncertainty on the world
average has been reduced by a factor of 8.
By now, almost all of the LEP1 data has been analyzed.
(The LEP2 data is not expected to yield
any useful $\tau$ lifetime results.)
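The quoted average, its uncertainty, the $\chi^2$, and the LEP weight fraction all follow from the numbers in Table~\ref{t:summary}, combining each experiment's statistical and systematic errors in quadrature and treating the experiments as uncorrelated:

```python
# Weighted average of the tau lifetime measurements in Table 2.
data = {                      # tau_tau (fs): (value, stat, syst)
    "ALEPH":  (290.1, 1.5, 1.1),
    "DELPHI": (291.9, 1.6, 1.1),
    "L3":     (291.7, 2.0, 1.8),
    "OPAL":   (289.2, 1.7, 1.2),
    "CLEO":   (289.0, 2.8, 4.0),
    "SLD":    (288.1, 6.1, 3.3),
}
weights = {k: 1.0 / (s**2 + y**2) for k, (v, s, y) in data.items()}
w_sum = sum(weights.values())
mean = sum(weights[k] * data[k][0] for k in data) / w_sum    # ~290.5 fs
sigma = w_sum ** -0.5                                        # ~1.0 fs
chi2 = sum(weights[k] * (data[k][0] - mean) ** 2 for k in data)  # ~1.36
lep_frac = sum(weights[k] for k in ("ALEPH", "DELPHI", "L3", "OPAL")) / w_sum
```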
\section{Systematic errors}
Most recent measurements of the $\tau$ lifetime are statistics limited.
Nevertheless, it is useful to examine the systematic effects that
we will need to deal with in the coming years in order to further
improve the precision of the measurements.
Some of the important sources of systematic errors are
tracking errors (simulation and/or parametrization);
vertex fit and lifetime extraction procedures;
and detector alignment.
I will now discuss each of these topics in turn.
\subsection{Tracking errors}
I would like to mention two delicate issues related to
tracking errors.
\begin{figure}[b]
\begin{center}
\mbox{\epsfxsize=60mm\epsffile{summary.eps}}%
\end{center}\par\vspace{-10mm}
\caption{Measurements of the $\tau$ lifetime.}
\label{f:summary}
\end{figure}
The first issue concerns the dependence, in some methods,
of the measured lifetime on the assumed impact parameter resolution.
In such cases, it is mandatory to measure the resolution
from reconstructed tracks in the real data.
Bhabha and dimuon events are readily available for this job,
but the high momentum electrons and muons in those samples
are not representative of the $\tau$ daughter tracks and
the contribution to the impact parameter resolution
from multiple scattering cannot be studied.
Some experiments employ $\gamma\gamma\rightarrow e^+e^-$ and $\mu^+\mu^-$ events
to parametrize the resolution at the low end of
the momentum range.
While these test samples can give a fairly precise description
of the impact parameter resolution for electrons and muons,
it is not easy to use the real data
to parametrize the effects of nuclear interactions
on pion and kaon tracks.
The second issue concerns the correlation between the errors
on the reconstructed impact parameter and direction of a track,
e.g., between $d$ and $\phi$.
This correlation is positive and results from the extrapolation
of the reconstructed tracks
from the measured points in the tracking detectors
to the interaction region.
In some methods the correlation can mimic a longer $\tau$
lifetime~\cite{srw}.
The effect is especially bad for the $\tau$ (compared to other
particles) because
(1) the short $\tau$ lifetime leads to small impact parameters,
(2) the small $\tau$ mass leads to small decay opening angles
(which get smaller at higher $\sqrt{s}$), and
(3) we fit to the entire proper time spectrum
(as opposed to the situation in charm lifetime measurements
in fixed target experiments,
where a cut $L > L_{\rm min}$ is imposed and the
mean lifetime is determined from the {\it slope\/} of the proper time
distribution).
The effects of the tracking errors on the measured $\tau_{\tau}$
must be carefully taken into account.
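A minimal sketch of generating such correlated $(\delta d, \delta\phi)$ errors, using the $\sigma_d$, $\sigma_{\phi}$, and correlation values quoted later for the toy alignment study; the two-variable Cholesky construction here is a standard technique, not taken from any experiment's code:

```python
# Correlated tracking errors via a 2x2 Cholesky factorization:
# sigma_d = 30 um, sigma_phi = 0.2 mrad, <dd*dphi> = 0.9*sigma_d*sigma_phi.
import random

random.seed(7)
SIGMA_D, SIGMA_PHI, RHO = 30.0, 0.2, 0.9   # um, mrad, dimensionless

def correlated_error():
    """One (delta_d, delta_phi) pair with the required covariance."""
    g1, g2 = random.gauss(0, 1), random.gauss(0, 1)
    dd = SIGMA_D * g1
    dphi = SIGMA_PHI * (RHO * g1 + (1 - RHO**2) ** 0.5 * g2)
    return dd, dphi

pairs = [correlated_error() for _ in range(100_000)]
md = sum(p[0] for p in pairs) / len(pairs)
mp = sum(p[1] for p in pairs) / len(pairs)
cov = sum((p[0] - md) * (p[1] - mp) for p in pairs) / len(pairs)
rho_sample = cov / (SIGMA_D * SIGMA_PHI)   # recovers ~0.9
```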
\subsection{Vertex fit and lifetime extraction\\ procedures}
Although the direct reconstruction of $\tau$ decay lengths
in the DL method appears to be quite straightforward,
several subtle effects are present,
yielding biases on the measured lifetime.
These effects are related to the tracking resolution,
and they can be substantial in experiments where the
vertex resolution along the $\tau$ direction is larger
than the mean decay length
(not the case at SLD and LEP).
{\mylists
\begin{itemize}
\item Due to the correlation between
the track impact parameter and direction errors,
fluctuations to larger opening angles in a three-prong decay
tend to be associated with upward fluctuations
in the reconstructed decay length.
The larger opening angles also lead to
a smaller calculated uncertainty on $L$.
Thus the upward fluctuations in $L$ are associated with larger weights
in the calculation of the mean decay length.
\item Radiative events have lower $\tau$ momenta,
which tend to result in smaller decay lengths.
But the lower momenta also
tend to yield larger opening angles of the daughter tracks
and therefore smaller uncertainties on $L$.
Thus smaller decay lengths are associated with larger weights.
\item In some vertex fitting programs,
the covariance matrices describing the errors on the reconstructed
track parameters are ``swum'' to the location of the fitted vertex
and a second fitting iteration is performed.
This appears to be a reasonable thing to do,
but such a fitting program assigns larger weights, on average,
to the $\tau$'s with larger decay lengths, leading to a bias
on the average decay length.
\end{itemize}}
\subsection{Detector alignment}
Tracking systems are calibrated and surveyed based on tracks
reconstructed in the data.
This procedure is not perfect;
after alignment, the average impact parameter $\langle d \rangle$
(which would ideally be zero)
can vary with $\theta$ and $\phi$ by $10\,\mu{\mathrm{m}}$ or more.
The $\tau$ lifetime measurement is, however,
based on impact parameters on the order of $c\tau_{\tau} = 87\,\mu{\mathrm{m}}$.
How can the experiments claim systematic uncertainties related
to detector alignment as small as $0.1\%$?
A conjecture, put forth by ALEPH~\cite{alephalign},
provides some insight.
They theorize that the effects of $d$ offsets cancel, to first order,
if there are no azimuthal holes in the acceptance of the tracking
system.
To illustrate this point, I present the preliminary results
of a simple Monte Carlo study.
Simulated $e^+e^-\rightarrow\tau^+\tau^-$ events at
$\sqrt{s} = 91.2 \,{\mathrm{GeV}}$ were generated, and
a sample of $500\,000$ three-prong decays was selected with
reasonable cuts on the momentum and polar angle of the
daughter tracks.
Rather sterile conditions were maintained for the experiment:
Gaussian tracking errors were generated, with
$\sigma_d = 30\,\mu{\mathrm{m}}$,
$\sigma_{\phi} = 0.2\,{\mathrm{mrad}}$,
and $\langle \delta d \cdot \delta\phi \rangle =
0.9 \sigma_d \sigma_{\phi}$.
A two-dimensional vertex fit was performed for each decay, and
no errors on the $\tau$ production point or the $\tau$ direction
were introduced in the calculation of the decay length.
Impact parameter offsets were then applied as a function of $\phi$,
and the fits were repeated,
to study the effect on the decay length bias
$\langle L_{\rm rec} - L_{\rm true} \rangle$.
I tried four different $d$ offset configurations, as described below.
\begin{figure}[t]
\begin{center}\mbox{%
\rlap{\epsfxsize=67.5mm\epsffile{teenal1.ps}}%
\rlap{\epsfxsize=67.5mm\epsffile{teenal_labels.ps}}%
\rule{75mm}{0mm}}\end{center}\par\vspace{-10mm}
\caption{Monte Carlo study of impact parameter offsets, Experiment~B
(uniform offset plus one excursion).
The ``Input'' plot shows the offsets
applied to the impact parameters $d$
as a function of $\phi$.
The ``Output'' plot shows the resulting relative bias on the
reconstructed decay length,
$B = \langle L_{\rm rec} - L_{\rm true} \rangle /
\langle L_{\rm true} \rangle$
as a function of $\phi$.}
\label{f:offsetb}
\end{figure}
\begin{figure}[t]
\begin{center}\mbox{%
\rlap{\epsfxsize=67.5mm\epsffile{teenal6.ps}}%
\rlap{\epsfxsize=67.5mm\epsffile{teenal_labels.ps}}%
\rule{75mm}{0mm}}\end{center}\par\vspace{-10mm}
\caption{Monte Carlo study of impact parameter offsets, Experiment~C
(radial shift of silicon vertex detector wafers).}
\label{f:offsetc}
\end{figure}
{\mylists
\begin{itemize}
\item[A.] No $d$ offsets.
When no systematic offsets are applied to the impact parameters,
the average bias is
$B = \langle L_{\rm rec} - L_{\rm true} \rangle /
\langle L_{\rm true} \rangle
= (+0.16 \pm 0.04)\%$,
reflecting the small positive bias due to the correlation
of the $d$ and $\phi$ errors.
The rule of thumb is that the relative decay length bias
is roughly equal to the ratio
of the detector-induced correlation of $d$ and $\psi$ to the
lifetime-induced correlation.
In this case the detector-induced correlation is
$\langle \delta d \cdot \delta\phi \rangle = 0.0054\,\mu{\mathrm{m}}$,
while the lifetime-induced correlation is roughly
$\langle d \cdot \psi \rangle = (c\tau_{\tau})(1/\gamma) = 3.4\,\mu{\mathrm{m}}$,
so the ratio is $0.0054/3.4 = 0.0016$,
which is comparable (!) to the observed offset.
\begin{figure}[t]
\begin{center}\mbox{%
\rlap{\epsfxsize=67.5mm\epsffile{teenal9.ps}}%
\rlap{\epsfxsize=67.5mm\epsffile{teenal_labels.ps}}%
\rule{75mm}{0mm}}\end{center}\par\vspace{-10mm}
\caption{Monte Carlo study of impact parameter offsets, Experiment~D
(broken azimuthal acceptance).}
\label{f:offsetd}
\end{figure}
\item[B.] Uniform offset plus one excursion.
The ``Input'' plot in
Fig.~\ref{f:offsetb} shows the applied
$d$ offset of $+20\,\mu{\mathrm{m}}$ everywhere,
plus a triangular excursion of amplitude $-60\,\mu{\mathrm{m}}$ in one region.
The ``Output'' plot shows that the local bias $B$
is essentially the derivative of the
input function with respect to $\phi$,
with the sharp edges smoothed out over an angular scale
corresponding to the typical opening angle of the $\tau$ decays.
In particular, the slope of the offset function in the region
of the excursion is
${\pm}(60 \,\mu{\mathrm{m}})/(30^{\circ}) = {\pm}115\,\mu{\mathrm{m}}$
per radian (after converting the degrees to radians).
This quantity is equal to ${\pm}6.1\%$ of the mean decay length in the
$xy$ projection,
whereas the maximum observed bias in the output plot
is about ${\pm}6\%$.
In spite of the large local biases, the global average bias is
$B = (+0.17 \pm 0.04)\%$,
i.e., unchanged from Experiment~A.
If the acceptance is unbroken in $\phi$, and the bias is
the derivative of the input function,
then the average bias is proportional to the integral of the derivative,
which is zero for {\it any\/} input offset function.
\item[C.] Radial shift of silicon vertex detector faces.
Here I consider a one-layer vertex detector
with nine flat faces at a radius of $6\,{\mathrm{cm}}$.
I suppose that the silicon wafers are shifted away from the origin
by $100\,\mu{\mathrm{m}}$ with respect to their assumed locations.
I then make the crude approximation that this shift has no effect
on the reconstructed track directions and simply introduces an
offset on $d$ given by $(-100\,\mu{\mathrm{m}})\sin\alpha$,
where $\alpha$ is the azimuthal angle of incidence of the
track on the wafer.
This scenario corresponds to the input function shown in
Fig.~\ref{f:offsetc}.
Again the output plot looks like the derivative of the input:
the local bias has positive $\delta$ functions
(again smeared due to the opening angles of the $\tau$ decays)
in the regions where the daughter tracks do not all pass through
the same vertex detector face
and a uniform negative value elsewhere.
The bias is locally as large as $+28\%$.
Nevertheless, the global average bias is unchanged:
$B = (+0.14 \pm 0.04)\%$.
\item[D.] Broken azimuthal acceptance.
This experiment is the same as Experiment~C,
except that $2^{\circ}$ azimuthal gaps are
introduced between adjacent faces of the vertex detector.
I reject $\tau$ decays in which one or more of the daughter tracks
passes through a gap.
Naturally, this has no effect on the output plot (Fig.~\ref{f:offsetd})
except near the positive spikes;
some of the events that had a large positive bias in Experiment~C
are now removed from the sample.
The resulting negative bias on the global mean decay length is huge:
$B = (-3.73 \pm 0.04)\%$.
If the gaps had been wider,
such that the three daughter tracks in selected events
always pass through the same face of the vertex detector,
the bias on the mean $xy$ decay length would have been simply
$-100\,\mu{\mathrm{m}}$ (the radial shift of the faces)
or $-5.3\%$.
\end{itemize}}
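The mechanism at work in Experiments A--D can be reproduced with a minimal two-track toy (my own illustrative sketch, not the simulation described above; the opening angle, face geometry, and decay length scale are all assumed). Solving the two-line vertex fit in the $\tau$ frame shows that the shift of the fitted vertex along the flight direction is the finite difference of the offset function over the opening angle, i.e., its smoothed derivative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy geometry (all numbers assumed for illustration)
N      = 200_000            # simulated decays
L0     = 1.9e3              # mean xy decay length, um
ALPHA  = np.deg2rad(3.0)    # half opening angle of the two daughter tracks
S      = 100.0              # radial shift of the detector faces, um
NFACE  = 9                  # flat faces -> 40 degree periodicity
PERIOD = 2*np.pi/NFACE

def d_offset(phi):
    """Experiment C offset: delta_d = -S*sin(alpha_inc), with alpha_inc
    the local azimuth of incidence on the nearest flat face."""
    return -S*np.sin((phi % PERIOD) - PERIOD/2)

phi_tau = rng.uniform(0.0, 2*np.pi, N)   # tau azimuth; tracks at phi +- ALPHA

# For two tracks at phi +- ALPHA, the two-line vertex fit shifts the vertex
# along the flight direction by the finite-difference "derivative" of the
# offset function, smoothed over the opening angle.
bias = (d_offset(phi_tau + ALPHA) - d_offset(phi_tau - ALPHA))/(2*np.sin(ALPHA))

B_unbroken = bias.mean()/L0              # Experiment C: full acceptance

# Experiment D: reject decays with a track within 1 degree of a face edge
def in_gap(phi, half=np.deg2rad(1.0)):
    edge_dist = np.minimum(phi % PERIOD, PERIOD - (phi % PERIOD))
    return edge_dist < half

keep = ~(in_gap(phi_tau + ALPHA) | in_gap(phi_tau - ALPHA))
B_gapped = bias[keep].mean()/L0

print(f"unbroken acceptance: B = {100*B_unbroken:+.3f}%")
print(f"with azimuthal gaps: B = {100*B_gapped:+.3f}%")
```

With the full acceptance the mean bias averages to zero, while rejecting tracks near the face edges removes part of the positive spikes and leaves a negative global bias, qualitatively as in Experiments C and D.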
These experiments show that $d$ offsets
due to alignment and calibration errors
in tracking systems have little effect on the measured lifetime
in the DL method,
as long as the azimuthal acceptance is unbroken.
In fact it is straightforward to prove that the same result
holds for the IPD method.
But these conclusions rely on the assumption that the
weighting of the events in the lifetime averaging procedure
is independent of $\phi$.
In a realistic situation where the smearing related to the
luminous region is significant (not at SLD!)
and depends on $\phi$,
the (reweighted) integral of the derivative of the offset function
is, in general, not equal to zero.
It should be mentioned that correlated offsets in $d$ and $\phi$
can yield a bias even when the azimuthal acceptance is unbroken.
It is straightforward to measure the $d$ offsets versus $\theta$ and
$\phi$ from Bhabha, dimuon, or $q\bar{q}$ events.
Corrections may then be applied to the $\tau$ data.
On the other hand, offsets in $\phi$ are difficult to measure;
the systematic uncertainty on the lifetime should allow for
a range of possibilities.
Systematic offsets in impact parameter and direction may also be present
in the $rz$ view.
These offsets, affecting methods such as DL and 3DIP, which do not operate
solely in the $xy$ projection,
are difficult to measure.
Moreover, there is no bias cancellation rule for such offsets because
the unbroken acceptance and periodic boundary conditions do not
apply in $\theta$ as they do in $\phi$.
Finally, it is interesting to note that the absolute scale of the
lifetime measurements is set by the impact parameter scale,
which depends on the detector dimensions.
In experiments with microstrip or pixel vertex detectors,
it is the pitch of those detector elements that counts,
not the radii of the vertex detector layers.
\section{Treatment of biases}
With some methods, the $\tau$ lifetime measurement must be
``calibrated'' by means of Monte Carlo events.
For example, the interpretation of
the impact parameter sum distributions studied in the MD/MIPS methods
is based on simulated events.
In other methods (e.g., DL and IPD)
there is a simple geometric relation
between the observables and the mean lifetime,
so, to first order, no calibration is needed.
In such cases the Monte Carlo is normally used
to check for ``possible'' biases in a measurement;
a sample of events with known generated lifetime
is passed through the analysis,
and the reconstructed mean lifetime is compared to the input value.
The experimenters may then choose one of two valid approaches:
{\mylists
\begin{enumerate}
\item If $\tau_{\rm output}$ is significantly different from
$\tau_{\rm input}$,
apply the difference as a correction to the lifetime measured
in the data.
\item Always apply the difference
as a correction to the lifetime measured in the data.
\end{enumerate}}
Approach~1 is used by most experiments.
It turns out that between 1985 and 1993
the difference $\tau_{\rm output} - \tau_{\rm input}$
was not subtracted
in eight published measurements of $\tau_{\tau}$ with the DL method.
In all eight cases, $\tau_{\rm output}$
was greater than $\tau_{\rm input}$~\cite{srw}.
This observation is experimental evidence that
common biases are present in all DL measurements.
Corrections must be applied for all biases before
a valid world average can be calculated.
I would encourage experimenters to apply those corrections themselves,
but to go far beyond Approach~2.
We can and should identify and measure
each {\it individual\/} source of bias
in our analyses by means of Monte Carlo events.
Here is an example from the IPD method.
The bias that results from radiative events in which
the $\tau^{+}$ and $\tau^{-}$ are {\it not\/} back to back in the
$xy$ projection can be evaluated by calculating $\Delta\phi$
(see Section~\ref{ss:ipd})
from Monte Carlo truth information,
with and without a correction for the acoplanarity of the $\tau$'s,
and comparing the fitted lifetimes in the two cases.
Such a technique yields a measurement of this one bias
with very good statistical precision,
and the reliability of the simulation can be checked by
searching for $e^+e^-\rightarrow\gamma\tau^+\tau^-$ events in data and Monte Carlo.
A series of measurements of this type can be devised,
using various pieces of information from the Monte Carlo truth,
in order to evaluate each contribution to
$\tau_{\rm output} - \tau_{\rm input}$.
We can rigorously evaluate the systematic uncertainty
on the lifetime measurement
only by understanding the magnitude of each bias contribution.
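The difference between the two approaches can be illustrated with pseudo-experiments (a sketch with assumed numbers, not data from the published measurements): when a common true bias is comparable to the Monte Carlo error on $\tau_{\rm output} - \tau_{\rm input}$, subtracting only ``significant'' differences leaves a residual bias, whereas always subtracting does not.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed numbers for illustration only
n_exp    = 100_000   # pseudo-experiments
b_true   = 1.5       # common true bias in tau_output - tau_input, fs
sigma_mc = 1.0       # Monte Carlo statistical error on that difference, fs

# Each pseudo-experiment measures the calibration difference with MC noise
delta = b_true + rng.normal(0.0, sigma_mc, n_exp)

# Approach 1: subtract the difference only when it is "significant"
corr1 = np.where(np.abs(delta) > 2.0*sigma_mc, delta, 0.0)
# Approach 2: always subtract the difference
corr2 = delta

residual1 = b_true - corr1.mean()   # bias left in the corrected lifetime
residual2 = b_true - corr2.mean()

print(f"Approach 1 leaves {residual1:+.2f} fs of bias on average")
print(f"Approach 2 leaves {residual2:+.2f} fs of bias on average")
```

This mirrors the observation above: eight unsubtracted DL differences, all of the same sign, point to a common bias that Approach 1 silently retains.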
\section{History of the $\tau$ lifetime}
The world average $\tau$ lifetime values evaluated by
the Particle Data Group since 1986
are plotted in Fig.~\ref{f:history}.
A fairly steady decline is observed in these averages.
In my opinion, three factors contribute to this decline:
statistics,
unsubtracted biases,
and ``other effects.''
As evidence for the presence of ``other effects,''
the $\tau$ lifetime average
in the 1992 Review of Particle Properties~\cite{pdg}
had $\chi^2 = 2.0$ for 10 degrees of freedom,
corresponding to a confidence level of $0.9962$.
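The quoted confidence level follows from the $\chi^2$ survival function, which has a closed form for even degrees of freedom (a quick arithmetic check; the $0.9962$ in the Review presumably reflects rounding):

```python
import math

def chi2_sf(x, dof):
    """Survival function P(chi^2 >= x) for even dof, via the closed form
    exp(-x/2) * sum_{j<dof/2} (x/2)^j / j!  (Poisson-Erlang relation)."""
    assert dof % 2 == 0
    return math.exp(-x/2) * sum((x/2)**j / math.factorial(j)
                                for j in range(dof//2))

cl = chi2_sf(2.0, 10)
print(f"P(chi2 >= 2.0 | 10 dof) = {cl:.4f}")
```

A confidence level this close to 1 means the individual measurements agree far better than their quoted errors would suggest, consistent with the presence of ``other effects.''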
\begin{figure}[t]
\begin{center}
\mbox{\epsfxsize=55mm\epsffile{history.ps}}%
\end{center}\par\vspace{-15mm}
\caption{RPP world average $\tau$ lifetime value
versus year~\cite{pdg}.}
\label{f:history}
\end{figure}
\section{Thoughts on the next method}
Here are a few ideas about a possible ultimate method
for analyzing \mbox{1-1}\ topology events.
We may be able to squeeze a little more out of the \mbox{1-1}\
(and perhaps \mbox{1-3}) topology events
if we created a method that takes into account all of the
available information:
energies and directions of charged and neutral particles,
charged particle identification,
impact parameters in $r\phi$ and $rz$ views, and
the position and size of the luminous region.
The method should
transform this information into two or three variables
from which the lifetime is extracted.
It should take into account the known $\tau$ decay dynamics
and allow for initial and final state radiation.
Furthermore, I believe we will not be able to make
substantial advances in precision unless
the new method allows the impact parameter resolution
to be fitted from the $\tau$ data itself,
as in the 3DIP method.
\section{Outlook and conclusions}
In conclusion, the world average $\tau$ lepton lifetime
is
\[ \tau_{\tau} = 290.5 \pm 1.0 \,{\mathrm{fs}}, \]
and the $\chi^2$ of the average looks healthier.
The LEP experiments dominate the world average.
It will be a challenge to achieve the next factor of 8
improvement in precision on $\tau_{\tau}$.
We will need to rely on the large statistics of CLEO and the b factories,
with considerable care to understand and reduce systematic errors
related to the fitting procedures, tracking resolution,
and backgrounds.
\section*{Acknowledgements}
I would like to thank the organizers,
particularly Toni Pich and Alberto Ruiz,
for giving me the chance to participate in the workshop.
It was a pleasure to visit Santander and the surrounding area,
and the arrangements, accommodations, and food were excellent!
I would also like to thank
Attilio Andreazza,
Roy Briere,
Auke-Pieter Colijn,
Mourad Daoudi,
Jean Duboscq,
Wolfgang Lohmann,
Duncan Reid,
Patrick Saull,
Abe Seiden, and
Randy Sobie
for providing information for this talk.
\section{Introduction}
It is well--known that the Bogoliubov model (BM) of the weakly
interacting Bose gas~\cite{Bog1} is a cornerstone of the theory of
many--boson systems. The long--range spatial correlations of
bosons are properly taken into account within this model. As to the
short--range ones, there are situations when the latter need more
accurate treatment. In particular, one can mention the troubles
appearing within BM when arbitrary strong repulsion between bosons
is expressed in a nonintegrable (singular) interparticle potential
$\Phi(r)$ behaving at small separations as $\displaystyle 1/r^m \;(m
> 3)\,.$ These troubles are commonly overcome by means of using BM
with an effective, 'dressed' interparticle potential (instead of the
'bare' one, $\Phi(r)\;$) that contains all the necessary
'information' concerning the short--range boson
correlations~\cite{Lee,Brueck,Bel,Huge}.
Although there exist a number of comprehensive ways of
constructing effective interaction potentials (the
pseudopotential method~\cite{Lee}, various procedures based on
summation of the ladder diagrams~\cite{Brueck,Bel,Huge})\footnote{
All the procedures can be reduced to the ordinary two--particle
Schr\"odinger equation with the 'bare' potential. }, it looks
interesting and promising to realize an alternative way of
taking the short--range boson correlations into account. We mean a
generalization of BM which operates directly with the 'bare'
potential $\Phi(r)\;$ and provides a reasonable treatment of the
short--range particle correlations side by side with the long--range
ones. A generalization of BM like this is proposed in the present
Letter. Note that we limit ourselves to the case of the zero
temperature and consider only equilibrium characteristics such as
the pair correlation function and boson momentum distribution.
The key point of generalizing BM is based on rejecting the usual way
of dealing with the Bogoliubov model. The investigation presented
concerns the second--order reduced density matrix (2-matrix). In
particular, we operate with in--medium Schr\"odinger equations whose
solutions are the eigenfunctions of the 2--matrix, or the pair wave
functions. As an in--medium interparticle potential depends on these
functions, so the cited Schr\"odinger equations are nonlinear ones.
However, they can be linearized in the weak--coupling regime as well
as for a dilute Bose gas even with strong repulsive interaction
between bosons. The former case corresponds to the canonical
Bogoliubov model. The latter variant is related to, say, the
'strong--coupling' generalization of BM.
The present Letter is organized as follows. In the second part BM is
reconsidered in the framework of the 2--matrix. The third section
concerns the pair wave functions in the generalized BM. At last, to
show a reasonable character of the approach proposed, the
zero--density limit for the Bose gas with strong repulsion between
bosons is discussed in the fourth part of the paper.
\section{The Bogoliubov model in the light of the 2--matrix}
Let us consider a homogeneous cold many--body system of $N$~spinless
bosons with the volume~$V$ and interparticle potential $\Phi(r)\,.$
Note that absence of the spin degrees of freedom simplifies the
further reasoning without a loss of generality.
To start our investigation, let us recall the necessary definitions.
The 2--matrix for the system of interest is written as follows:
\begin{equation}
\rho_2({\bf r}_1^{\prime},{\bf r}_2^{\prime};{\bf r}_1,{\bf r}_2)=
\sum_{\nu}\,w_{\nu}\,\psi_{\nu}({\bf r}_1^{\prime},
{\bf r}_2^{\prime})\;\psi_{\nu}^*({\bf r}_1,{\bf r}_2)\; ,
\label{1}
\end{equation}
where $\psi_{\nu}({\bf r}_1,{\bf r}_2)$ are usually
called~\cite{Bog2} the pair wave functions and, physically,
$\displaystyle \sum_{\nu}\,w_{\nu}=1\;(w_{\nu}\geq 0)\,.$ The
pair wave functions for bosons are symmetric with respect to the
permutation of particles and obey the standard normalization
condition
$$
\int_V\;\int_V |\psi_{\nu}({\bf r}_1,{\bf r}_2)|^2\;
d^3 r_1\;d^3 r_2 = 1.
$$
The 2--matrix is connected with the pair correlation function
\begin{equation}
F_2({\bf r}_1,{\bf r}_2;{\bf r}_1^{\prime},{\bf r}_2^{\prime})=
\langle \psi^+({\bf r}_1) \psi^+({\bf r}_2)
\psi ({\bf r}_2^{\prime})\psi ({\bf r}_1^{\prime})\rangle
\label{2}
\end{equation}
by the expression~\cite{Bog2}
\begin{equation}
\rho_2({\bf r}_1^{\prime},{\bf r}_2^{\prime};{\bf r}_1,{\bf r}_2)=
{F_2({\bf r}_1,{\bf r}_2;{\bf r}_1^{\prime},{\bf r}_2^{\prime})
\over N(N-1)}\;.
\label{3}
\end{equation}
Here $\psi ({\bf r}_1)$ denotes the boson field operator. Knowing
the pair correlation function, one is able to calculate all the
important thermodynamic quantities~\cite{Pines}.
The most general structure of the 2-matrix of the equilibrium
many--body system of spinless bosons is given by the following
expression~\cite{Bog3,Cherny}:
$$
F_2({\bf r}_1,{\bf r}_2;{\bf r}_1^{\prime},{\bf r}_2^{\prime})=
\sum_{\omega,{\bf q}}\,\frac{N_{\omega,q}}{V}
\varphi_{\omega,q}^{*}({\bf r})
\varphi_{\omega,q}({\bf r}^{\prime})
$$
$$
\times\exp\{i{\bf q} ({\bf R}^{\prime}-{\bf R})\}
$$
\begin{equation}
+\sum_{{\bf p},{\bf q}}\frac{N_{{\bf p},{\bf q}}}{V^2}
\varphi_{{\bf p},{\bf q}}^{*}({\bf r})
\varphi_{{\bf p},{\bf q}}({\bf r}^{\prime})
\exp\{i {\bf q} ({\bf R}^{\prime}-{\bf R})\},
\label{4}
\end{equation}
where $\displaystyle {\bf r} = {\bf r}_1-{\bf r}_2,\;{\bf R}
=({\bf r}_1+{\bf r}_2)/2\;.$ The quantity $\displaystyle
\varphi_{\omega,q}({\bf r}) \cdot \exp(i{\bf q}{\bf R})/
\sqrt{V}$ denotes the wave function of the $\omega-$th bound state
of the pair of bosons with the total momentum $\hbar {\bf q}\;.$
Respectively, $\displaystyle \varphi_{{\bf p},{\bf q}}({\bf r})
\cdot \exp(i{\bf q}{\bf R})/V$ stands for the wave function
of a dissociated state of the pair of bosons with the total momentum
$\hbar {\bf q}$ and the momentum of relative motion $\hbar {\bf p}
\;.$ For the characteristics $N_{\omega,q}$ and $N_{{\bf p},
{\bf q}}$ we have: $N_{\omega,q}$ is the duplicated number
of the bound pairs of the $\omega-$th species with the total momentum
$\hbar{\bf q}\; ;\;N_{{\bf p},{\bf q}}$ is the duplicated number of
the dissociated pairs with the total momentum $\hbar {\bf q}$ and the
momentum of relative motion $\hbar {\bf p}\;.$ The wave functions
$\varphi_{\omega,q}({\bf r})$ and $\varphi_{{\bf p},{\bf q}}({\bf
r})$ obey the normalization conditions
$$
\lim\limits_{V \to \infty}\int_V |\varphi_{\omega,q}({\bf r})|^2\,
d^3r=1,
$$
\begin{equation}
\lim\limits_{V \to \infty} {1 \over V}
\int_V|\varphi_{{\bf p},{\bf q}}({\bf r})|^2\,d^3r=1,
\label{5}
\end{equation}
and have the symmetry properties
$$
\varphi_{\omega,q}({\bf r})=\varphi_{\omega,q}(-{\bf r})\,,
$$
$$
\varphi_{{\bf p},{\bf q}}({\bf r})=\varphi_{{\bf p},{\bf q}}
(-{\bf r})=\varphi_{-{\bf p},{\bf q}}({\bf r})\,,
$$
which are a consequence of the Bose statistics. Thus, the first term
in the right--hand side of (\ref{4}) represents the sector of the
bound pairs; the second one corresponds to the dissociated states.
Remark that generally speaking, one can expect a discrete index to
appear in addition to ${\bf q}$ and ${\bf p}$ for the dissociated
states in rather complicated situations. However, this does not
concern our present consideration. So, we have restricted ourselves
to the summation over ${\bf q}$ and ${\bf p}$ in the second term of
the right--hand side of (\ref{4}).
A comprehensive analysis recently carried out in Ref.~\cite{Cherny} has
demonstrated that, in the thermodynamic limit, the correlation
function (\ref{4}) can be rewritten as
$$
F_2({\bf r}_1,{\bf r}_2;{\bf r'}_1,{\bf r'}_2)=
n_{0}^{2}\varphi(r)\varphi(r')
+\int d^{3}p\,d^{3}q\;\frac{n_{0}}{(2\pi)^{3}}
$$
$$
\times\Biggl\{\delta\left({\bf p}
-\frac{{\bf q}}{2}\right)n\left({\bf p}+\frac{{\bf q}}{2}\right)
+\delta\left({\bf p}+\frac{{\bf q}}{2}\right)
n\left({\bf p}-\frac{{\bf q}}{2}\right)\Biggr\}
$$
\begin{equation}
\times\varphi^{*}_{{\bf p}}({\bf r})\varphi_{{\bf p}}({\bf r}^{\prime})
\exp\{i{{\bf q}}({\bf R}^{\prime }-{\bf R})\}
+{\widetilde F}_2({\bf r}_1,{\bf r}_2;{\bf r'}_1,{\bf r'}_2),
\label{5b}
\end{equation}
where $n_0$ denotes the density of the condensed particles;
$n(p)=n({\bf p})$ stands for the distribution of the noncondensed
bosons over momenta. Note that the Bose--Einstein condensation of
particles is accompanied by the condensation of the particle pairs
and, thus, by the appearance of the $\delta-$functional terms
(the off--diagonal long--range order) in the pair distribution over
momenta $\hbar\,{\bf p}$ and $\hbar\,{\bf q}$~\cite{Yang}. The
first term in the right--hand side of (\ref{5b}) is conditioned by
presence of a macroscopic number of the pairs with $q=p=0$.
Since they include only the condensed bosons, we can call them the
condensate--condensate pairs. The second term in (\ref{5b})
corresponds to the condensate--supracondensate couples. Besides a
condensed particle, they also include a noncondensed boson. At last,
${\widetilde F}_2({\bf r}_1,{\bf r}_2; {\bf r}_1^{\prime},
{\bf r}_2^{\prime})$ is the contribution made by the
supracondensate--supracondensate dissociated states of a pair and,
maybe, by its bound states. For the wave functions of the
condensate--condensate and condensate--supracondensate couples
we have
\begin{equation}
\varphi(r)=1+\psi(r),
\;\varphi_{{\bf p}}({\bf r})=\sqrt{2}
\cos({\bf p}{\bf r})+\psi_{{\bf p}}({\bf r}) \quad (p\not=0),
\label{7}
\end{equation}
where the boundary conditions
\begin{equation}
\psi(r)\rightarrow 0 \;\;(r\rightarrow \infty), \quad \;
\psi_{{\bf p}}({\bf r}) \rightarrow 0 \;\;(r\rightarrow \infty)
\label{8}
\end{equation}
take place. At small particle separations the pair wave function
$\varphi_{{\bf p},{\bf q}}({\bf r})$ is very close to the usual
wave function of the two--body problem with the relative momentum
$p$. Therefore, for a singular interparticle potential,
when $\Phi(r) \propto 1/r^m\;(m > 3)$ at small $r$, we have
$\varphi_{{\bf p},{\bf q}}({\bf r})\rightarrow 0$ as $r\rightarrow
0\,.$ And, hence, the relations
\begin{equation}
\psi(r=0)=-1, \quad \;\psi_{{\bf p}}({\bf r}=0)=-\sqrt{2}
\label{5c}
\end{equation}
are fulfilled.
In the case of a small depletion of the zero momentum state
(it is of interest in this Letter) we can neglect the third term in
expression (\ref{5b}):
$$
F_2({\bf r}_1,{\bf r}_2;{\bf r}_1^{\prime},{\bf r}_2^{\prime})=
n_0^2\,\varphi^*(r)\,\varphi(r^{\prime})
$$
\begin{equation}
+\frac{16 n_0}{(2\pi)^3}\int d^3p\,n(2p)
\varphi_{{\bf p}}^*({\bf r})\varphi_{{\bf p}}({\bf r}^{\prime})
\exp\{ i 2{\bf p}({\bf R}^{\prime}-{\bf R})\}.
\label{6}
\end{equation}
As is known, there are two physical situations in which the Bose
condensate fraction is expected to be close to 1. One of them is
related to the weak--coupling regime when a small depletion of the
zero momentum state results from a weak interaction of bosons. The
Bogoliubov model is an adequate approach of investigating this case.
Another situation occurs when we deal with a dilute Bose gas with an
arbitrary strong interaction between particles (singular potential).
Here the dilution of the system gives rise to the small depletion.
In this 'strong--coupling' regime the short--range correlations
play a significant role, which is expressed in relations (\ref{5c}).
On the contrary, the weak--coupling case is specified by the
inequalities
\begin{equation}
|\psi(r)| \ll 1, \;\quad |\psi_{{\bf p}}({\bf r})| \ll 1\;.
\label{9}
\end{equation}
In particular, the Bogoliubov model is characterized by the
choice~\cite{Cherny}
\begin{equation}
|\psi(r)| \ll 1, \;\quad \psi_{{\bf p}}({\bf r}) = 0\;.
\label{10}
\end{equation}
Expressions (\ref{6}) and (\ref{10}) allow one to obtain
$$
F_2({\bf r}_1,{\bf r}_2;{\bf r}_1,{\bf r}_2)=
n^{2}g(r)=
n_{0}^{2}\Bigl(1+\psi(r)+\psi^*(r)\Bigr)
$$
\begin{equation}
+2n_0\biggl(n-n_0+\frac{1}{(2\pi)^3}\int n(k)
\exp(i{\bf k}{\bf r})\,d^3k\biggr),
\label{11}
\end{equation}
where $n=N/V$ and $g(r)$ is the radial distribution function.
According to the weak--coupling conditions (\ref{9}) and the
approximation adopted in (\ref{6}), it is correct to neglect the
terms of the order of $\psi(r) (n-n_0)$ and $(n-n_0)^2$ in
(\ref{11}). Besides, we may choose the wave function
of the pair ground state as a real quantity~\cite{Feyn}:
$\psi(r)=\psi^*(r)\,.$ So, expression (\ref{11}) can be rewritten
as
\begin{equation}
g(r)=1+2\psi(r)+\frac{2}{(2\pi)^3n}
\int n(k)\exp(i{\bf k}{\bf r})\,d^3k.
\label{12}
\end{equation}
Let us show that (\ref{12}) does represent the result of BM. To be
convinced of this, we need the equality
\begin{equation}
\widetilde{\psi}(k)=\int\psi(r)
\exp(-i{\bf k}{\bf r})\,d^3r=
\frac{1}{n_0}\langle a_{{\bf k}}\,a_{-{\bf k}}\rangle
\label{13}
\end{equation}
connecting $\psi(r)$ with the boson annihilation
operators~\cite{Cherny}. At T=0 (we deal with the zero temperature
case in the present Letter) BM yields~\cite{Bog1,Bog2} the following
relations:
\begin{equation}
\langle a_{{\bf k}}\,a_{-{\bf k}}\rangle={A_k \over 1-A_k^2},\quad
n(k)={A_k^2 \over 1-A_k^2}\; ,
\label{14}
\end{equation}
where
\begin{equation}
A_k ={1 \over n_0\widetilde{\Phi}(k)}\biggl(E(k)-{\hbar^2k^2
\over 2m} -n_0\,\widetilde{\Phi}(k)\biggr)
\label{15}
\end{equation}
and
$$
E(k)=\sqrt{\frac{\hbar^2 k^2}{m}\,n_0\widetilde{\Phi}(k)+
\frac{\hbar^4 k^4}{4 m^2}},
$$
\begin{equation}
\widetilde{\Phi}(k)=\int\Phi(r)\exp(-i{\bf k}{\bf r})\,d^3r.
\label{16}
\end{equation}
Using (\ref{12}), (\ref{13}) and (\ref{14}) one is able to arrive
at
\begin{equation}
g(r)=1+\frac{2}{(2\pi)^3\,n}\int\,{A_k\over1-A_k}\;\exp(i
{\bf k}{\bf r})\;d^3k\;.
\label{17}
\end{equation}
This relation is exactly the result of the Bogoliubov model (see
Ref.~\cite{Bog2}).
Concluding this part of the paper, let us note an
interesting equation that follows from (\ref{13})--(\ref{16}) and
is important for the reasoning of the next section. It is given
by
\begin{equation}
-\frac{\hbar^2 k^2}{m} \widetilde{\psi}(k)=
\widetilde{\Phi}(k)+2\,\widetilde{\Phi}(k)\,\Bigl(n(k)+
n_0\,\widetilde{\psi}(k)\Bigr)\; ,
\label{18}
\end{equation}
where $n_0\,\widetilde{\psi}(k)\;$ can be replaced by
$n\,\widetilde{\psi}(k)\;$ because we agreed to neglect the terms
of the order of $(n-n_0)\,\psi(r)\;.$ From (\ref{12}) and (\ref{18})
it follows that
\begin{equation}
\frac{\hbar^2}{m}\,\nabla^2\,\varphi(r)=\Phi(r) +
n\,\int\, \Phi(|{\bf r}-{\bf y}|) \;\Bigl( g(y)-1\Bigr)\;d^3y\;.
\label{19}
\end{equation}
This looks like the Schr\"odinger equation in the Born approximation.
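Since (\ref{18}) is an exact consequence of (\ref{13})--(\ref{16}), it can be checked numerically for arbitrary parameters (a quick sketch in units $\hbar=m=1$, an assumption of the check):

```python
import numpy as np

rng = np.random.default_rng(0)
hbar = m = 1.0                              # units (assumed)

worst = 0.0
for _ in range(200):
    k, n0, Phi = rng.uniform(0.1, 5.0, 3)   # arbitrary positive parameters
    eps = hbar**2*k**2/(2*m)                # free-particle kinetic energy
    E   = np.sqrt(2.0*eps*n0*Phi + eps**2)  # Bogoliubov spectrum, Eq. (16)
    A   = (E - eps - n0*Phi)/(n0*Phi)       # Eq. (15)
    psi = A/((1.0 - A**2)*n0)               # psi~(k) from Eqs. (13), (14)
    nk  = A**2/(1.0 - A**2)                 # momentum distribution, Eq. (14)
    lhs = -(hbar**2*k**2/m)*psi
    rhs = Phi + 2.0*Phi*(nk + n0*psi)       # right-hand side of Eq. (18)
    worst = max(worst, abs(lhs - rhs)/abs(rhs))

print(f"largest relative violation of Eq. (18): {worst:.2e}")
```

The identity holds to machine precision for every parameter set, confirming the algebra leading from the Bogoliubov relations to (\ref{18}).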
\section{Pair wave functions in the generalized Bogoliubov model}
As noted above, this Letter aims to generalize the
Bogoliubov model in such a way that the short--range
correlations are properly taken into account side by side
with the long--range ones, a small depletion of the zero
momentum state being assumed throughout. So, in our further
investigation it is correct to rely on expression (\ref{6})
taken beyond the weak--coupling regime defined by (\ref{9}).
To employ approximation (\ref{6}) for the 2-matrix, one needs
to determine the wave functions $\varphi(r)$ and $\varphi_{{\bf p}}
({\bf r})$ beyond the weak coupling. To do this, let us
consider the in--medium two--particle problem:
\begin{equation}
H_{12}\;\psi_{\nu}({\bf r}_1,{\bf r}_2)=E_{\nu}\;
\psi_{\nu}({\bf r}_1,{\bf r}_2)\;.
\label{23}
\end{equation}
The Hamiltonian $H_{12}$ of two bosons placed into the medium of
similar bosons can be represented as
\begin{equation}
H_{12}= -\frac{\hbar^2}{2m}\nabla^2_1 - \frac{\hbar^2}{2m}\nabla^2_2
+\Phi(|{\bf r}_1-{\bf r}_2|)+U_1+U_2\;,
\label{24}
\end{equation}
where $U_i\;(i=1,2)$ stands for the energy of the interaction of the
$i-$th particle with the medium. Proceeding in the spirit of
the Thomas--Fermi approach~(for details see Ref.~\cite{shan}) and,
thus, neglecting retarding effects, one is able to approximate $U_i$
in the form
\begin{equation}
U_i=(N-2)\int\Phi(|{\bf r}_i-{\bf r}_3|)
w({\bf r}_1,{\bf r}_2,{\bf r}_3)\,d^3r_3\ \ (i=1,2),
\label{25}
\end{equation}
where $w({\bf r}_1,{\bf r}_2,{\bf r}_3)$ denotes the density of the
probability of observing the third particle at the point ${\bf r}_3$
under the condition that the first and second ones are located at
${\bf r}_1$ and ${\bf r}_2$. This quantity is connected with the
third and second reduced density matrices via the relation
\begin{equation}
w({\bf r}_1,{\bf r}_2,{\bf r}_3)=
\rho_2^{-1}({\bf r}_1,{\bf r}_2;{\bf r}_1,{\bf r}_2)\,
\rho_3({\bf r}_1,{\bf r}_2,{\bf r}_3;{\bf r}_1,{\bf r}_2,
{\bf r}_3)\,.
\label{26}
\end{equation}
Using the Kirkwood superposition approximation~\cite{Hill}
$$
V^3\;\rho_3({\bf r}_1,{\bf r}_2,{\bf r}_3;{\bf r}_1,{\bf r}_2,
{\bf r}_3)
$$
\begin{equation}
\simeq g(|{\bf r}_1-{\bf r}_2|)\,
g(|{\bf r}_1-{\bf r}_3|)\,g(|{\bf r}_2-{\bf r}_3|)\;,
\label{27}
\end{equation}
one can obtain
\begin{equation}
w({\bf r}_1,{\bf r}_2,{\bf r}_3)\simeq \frac{1}{V}\,
g(|{\bf r}_1-{\bf r}_3|)\,g(|{\bf r}_2-{\bf r}_3|)\;.
\label{28}
\end{equation}
With (\ref{25}) and (\ref{28}) we arrive at
\begin{equation}
U_1=U_2=n\,\int\, g(|{\bf r}-{\bf y}|)\,\Phi(|{\bf r}-{\bf y}|)\;
g(y)\;d^3y\;.
\label{29}
\end{equation}
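Writing $g(y)=1+\bigl(g(y)-1\bigr)$ splits this mean field into a constant plus the radial convolution of $g\Phi$ with $g-1$, which is conveniently evaluated in Fourier space. A sketch with model inputs (the Gaussian potential, correlation hole, and density below are assumptions, chosen so the result can be cross-checked analytically):

```python
import numpy as np

# Model inputs (assumed for illustration only): a soft Gaussian repulsion
# Phi(r) = exp(-r^2), a correlation hole g(r) = 1 - exp(-r^2), and n = 0.1.
n  = 0.1
rq = np.linspace(1e-9, 10.0, 1500)       # quadrature grid for forward FTs
k  = np.linspace(1e-9, 12.0, 2000)
r  = np.linspace(0.0, 8.0, 400)

dx = lambda x: x[1] - x[0]               # uniform grid spacing

def ft_radial(f, rg, kg):
    """3D Fourier transform of a radial function:
    f~(k) = (4*pi/k) * Int_0^inf r*sin(k*r)*f(r) dr."""
    return 4.0*np.pi/kg * np.sum(rg*np.sin(np.outer(kg, rg))*f, axis=1)*dx(rg)

gPhi = np.exp(-rq**2) - np.exp(-2.0*rq**2)          # g(r)*Phi(r)
hole = -np.exp(-rq**2)                              # g(r) - 1

Wk = ft_radial(gPhi, rq, k)*ft_radial(hole, rq, k)  # convolution in k space

# Inverse transform, written with sinc so that r = 0 is regular
W = n/(2.0*np.pi**2)*np.sum(k**2*np.sinc(np.outer(r, k)/np.pi)*Wk, axis=1)*dx(k)

# Analytic cross-check from the 3D Gaussian convolution identity
W_exact = n*(-(np.pi/2)**1.5*np.exp(-r**2/2) + (np.pi/3)**1.5*np.exp(-2*r**2/3))
print("max |numeric - analytic| =", np.max(np.abs(W - W_exact)))
```

The Fourier route reproduces the closed-form Gaussian result to high accuracy; for a realistic singular $\Phi$ one would tabulate $g\Phi$ instead, which remains integrable because $g$ vanishes inside the core.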
Thus, equation (\ref{23}) taken with the specifications (\ref{24})
and (\ref{29}) separates in the usual variables ${\bf r}={\bf r}_1-
{\bf r}_2$ and ${\bf R}=({\bf r}_1+{\bf r}_2)/2$,
$$
\psi_{\nu}({\bf r}_1,{\bf r}_2)=\varphi_{{\bf p}}({\bf r})\,
\frac{\exp(i{\bf q}{\bf R})}{\sqrt{V}}\;,
$$
and yields the following relation for the wave function
$\varphi_{{\bf p}}({\bf r})$:
$$
-\frac{\hbar^2}{m}\nabla^2\,\varphi_{{\bf p}}({\bf r})+
\Phi(r)\varphi_{{\bf p}}({\bf r})
$$
\begin{equation}
+2\varphi_{{\bf p}}({\bf r})n
\int g(|{\bf r}-{\bf y}|)\Phi(|{\bf r}-{\bf y}|)
g(y)\,d^3y=\varepsilon_p\varphi_{{\bf p}}({\bf r}).
\label{31}
\end{equation}
Let us consider equation (\ref{31}). The quantity $\varepsilon_p$
can be found by taking the limit $r\rightarrow \infty\,.$
Using the asymptotic relations
$$
\lim\limits_{r\rightarrow \infty } \;g(r)=1\;,\quad
\lim\limits_{r\rightarrow \infty } \;\Phi(r)=0\;,
$$
we readily obtain at $r\rightarrow \infty$
$$
\int g(|{\bf r}-{\bf y}|)\Phi(|{\bf r}-{\bf y}|)
g(y)\,d^3y\rightarrow
\int g(|{\bf r}-{\bf y}|)\Phi(|{\bf r}-{\bf y}|)\,d^3y.
$$
Thus, we arrive at
\begin{equation}
\varepsilon_p=\frac{\hbar^2\,p^2}{m}+2n\;
\int\, g(|{\bf r}-{\bf y}|)\,\Phi(|{\bf r}-{\bf y}|)\; d^3y
\label{32}
\end{equation}
rather than $\varepsilon_p=\hbar^2\,p^2/m$ which appears within the
ordinary two--body problem. Inserting (\ref{32}) into (\ref{31}),
one is able to find
$$
\frac{\hbar^2}{m}\nabla^2\varphi_{{\bf p}}({\bf r})=
-\frac{\hbar^2 p^2}{m}\,\varphi_{{\bf p}}({\bf r})+\Phi(r)\varphi_{{\bf p}}({\bf r})
$$
\begin{equation}
+2\xi_{ex}n\varphi_{{\bf p}}({\bf r})
\int g(|{\bf r}-{\bf y}|)\Phi(|{\bf r}-{\bf y}|)
\Bigl( g(y)-1\Bigr)\,d^3y,
\label{33}
\end{equation}
where the equality
$$
\int\, g(|{\bf x}-{\bf y}|)\,\Phi(|{\bf x}-{\bf y}|)\;d^3y\;=\;
\int\, g(|{\bf z}-{\bf y}|)\,\Phi(|{\bf z}-{\bf y}|)\;d^3y
$$
is used. In (\ref{33}) $\xi_{ex}$ is a correction factor introduced
to compensate for the oversimplified treatment of the exchange
effects in the derivation of (\ref{31}). Indeed, the exchange
between the bosons of the pair is taken into consideration:
$\varphi_{{\bf p}}({\bf r})=\varphi_{{\bf p}}(-{\bf r})=
\varphi_{-{\bf p}}({\bf r})\;.$ However, the arguments leading to
(\ref{31}) ignore the exchange between the particles of the pair and
the surrounding bosons, whose influence is treated at the
mean--field level, in the spirit of the Thomas--Fermi approach. This
correction factor may, in general, be a function of ${\bf p}$ and
other quantities related to the problem:
$\xi_{ex}=\xi_{ex}({\bf p}, \ldots)\;.$ For
${\bf p}=0$ equation (\ref{33}) has to reduce to (\ref{19}) provided
relations (\ref{10}) are valid, which allows one to find, in the
weak--coupling regime, $\xi_{ex}({\bf p}=0)=1/2$. The approximation
$\xi_{ex}=1/2$ can then also be employed for ${\bf p}\not=0$ owing
to the small mean absolute value of the boson momentum.
Moreover, we expect that in the situation of a large condensate
fraction the choice $\xi_{ex}=1/2$ remains correct beyond the weak
coupling. The main reason is that the interaction
between the pair of particles and the medium is weak in both
cases: for potentials with a repulsive core this is due to the
small density of surrounding bosons.
Remark that, in contrast to (\ref{19}), relation (\ref{33}) reduces
to the usual Schr\"odinger equation of the two--body problem
in the limit $n\rightarrow 0$. The same occurs for $r \rightarrow 0$
in the situation of arbitrarily strong repulsion between bosons.
Indeed, in this case $\Phi(r)\rightarrow \infty \;$ at $r
\rightarrow 0\,.$ Hence, the second term on the right--hand side of
(\ref{33}) becomes much smaller than $\Phi(r)$ at small $r\,,$ which
leads to conditions (\ref{5c}) being fulfilled at any particle
density. It is noteworthy that (\ref{33}) with $\xi_{ex}=1/2$
coincides with one of the basic relations of the approach developed
in papers~\cite{Our}.
An important peculiarity of equation~(\ref{33}) is that it can be
used without any divergence in the integral term in the case of a
singular interparticle potential because $$g(r)\,\Phi(r)\rightarrow
0 \quad\;(r \rightarrow 0)\,.$$ Thus (\ref{33}), which reduces to
(\ref{19}) in the weak--coupling approximation, serves our purpose
of generalizing BM well.
\section{'Strong--coupling' case}
To deal with the generalization of BM based on (\ref{6}) and
(\ref{33}) with $\xi_{ex}=1/2$, we need one more equation. This
is because the number of the functions $g(r), \;n(k),\;\varphi(r),
\;\varphi_{{\bf p}}({\bf r})$ to be determined is larger than the
number of the equations at our disposal. Within the Bogoliubov
model for a weakly interacting Bose gas, the relation additional
to (\ref{12}) and (\ref{19}) at zero temperature is of the form
\begin{equation}
n^2\;\widetilde{\psi}^2(k)=n(k)\;(n(k)+1)\, ,
\label{34}
\end{equation}
(see relations (\ref{13}) -- (\ref{16})). Remark that (\ref{34})
follows from the canonical character of the well--known Bogoliubov
transformation~\cite{Bog1,Bog2}. The question now arises whether one
may employ (\ref{34}) beyond the weak coupling. It turns out
that (\ref{34}) yields quite reasonable results even in the case
of a dilute Bose gas with strong repulsive interaction between
bosons. To verify this, let us consider equations
(\ref{6}), (\ref{33}) and (\ref{34}) in the 'strong--coupling'
regime. From (\ref{34}) it follows that
$$
n(k)=\frac{1}{2}
\left(\sqrt{1+4\,n^2\,\widetilde{\psi}^2(k)}-1\right)\,.
$$
Therefore, in the limit $n \rightarrow 0$ we arrive at
\begin{equation}
\frac{n(k)}{n} =n\;\widetilde{\psi}^2_0(k)\,,
\label{35}
\end{equation}
where $\psi_0(r)$ obeys equation (\ref{33}) taken
at $n=0$ and $p=0$:
\begin{equation}
{\hbar^2 \over m} \nabla^2\Bigl(1+\psi_0(r)\Bigr)=
\Bigl(1+\psi_0(r)\Bigr)\,\Phi(r)\;.
\label{36}
\end{equation}
The relation (\ref{35}) suggests that all the bosons are condensed
in the zero--density limit. So, the use of (\ref{34}) does not
contradict the common expectation concerning a large condensate
fraction in a dilute Bose gas with strong repulsive interaction.
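The limit leading to (\ref{35}) is a one--line expansion: for
$n\rightarrow 0$ one has $4\,n^2\,\widetilde{\psi}^2(k)\ll 1$, so

```latex
n(k)=\frac{1}{2}\left(\sqrt{1+4\,n^2\,\widetilde{\psi}^2(k)}-1\right)
\simeq \frac{1}{2}\cdot\frac{4\,n^2\,\widetilde{\psi}^2(k)}{2}
= n^2\,\widetilde{\psi}^2(k)\,,
```

which is (\ref{35}) after dividing by $n$ and replacing
$\widetilde{\psi}$ by its zero--density limit $\widetilde{\psi}_0$.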
According to equation (\ref{33}), for sufficiently low values
of $p$ we have
\begin{equation}
\varphi_{{\bf p}}({\bf r})\simeq \sqrt{2}\;\varphi(r)\;
\cos({\bf p}{\bf r})\,.
\label{37}
\end{equation}
Since (\ref{35}) is valid at small boson densities $n$, one can take
approximation (\ref{37}) to investigate the thermodynamics of a
dilute Bose gas. Inserting (\ref{37}) into (\ref{6}) we obtain the
following ansatz:
\begin{equation}
g(r)=\varphi^2(r)\Bigl(1 +\frac{2}{(2\pi)^3\,n}
\int\;n(k)\exp(i{\bf k}{\bf r})\,d^3k \Bigr)\;,
\label{38}
\end{equation}
where $\varphi(r)$ is given by equation (\ref{33}) at ${\bf p}=0$
and $\xi_{ex}=1/2$:
$$
\frac{\hbar^2}{m}\nabla^2\varphi(r)=\Phi(r)\varphi(r)
$$
\begin{equation}
+n\,\varphi(r)\int g(|{\bf r}-{\bf y}|)\Phi(|{\bf r}-{\bf y}|)
\Bigl( g(y)-1\Bigr)\,d^3y.
\label{38a}
\end{equation}
This ansatz can be used for arbitrarily strong repulsion between
bosons without any divergence because
$\varphi(r)\,\Phi(r)\rightarrow 0$ at $r \rightarrow 0$ while
$\Phi(r)\rightarrow \infty\,.$ It is worth noting that ansatz
(\ref{38}) works for a weakly interacting Bose gas as well. Indeed,
(\ref{38}) reduces to (\ref{12}) under the assumption $|\psi(r)|
\ll 1$ and with neglect of the term containing the product
$\psi(r)\,n(k)\,.$ Finally, using (\ref{38}) and (\ref{38a}) and
taking the zero--density limit, one can derive
\begin{equation}
\lim\limits_{n \to 0}\;g(r)=\varphi^2_0(r)\;,
\label{39}
\end{equation}
where $\varphi_0(r)=1+\psi_0(r)\,.$ Equality (\ref{39}) is the
well--known result for the pair distribution function of the Bose
gas of strongly interacting particles~\cite{Bog1}.
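The limit (\ref{39}) can be traced directly. By (\ref{35}),
$n(k)\simeq n^2\,\widetilde{\psi}_0^2(k)$ at small $n$, so the
integral term in (\ref{38}) is of order $n$:

```latex
\frac{2}{(2\pi)^3\,n}\int n(k)\,\exp(i{\bf k}{\bf r})\,d^3k
\simeq \frac{2n}{(2\pi)^3}\int \widetilde{\psi}_0^2(k)\,
\exp(i{\bf k}{\bf r})\,d^3k
\;\rightarrow\; 0 \qquad (n\rightarrow 0)\,.
```

Since in the same limit (\ref{38a}) reduces to (\ref{36}), one has
$\varphi(r)\rightarrow\varphi_0(r)$, and (\ref{38}) gives
$g(r)\rightarrow\varphi_0^2(r)$.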
\section{Conclusion}
A generalization of the Bogoliubov model of the cold Bose gas has
been proposed, based on equations (\ref{34}), (\ref{38}) and
(\ref{38a}). These follow from the more complicated set of equations
(\ref{6}), (\ref{33}) with $\xi_{ex}=1/2$, and (\ref{34}) provided
the ansatz (\ref{37}) is used. The generalization properly takes
into account the short--range boson correlations alongside the
long--range ones. The proposed approach yields reasonable
results in the weak--coupling regime as well as in the
'strong--coupling' case of a dilute Bose gas with arbitrarily
intense repulsion. A detailed analysis of the latter case will be
presented in a forthcoming paper. As noted in the Introduction, the
in--medium Schr\"odinger equations can be linearized not only in the
weak--coupling approximation but also in the 'strong--coupling'
regime. In the former case (\ref{38a}) reduces to equation
(\ref{19}), linear in $\psi(r)\,,$ while in the latter (\ref{38a})
reduces to an equation linear in
$\zeta(r)=\varphi(r)-\varphi_0(r)$ owing to the obvious
inequality $|\zeta(r)| \ll|\varphi_0(r)|$ at $n\rightarrow 0\;.$
\section{The power spectrum of galaxies and clusters}
There exists a large body of observational determinations of the power
spectrum of galaxies and clusters of galaxies. In this talk I shall
review the determination of the mean galaxy power spectrum (Einasto
{\it et\thinspace al\/} 1998a, hereafter E98a). The mean galaxy power spectrum shall be
reduced to that of matter (Einasto {\it et\thinspace al\/} 1998b, E98b). This
semi-empirical matter power spectrum shall be used to determine the
primordial power spectrum (Einasto {\it et\thinspace al\/} 1998c, E98c).
\begin{figure*}
\vspace*{6.5cm}
\caption{Power spectra of galaxies and clusters of galaxies scaled to
match the amplitude of the 2-D APM galaxy power spectrum (E98a).
Spectra are shown as smooth curves and are designated as follows:
ACO-E and ACO-R -- Abell-ACO clusters (Einasto {\it et\thinspace al\/} 1997a, Retzlaff
{\it et\thinspace al\/} 1998); APM-T -- APM clusters (Tadros {\it et\thinspace al\/} 1998); APM-TE -- APM
galaxies (Tadros \& Efstathiou 1996); APM-P-GB -- spectra derived from
2-D distribution of APM galaxies (Peacock 1997, Gazta\~naga \& Baugh
1997); IRAS-P -- IRAS galaxies (Peacock 1997); CfA2 -- SSRS-CfA2~130
Mpc galaxy survey (da Costa {\it et\thinspace al\/} 1994); LCRS -- LCRS survey (Landy
{\it et\thinspace al\/} 1996); $P(k)_{mean}$ indicates the mean power spectrum $P_{HD}$
for high-density regions; the power spectrum for medium-density
regions, $P_{MD}$, is identified with the spectrum APM-P-GB. The mean
error of the mean spectrum is 11~\%, for individual samples it varies
between 23 and 48~\%. }
\special{psfile=einastof1.eps voffset=120
hoffset=0 hscale=45 vscale=45}
\label{figure1}
\end{figure*}
Recent determinations of power spectra for large galaxy and cluster
samples are plotted in Figure~1. The compilation is based on the
summary by E98a. Spectra are shifted in amplitude to match the
amplitude of the power spectrum of APM galaxies at wavenumber
$k=0.1$~$h$~{\rm Mpc$^{-1}$}. The APM galaxy spectrum is a
reconstruction of the 3-dimensional spectrum based on the deep
2-dimensional distribution of over 2 million galaxies, thus the
cosmic error is smaller here than in available 3-D surveys. The APM
galaxy spectrum is also free of redshift distortions. We see that
after vertical scaling there is little
scatter between individual determinations of power spectra on medium
and small scales. On large scales around the maximum the scatter is
much larger.
We have formed two mean power spectra. One spectrum is based on
samples having power spectra with a high amplitude near the maximum.
Such samples are Abell-ACO and APM cluster surveys, the redshift
survey of APM galaxies, and the SSRS-CfA2-130 galaxy survey. These
samples cover large regions in space where both high- and
medium-density regions are well represented, thus we use the notation
$P_{HD}$ for this mean power spectrum (HD for high-density). This mean
power spectrum has a relatively sharp maximum at $k=0.05 \pm
0.01$~$h$~{\rm Mpc$^{-1}$}, followed by an almost exact power-law spectrum of index
$n\approx -1.9$ toward smaller scales.
The other mean spectrum is based on samples which have a power
spectrum with a shallower turnover; such samples are the LCRS survey,
IRAS QDOT galaxies, and the APM 2-D sample of galaxies. In the LCRS
and IRAS QDOT surveys medium-density regions are well represented,
but the regions of highest density (very rich superclusters) are
not. We use the
notation $P_{MD}$ for this mean power spectrum (MD for
medium-density). On medium and small scales it coincides with the
previous spectrum, but it has a maximum of lower amplitude.
Presently it is not clear whether the difference between these two
spectra is real or partly due to artifacts of data handling.
Several arguments suggest that there exist real differences between
power spectra of various samples. All samples which have a high
amplitude of the power spectrum near the maximum are deep fully 3-D
samples. In contrast, samples with a shallower power spectrum are
not as deep, or, like the LCRS sample, do not contain very rich
superclusters. In the IRAS QDOT sample, galaxies in rich
superclusters were removed (Tadros \& Efstathiou 1995).
On the other hand, artifacts of data reduction or the influence of
sample selection and/or geometry are not excluded. For instance,
the Las Campanas survey is not a fully 3-dimensional sample; it is
made in narrow strips, which may smooth out sharp features of the
power spectrum near the maximum. A curious fact is that the 3-D
spectrum reconstructed from the APM 2-D galaxy distribution has a
lower amplitude near the maximum than expected from direct 3-D
observations of galaxies from the same sample. This difference is
not explained yet; it may be due to problems with the
reconstruction of the 3-D power spectrum from 2-D data.
The difference between two mean power spectra can be considered as the
combined result of the cosmic scatter and our ignorance of all details
of the data reduction.
\section{The reduction of galaxy power spectrum to matter}
To compare the observed power spectrum with theoretical spectra, the
galaxy spectrum must be reduced to that of matter.
Differences between the distribution of galaxies and matter are due to
the gravitational character of the evolution of the Universe. As shown
already by Zeldovich (1970), the evolution of under- and over-dense
regions is different. Matter flows away from low-density regions
toward high-density ones until it collapses. In order to form a galaxy
or a system of galaxies, the mean density of matter in a sphere of
radius $r$ must exceed the mean density by a factor of 1.68 (Press \&
Schechter 1974), the radius $r$ determines the mass of the contracting
object. Thus in low-density regions (voids) galaxies are absent, but
gravity is unable to evacuate the voids completely -- primordial
matter remains in them. Visible matter is concentrated together
with dark matter in a web of galaxy filaments and superclusters
(Zeldovich, Einasto \& Shandarin 1982, Bond, Kofman \& Pogosyan 1996).
\begin{figure}
\vspace{4.5cm}
\caption{The biasing parameter as a function of the wavenumber $k$
for 2-D simulation, determined for all galaxies (threshold density
$\rho_0=1$ in units of the mean density), galaxies in high-density
regions ($\rho_0= 2$), and galaxies in clusters ($\rho_0=5$). }
\special{psfile=einastof2.eps voffset=65 hoffset=-20 hscale=30
vscale=30}
\label{figure2}
\end{figure}
These considerations show that model particles can be divided into two
populations: the unclustered population in voids and the clustered
population in high-density regions. The latter population is
associated with galaxies, including the DM halos of galaxies and
clusters of galaxies. To get the clustered population one has to
exclude the
population of low-density particles using a certain threshold density,
$\rho_0$, which divides the matter into the matter in voids and the
clustered matter. Hydrodynamical simulations by Cen \& Ostriker
(1992, 1998) show that the overall mean density is a good
approximation to the threshold density. In determining the density
field we use smoothing on scales comparable to the size of actual
systems of galaxies ($\sim 1$~$h^{-1}$~{\rm Mpc}).
Analytical calculations and numerical simulations show that the
exclusion of matter from low-density regions raises the amplitude of
the power spectrum but does not change its shape (E98b). Power
spectra of galaxies
and matter are related as follows:
$$
P_{gal}(k) = b^2 P_{m}(k),
\eqno(1)
$$
where the bias factor $b$ is expressed by the fraction of matter in
the clustered population associated with galaxies,
$$
b=1/F_{gal}.
\eqno(2)
$$
Actually the biasing parameter $b$ is a slow function of the
wavenumber $k$, but in the range of scales of interest for the present
study it is practically constant. The biasing parameter $b(k)$ is
shown in Figure~2 for a 2-D model, it is found by comparing power
spectra of all particles (matter) and clustered particles associated
with galaxies using different threshold density $\rho_0$ (E98b).
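The recovery of $b(k)$ from a pair of spectra can be sketched
numerically (a minimal illustration with a toy spectrum; the array
values are invented and are not the simulation data of E98b):

```python
import numpy as np

def bias_factor(p_gal, p_m):
    """Scale-dependent biasing parameter b(k) = sqrt(P_gal(k) / P_m(k))."""
    return np.sqrt(p_gal / p_m)

# Illustrative wavenumber grid (h/Mpc) and a toy matter spectrum
# with a maximum near k ~ 0.05
k = np.linspace(0.05, 1.0, 20)
p_m = k / (1.0 + (k / 0.05) ** 2.9)

f_gal = 0.75            # fraction of matter in the clustered population
b = 1.0 / f_gal         # equation (2)
p_gal = b ** 2 * p_m    # equation (1)

# b(k) is recovered, ~1.33 and constant in k by construction
print(bias_factor(p_gal, p_m)[0])
```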
\begin{figure}
\vspace{4.5cm}
\caption{The fraction of matter associated with galaxies, $F_{gal}$,
as a function of time, measured through the $\sigma_8$ parameter
(curved lines). The thick straight line shows the relation of
equation~(3); the open circle marks the observed value of
$\sigma_8$. }
\special{psfile=einastof3.eps voffset=65 hoffset=-20 hscale=30
vscale=30}
\label{figure3}
\end{figure}
\begin{figure*}
\vspace*{4.5cm}
\caption{The semi-empirical matter power spectra compared with
theoretical and primordial power spectra for mixed DM models. Left:
present power spectra; right: primordial spectra. Solid bold line
shows the matter power spectrum found for regions including rich
superclusters, $P_{HD}(k)$; dashed bold line shows the power spectrum
of matter $P_{MD}(k)$ for medium dense regions in the Universe. On
small scales observed power spectra are corrected for non-linear
effects. Model spectra with $\Omega_0=0.9, \dots ~0.3$ are plotted
with solid lines; for clarity models with $\Omega_{0} = 1.0$ and $0.5$
are drawn with dashed lines. Primordial power spectra are shown for
the power spectrum $P_{HD}(k)$; they are reduced to scale-free
spectrum, $P(k) \sim k$. }
\special{psfile=einastof4a.eps
voffset=110 hoffset=-10 hscale=29 vscale=29}
\special{psfile=einastof4b.eps voffset=110 hoffset=130 hscale=29
vscale=29}
\label{figure4}
\end{figure*}
The fraction of matter in the clustered population can be determined
from numerical simulations of the void evacuation. This has been done
for several models: the standard CDM model (SCDM, Hubble constant
$h=0.5$), model with cosmological constant (LCDM, $h=0.7$,
$\Omega_0=0.3$), and open model (OCDM, $h=0.7$, $\Omega_0=0.5$). In
Figure~3 we show the increase of the fraction of matter associated
with galaxies for these models (for LCDM, two models with different
realizations are given). The epoch of the simulation is measured in
terms
of $\sigma_8$, rms density fluctuations in a sphere of radius
$r=8$~$h^{-1}$~{\rm Mpc}. From model data alone it is impossible to determine
$\sigma_8$ which corresponds to the present epoch. The value of this
parameter is known from observations for galaxies. It is found by the
integration of the observed mean power spectrum of galaxies; we get
$(\sigma_8)_{gal}=0.89 \pm 0.05$. This value is equal to $\sigma_8$
for matter, if all matter were associated with galaxies
($F_{gal}=1$). In reality they are different and related via an
equation similar to (1) since $\sigma_8^2$ is proportional to the
amplitude of the power spectrum:
$$
(\sigma_8)_m = F_{gal}(\sigma_8)_{gal}.
\eqno(3)
$$
This relationship is plotted in Figure~3 by a straight line. The
intersection of this line with curves $F_{gal}$ vs. $\sigma_8$ for
models yields values of both parameters which correspond to the
present epoch. We get $\sigma_8=0.68 \pm 0.06$ for matter,
$F_{gal}=0.75 \pm 0.05$, and the biasing parameter of galaxies with
respect to matter $b_{gal}=1.3 \pm 0.1$ (E98b).
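A quick arithmetic cross-check of the quoted numbers, using only
equations (2) and (3) and the values cited above (a consistency
check, not an independent determination):

```python
sigma8_gal = 0.89   # from integrating the mean galaxy power spectrum
f_gal = 0.75        # fraction of matter associated with galaxies

sigma8_m = f_gal * sigma8_gal   # equation (3): (sigma_8)_m = F_gal (sigma_8)_gal
b_gal = 1.0 / f_gal             # equation (2): b = 1 / F_gal

# ~0.67 and ~1.33, consistent with the quoted
# 0.68 +/- 0.06 and 1.3 +/- 0.1
print(sigma8_m, b_gal)
```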
\section{The primordial power spectrum}
The final step in our study is the comparison of the power spectrum of
mass with theoretical power spectra for models of structure formation.
We use models with cold dark matter (CDM) and a mixture of cold and
hot dark matter (MDM) in spatially flat models. We derive the transfer
functions for a set of cosmological parameters.
Models are normalized on large scales by the four-year COBE
normalization (Bunn \& White 1997); the density of matter in baryons
is taken as $\Omega_{b} = 0.04$ (in units of the critical density);
and the Hubble parameter is $h = 0.6$. The cosmological constant was
varied between $\Omega_{\Lambda} = 0$ and $\Omega_{\Lambda} = 0.8$.
In mixed dark matter models the density of hot DM was fixed at
$\Omega_{\nu}=0.1$, and the cold DM density was chosen to obtain a
spatially flat model.
Analytical power spectra for MDM models are plotted in the left panel
of Figure~4 together with the semi-empirical matter power spectra,
$P_{HD}(k)$ and $P_{MD}(k)$. MDM models fit the semi-empirical matter
power spectrum better than other models. In the right panel we show
the initial power spectrum,
$$
P_{init}(k) = P(k)/T^{2}(k),
\eqno(4)
$$
compared with the scale-free primordial power spectrum, $P_0(k) \sim k$;
here $T(k)$ is the transfer function. The initial power spectrum is
plotted only for spectrum $P_{HD}(k)$ which corresponds to high-density
regions.
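Equation (4) amounts to dividing out the transfer function bin by
bin; a minimal sketch with a hypothetical tabulated $T(k)$ (the
values are invented for illustration):

```python
import numpy as np

def initial_spectrum(p_now, transfer):
    """Equation (4): P_init(k) = P(k) / T^2(k)."""
    return p_now / transfer ** 2

k = np.array([0.01, 0.05, 0.10])   # wavenumbers, h/Mpc
t = np.array([1.00, 0.60, 0.30])   # hypothetical T(k), -> 1 on large scales
p_now = k * t ** 2                 # a spectrum that is scale-free (P ~ k) primordially

# dividing by T^2 recovers the scale-free form, proportional to k
print(initial_spectrum(p_now, t))
```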
The main feature of primordial power spectra is the presence of a
spike at the same wavenumber as that of the maximum of the observed
power spectrum. Primordial spectra also have a break, i.e. their
amplitudes on small scales are different from the amplitudes on large
scales. The shape of the primordial spectrum varies with the
cosmological constant. The primordial power spectrum which is found from
the shallower spectrum $P_{MD}(k)$ has no sharp peak but the break is
similar to that for the spectrum $P_{HD}$.
The main conclusion from the present analysis is that it is impossible
to avoid a break and/or spike in the primordial power spectrum, if
presently available cluster and galaxy data represent a fair sample of
the Universe. Clusters of galaxies cover a much larger volume in
space than galaxies, thus cluster samples are presently the best
candidates for a fair sample. However, this conclusion is tentative.
New very deep surveys of galaxies now in progress will specify the
power spectrum on large scales more precisely and yield a better
estimate of the primordial power spectrum. Presently we can say that
the possibility of a broken--scale and peaked initial power spectrum
has to be taken seriously.
\section*{Acknowledgments}
I thank H. Andernach, F. Atrio-Barandela, R. Cen, M. Einasto,
M. Gramann, A. Knebe, V. M\"uller, I. Suhhonenko, A. Starobinsky,
E. Tago and D. Tucker for fruitful collaboration and permission to use
our joint results in this talk. This study was supported by the
Estonian Science Foundation.
\section{Introduction}
\label{sec1}
An early prediction of QCD concerns the existence of a spectrum
of glueballs, i.e. mesonic bound states of two or more constituent
gluons, in addition to the spectrum of $q\overline q$ mesons with
characteristic restrictions in the accessible quantum numbers \cite{HFPM}.
Such glueball states have been
searched for extensively.
The first 20 years of searches have seen some interesting candidates
\cite{Close}, especially in gluon-rich processes
such as radiative $J/\psi$ decays,
but no clear and convincing evidence for the two kinds of
spectroscopy has emerged, despite many efforts \cite{Heusch}.
In the recent years these studies have entered a more optimistic phase.
On one hand, the theoretical predictions from lattice QCD
were claimed to become more accurate. For the purely gluonic theory
these calculations put the lightest glueball
into the mass region around 1600 MeV (for recent reviews, see
\cite{Michael,Teper}). On the other hand, there are new
experimental investigations with high statistical precision
aiming at a better understanding of the spectroscopy in the 1000-2000 MeV
region, especially by the Crystal Barrel Collaboration
\cite{Ams,Amseta,Ams4,Ams5,Ams2,abele} in the analysis
of $p\overline p$ annihilation at rest and by the WA102 Collaboration
\cite{Barb,Barb1}
studying central production in high energy $pp$ collisions.
The analysis of the recent results with inclusion of certain older experimental
data has apparently brought a new consensus supporting the lattice result
with a $J^{PC}=0^{++}$ glueball
around 1600 MeV. In this channel more states are
reported than are
expected for the $q\overline q$ nonet
\cite{Landua}. Prime candidates for the lightest glueball
are the $f_0(1500)$ and the $f_J(1710)$ with spin taken as $J=0$.
As the decay branching ratios of these states do not closely follow
the expectations for a glueball, it is proposed that
these states and also the $f_0(1370)$ represent mixtures of the
glueball with the members of the $J^{PC}=0^{++}$ nonet.
With this mixing scheme various experimental results can be
described \cite{amsclo,aps,clo}.
In this paper we begin with a discussion of the theoretical expectations
in Section 2. In particular it is pointed out that the first
results from unquenched lattice calculations show large effects from
sea quarks with the tendency to decrease the glueball mass with decreasing
quark mass.
The spectral sum rules
require a gluonic contribution at low mass around 1 GeV.
We discuss possible mass patterns for the
scalar $q\overline q$ states
in a model with general QCD potential and
explicit chiral symmetry breaking by quark masses.
A closer look at the real world reveals a surprisingly complex experimental
situation and we implement the data with several
{\em phenomenological hypotheses}:
\begin{description}
\item a) Despite the possibly strong mixing between quark and gluon
states it is possible to classify the ``more'' quark-like states
with $L_{ q \bar{q}} \ = \ 1$ into four nonets.
\item b) The masses of the heaviest isoscalar members of the
respective nonets do not exceed the mass of $f'_{ 2} ( 1525 )$ by
more than $\sim$100 MeV.
\item c) All members of the four nonets in question have been observed,
possibly with incorrect assignment of quantum numbers \cite{PDG} .
\end{description}
In Section 3 we begin our phenomenological analysis and
discuss in some detail the evidence for resonance states in the mass
range below $\sim$1700 MeV. We pay particular attention to the earlier results
from elastic and inelastic $\pi\pi$ scattering which allow in principle the
determination of the amplitude phase.
It is the evidence for the moving phase of the Breit-Wigner amplitude
which is necessary to establish a resonant state. The restriction to the
study of resonance peaks may become misleading, especially if
the state is broad and above ``background''. The present analysis confirms
the amplitude ``circles'' for the $f_0(1500)$ whereas we do not accept
the $f_0(1370)$ listed by the Particle Data Group (PDG) \cite{PDG}
as a genuine resonant state.
In Section 4 we study the additional information provided by the
various couplings in production and decay in order to identify the members
of the $J^{PC}=0^{++}$ nonet. The satisfactory
solution includes the $f_0(980)$ and $f_0(1500)$. There is the broad object
seen in $\pi\pi$ scattering, often called ``background'', which extends from
about 400 MeV up to about 1700 MeV. This object we consider as a single broad
resonance\footnote{we refer to it as ``red dragon''}
which we identify as the lightest glueball with quantum numbers
$J^{PC}=0^{++}$ as will be discussed
in Section 5.
Two further states with $J^{PC}=0^{-+}$ and $J^{PC}=2^{++}$ complete the basic triplet
of binary gluon states (Section 6).
The conclusions are drawn in Section 7,
in particular, we compare our spectroscopic conclusions with the theoretical
expectations.
\section{Theoretical expectations}
\label{sec2}
The purpose of this section is to clarify the possible mass patterns
of the lightest mesons
which are either bound states of quark-antiquark or of gluons.
We assume the dynamics to be reducible to chromodynamics with three
light flavors u, d and s.
\subsection{Properties of low mass glueballs}
\label{glueballs}
\subsubsection{Scenarios for glueball and $q\overline q$
spectroscopy}
We consider first the spectroscopy in the chiral and antichiral limits
\begin{equation}
\label{eq:1}
\begin{array}{lll}
\chi_{\ 3} \ : &\ \lim \ m_{\ u, d , s} &\ = \ 0
\vspace*{0.3cm} \\
\chi_{\ 2} \ : &\ \lim \ m_{\ u, d}& \ = \ 0
\hspace*{0.3cm} ; \hspace*{0.3cm}
m_{\ s} \ > \ 0
\vspace*{0.3cm} \\
\overline{\chi} \ : &\ \lim \ m_{\ u, d , s}& \ \rightarrow \ \infty
\end{array}
\end{equation}
In the antichiral limit $\overline{\chi}$ the
gluon states become separately visible. The quantum numbers of these states,
scenarios for their masses as well as the decay properties
have been discussed in Ref. \cite{HFPM}.
Here we consider the basic triplet of binary glueball states $gb_i$
which can be formed
by two ``constituent gluons'' and correspond to the three invariants
which can be built from
the bilinear expression of gluon fields $F_{a}^{\mu\nu}F^{a\rho\sigma}$
with $J^{PC}$ quantum numbers
\begin{equation}
gb_0(0^{++}),\quad gb_1( 0^{-+})\quad \mbox{and} \quad
gb_2( 2^{++}) \label{triple}
\end{equation}
corresponding to the helicity wave functions of the two ``constituent
gluons''
$|11\rangle+|-1-1\rangle$, $|11\rangle -|-1-1\rangle$ and
$|1-1\rangle$ (or $|-11\rangle$).
Theoretical calculations to be discussed below (bag model, sum rules,
lattice)
suggest the $0^{++}$ state to be the lightest one
\begin{equation}
m_{gb_{0}}\ < \ m_{gb_{1}}\ ,
\ m_{gb_{2}} \label{gbmin}
\end{equation}
\noindent
and these three states dominate
the low energy dynamics. In the antichiral limit $\overline{\chi}$
the mass $m_{gb_{0, \infty}}$ of
the scalar glueball meson
defines the mass gap in the purely gluonic world;
in this limit at least the lightest
scalar and pseudoscalar glueballs are stable.
In the chiral limits $\chi_2$ and $\chi_3$ the $q\overline q$ multiplets
may partly overlap in mass with the glueball states. Of special
interest is the multiplet with the quantum numbers of the vacuum $0^{++}$
as its members have the same quantum numbers as the glueball of lowest
mass. We focus on the two
alternatives for the glueball mass $m_{gb_0}$ and the mass of the lightest
particle $m_{a_0}\sim 980$~MeV of the scalar $q\overline q$ nonet
(see Fig. \ref{gbfig1})
\begin{description}
\item 1) $m_{gb_0 } \lesssim \ m_{ a_{ 0}}$ corresponding to a ``light''
glueball
\item 2) $m_{gb_0 \ } \ \gg \ m_{ a_{ 0}}$ corresponding to a ``heavy''
glueball.
This condition is considered to be met if $m_{gb_0 }$
exceeds $\sim 1500$ MeV.
\end{description}
The first alternative is an extension of scenario A,
the second one of scenario(s) B(C)
discussed in Ref. \cite{HFPM}.
In case 2) the basic triplet of binary glueballs is in the high mass
region. Then their width is expected
to be small according to perturbative
arguments (``gluonic Zweig rule'' \cite{HFPM}). Also in this case the glueball
states may be well separated in mass from the states in the $q\overline q$
nonet of lowest mass.
In case 1) which we favor the width
of $gb_{ 0}$ could be large.
First, the gluonic Zweig rule
cannot be invoked any more as the coupling $\alpha_s$
at low energies may become large.
Secondly, the main decay mode is the pseudoscalar
channel $\pi\pi$ and at higher mass also
$K\overline K$ and $\eta\eta$ and
there is a dynamical argument based on the
overlap of the wave functions between the external pseudoscalar states and the
intermediate gluon states:
The relative angular momentum of the two constituent gluons is
dominated by S waves ($L_{ gg} \ = \ 0$), and so are
the open
pseudoscalar decay channels such as $\pi \pi$.
This alignment of S wave dominance in constituent quanta and
two body decay channels distinguishes
$gb_{ 0}$ from the $0^{++}$ $q \overline{q}$ states which form an intermediate
P-wave. The same is also true for the lowest
$q \overline{q}$ vector mesons where the
intermediate S wave contrasts with the external P waves. We therefore expect
\begin{equation}
\Gamma_{gb_0}\ \gg \ \Gamma_{q\overline q-hadron}. \label{gammaglu}
\end{equation}
Both arguments, the large coupling and the large overlap of internal and
external wave functions
lead to the expectation of a broad $0^{++}$ glueball if it is light.
\subsubsection{Bag models}
A dynamical calculation of hadron masses has been achieved in models where
quarks and massless gluons are confined in spherical bags
of similar size.
If the $gg$ interaction is neglected
one expects \cite{JJ} the lightest glueball states with even parity
to be degenerate
in mass and the same holds for the states with odd parity:
\begin{equation}
\label{glbag}
\begin{array}{ll}
\alpha_s=0:\quad & m_{gb}(0^{++})\ \sim\ m_{gb}(2^{++})\ \sim\ 0.87 \ \mbox{GeV}\\
& m_{gb}(0^{-+})\ \sim\ m_{gb}(2^{-+})\ \sim\ 1.3 \ \mbox{GeV}
\end{array}
\end{equation}
Inclusion of the $gg$
interaction leads to a hyperfine splitting and typically a mass ordering
\cite{Close}
\begin{equation}
\alpha_s\ne 0:\qquad\qquad\ \ \ \
m_{gb_{0}}\ < \ m_{gb_{1}}\ < \ m_{gb_{2}}.
\qquad \label{gbmasses}
\end{equation}
The energy shifts in $O(\alpha_s)$ are calculated \cite{Barnes}
in terms of two parameters, the coupling $\alpha_s$ and the cavity radius
$a$. Reasonable values for these parameters
consistent with $q\overline q$ spectroscopy ($\alpha_s=0.748,
\ a^{-1}=0.218$ GeV) led to an identification of the $0^{-+}$
glueball with $\eta(1440)$ and of $2^{++}$ with $f_J(1720)$ in today's
nomenclature. The mass for the $0^{++}$ glueball was then {\em predicted} as
\begin{equation}
m_{gb_{0}}\ \sim 1\ \mbox{GeV}. \label{m0bag}
\end{equation}
Because of the self-energy of the gluons in the bag this mass can hardly
become smaller. So in this unified treatment of $q\overline q$ and gluonium
spectroscopy the ``light glueball'' scenario 1) is preferred.
\subsubsection{QCD Spectral sum rules}
In a recent application of the sum rule approach \cite{svz}
the basic quadratic gauge boson bilinear operators with quantum numbers
$J^{ P C} \ = 0^{ ++}, \ 0^{ -+}$ and $2^{ ++}$ have been analysed by
Narison \cite{Nar}
together with the lowest quark antiquark operators.
Constraints for the masses of a sequence of states are obtained by
saturating the spectral sum rules. It is interesting to note that
in the $0^{++}$ channel not all sum rules can be satisfied by a single
glueball at a mass of around
1500 MeV -- as suggested by quenched lattice calculations.
Rather, one is forced to include contributions from lower masses
around 1~GeV with a large width. A consistent
solution is found with two states $\sigma_B(1000),\sigma'_B(1370)$
which both have large gluonic couplings. A sum rule fit which includes
a light glueball
with mass around 500 MeV besides a heavier one at 1700 MeV
has been presented by Bagan and Steele \cite{bagan}.
We take these results as a further hint towards the need for a light
glueball, in agreement with our findings.
On the other hand, the
spectrum of the next heavier gluon states differs
from our suggestions. Also our assignment of the scalar $q\overline q$ nonet
is different from the one in \cite{Nar} and not along the OZI rule.
\subsubsection{Results from lattice gauge theory}
A serious tool to assess the spectral location
of glueball states -- in particular in the antichiral limit, where all quark
masses are sent to infinity -- comes from simulations of pure $SU(3)_{ c}$
Yang Mills theories on a lattice \cite{latt,latt2,Morning}. First results
from full QCD including sea quarks became available recently
\cite{lattunq1,lattunq2}.
In the calculations without quarks one finds
the lowest lying scalar glueball
$gb_{ 0}$ in the
mass range 1500 - 1700 MeV
which corresponds to our high mass scenario 2) discussed above.
In the scenario suggested by Weingarten \cite{latt2}
the members of the scalar
$q\overline q$ nonet are taken to be the observed states
listed in Table~\ref{eq:3b}.
The quark composition is assumed to follow the OZI rule.
The actually observed particles with $0^{++}$ quantum numbers
$a_0(980)$ and $f_0(980)$ at lower energies
are considered as ``irrelevant to glueball
spectroscopy'' and not taken as candidates for the scalar nonet.
\begin{table}[ht]
\caption{Assignment of the (bare) scalar $\overline{q} q$ nonet
according to Weingarten \protect\cite{latt2}.}
\[
\begin{array}{lccc}
\hline
\mbox{name} & \overline{q} q &
\mbox{mass}^{ 2} \ \left \lbrack \mbox{GeV}^{ 2} \right \rbrack &
\mbox{mass} \ \left \lbrack \mbox{GeV} \right \rbrack
\\
\hline
f_{ 0} (1390)\qquad
& \frac{1}{\sqrt{2}}
\ \left (
\ \overline{u} u \ + \ \overline{d} d
\ \right )
& 1.932 & 1.390 \\
a_{ 0}^{ 0} (1450) \qquad
& \frac{1}{\sqrt{2}}
\ \left (
\ \overline{u} u \ - \ \overline{d} d
\ \right )
& 2.102 & 1.450 \\
K_{ 0}^{ *} (1430) \qquad
& \overline s q, \ s\overline q
& 2.042 & 1.429 \\
f_{ 0} (1500) \qquad
& \overline{s} s
& 2.265 & 1.505
\\ \hline
\end{array} \]
\label{eq:3b}
\end{table}
In a variant of this phenomenological scheme \cite{amsclo,Teper,latt2}
one includes the $f_J(1720)$ with spin assignment $J=0$ and assumes
the three observed $0^{++}$ states to be a superposition of the bare
glueball and the two bare isoscalar $q\overline q$ states.
In this mixing scheme one
can take into account the observed small $K\overline K$
branching ratio of the $f_0(1500)$.
First calculations in unquenched QCD
including two flavors of quark have been carried out by Bali et al.
\cite{lattunq1,lattunq2}. Results from a $16^3\times 32$ lattice
with an inverse lattice spacing
of $2 - 2.3 \ \mbox{GeV}$ show a definite dependence
of the results on the quark mass
and correspondingly on the pion mass. If the pion mass is lowered from
about 1000 to 700 MeV the $0^{++}$
glueball mass decreases from 1400 to about 1200 MeV (using the
data in \cite{lattunq2}). The quark masses are still quite large, but
in any case the
glueball mass becomes smaller than in the quenched
approximation without quarks
(see Table \ref{eq:43}). On the other hand, the calculations \cite{lattunq2}
also indicate a significant dependence on the volume. For a $24^3\times 40$
lattice the glueball mass goes up again to the larger number
of the quenched calculation.
\begin{table}[ht]
\caption{Glueball masses with statistical and systematic errors in the quenched
lattice approximation, where the first two determinations are based on the
data in \protect\cite{latt}
(upper part), and with inclusion of sea quarks
for different spatial lattice sizes $L_S$ (lower part).}
\[
\begin{array}{lccc}
\hline
\mbox{author} & m_{ gb_{ 0}} \ \left \lbrack \mbox{MeV}
\right \rbrack \quad
& n_{ fl} \ & \ m_{ \pi}
\ \left \lbrack \mbox{MeV} \right \rbrack
\\ \hline
\mbox{Teper}\cite{Teper} & 1610 \pm 70\pm 130 & 0 & \\
\mbox{Weingarten} \cite{latt2} & 1707 \pm \ 64 & 0& \\
\mbox{Morningstar et al.} \cite{Morning} & 1630 \pm 60 \pm 80 &0& \\
\hline
\mbox{Bali et al.} \cite{lattunq1,lattunq2} & \sim 1200 \ (L_S=16) &
2 & 700 - 1000 \\
\mbox{Bali et al.} \cite{lattunq2} & \sim 1700 \ (L_S=24) & 2 & \\ \hline
\end{array} \]
\label{eq:43}
\end{table}
\noindent
We conclude from the mass values in Table \ref{eq:43} that the
quenched calculation
supports the ``high mass region'' for the lightest glueball,
i.e. our alternative 2), while the results with sea quarks do not exclude the
opposite,
i.e. $gb_{ 0}$ placed in the ``low mass region'', in view of
the large values of the quark masses in the calculation
and the observed decrease of the glueball
mass with the quark mass.
It is also of great interest to compute
the mass of the lightest scalar state $a_0(0^{++})$ in lattice QCD.
First results have been
obtained recently in quenched approximation with non-perturbatively
$O(a)$ improved Wilson fermions \cite{Gockel}
for two values of the coupling $\beta$,
see Table \ref{scalar}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{lccc}
\hline
$ \beta$ & $a^2 K$ & $ R=aM_{a_0}/(a\sqrt{K})$ & $ R\sqrt{K}$
[GeV]\\ \hline
6.0 & 0.048 & 3.72 $\pm$ 0.15 & 1.59 $\pm$ 0.06 \\
6.2 & 0.026 & 2.86 $\pm$ 0.13 & 1.22 $\pm$ 0.05 \\
\hline
\end{tabular}
\end{center}
\caption{The ratio $R$ of the
$a_0$ mass and the square root of the string tension $K$
as a function of the square of the lattice
spacing $a$ in units of the string tension.
The last column shows the ratio $R$ multiplied by the physical value
$\sqrt{K} = 0.427$ GeV \protect\cite{Gockel}.}
\label{scalar}
\end{table}
The ratio $R\sqrt{K}$ in the last column of Table \ref{scalar}
extrapolates to the
physical mass in the continuum limit
$a^2 \to 0$. As can be seen, these
mass values decrease as this limit is approached, the lowest value
being $M_{a_0} \sim 1.2$~GeV. A reliable extrapolation from two data points
cannot be expected; if one extrapolates nevertheless, one finds
$M_{a_0} \sim 0.8$ GeV.
These results seem to be consistent with
the mass 0.98 GeV for the lightest scalar meson, but the heavier mass 1.45 GeV
as suggested for this state by Weingarten
cannot be excluded on the basis of only two
measurements.\footnote{We thank D. Pleiter and S. Aoki for helpful
discussions of these results.}
With improved accuracy such calculations could provide an interesting hint
towards the classification of the $a_0(980)$ state as the lowest mass scalar
meson.
The results reported here indicate that our hypothesis of a light glueball
with mass around 1 GeV accompanied by a scalar nonet with particles
around 1 GeV
is not necessarily in contradiction with lattice QCD results.
We also wish to point out that the parametrization of the $0^{++}$ spectrum
in terms of one resonance only may not be appropriate; this was found in
the case of QCD sum rules and may be true in particular
if the lightest state is very broad.
\subsection{The scalar nonet
and effective $\Sigma$ variables}
\label{Sigma}
An important precondition for the assignment of glueball states
is the understanding of the low mass $q\overline q$ spectroscopy.
As the lightest glueball is expected with $J^{PC}=0^{++}$ quantum numbers
we focus here on the expectations for the lightest scalar $q\overline q$
nonet. The lightest particles with these quantum numbers
are $a_0(980)$ and $f_0(980)$,
approximately degenerate in mass. Some authors consider one or both of these
states as $K\overline K$ molecules \cite{molecule} and take $a_0(1450)$
and $f_0(1370)$ (or a broad $f_0(1000)$) as members of the scalar nonet.
The next (uncontroversial) candidate for the nonet is $K^*_0(1430)$;
the last member of the nonet is a heavier isoscalar state $f_{0>}$,
possibly $f_0(1500)$ or $f_J(1720)$,
which can mix with the lighter $f_0(980)$.
An attractive theoretical approach to the scalar and pseudoscalar mesons is
based on the ``linear sigma models''
which realize the spontaneous chiral symmetry breakdown (for reviews, see
\cite{ls1}). The requirement of renormalizability provides a considerable
restriction on the functional form of the effective potential compared to
what would be generally allowed.
In a recent application
T\"{o}rnqvist \cite{Torn} considered a renormalizable Lagrangian
for the scalar and pseudoscalar sector.
In his solution for the scalar nonet the
OZI rule holds exactly for the bare states with a broad isoscalar
non-strange ``$\sigma$''
resonance below 1~GeV and the $f_0(980)$ as the lowest
$s\overline s$ state. The resulting mass spectrum, however,
is considerably modified by unitarisation effects.
In an alternative approach \cite{dmitra,klempt,burakovsky}
one starts from a 3-flavor Nambu-Jona-Lasinio
model but includes an effective action for the sigma fields
with an instanton induced axial $U(1)$ symmetry-breaking
determinant term (proportional to $I_3$ in Eq. (\ref{eq:6}) below),
following the suggestion by 't Hooft \cite{thooft}, which keeps the Lagrangian
renormalizable. This corresponds again to a linear sigma
model but now the scalars are close to the singlet and octet states, and
they do not
split according to the OZI rule; the sign of the mass splitting in the
scalar and pseudoscalar sectors is reversed. This suggests $f_0(1500)$
to be near the octet state whereas different options are pursued
for the lighter isoscalar $f_0$ and the isovector $a_0$ by the authors
\cite{dmitra,klempt,burakovsky}.
In our approach we do not follow the $K\overline K$
molecule hypothesis for the $f_0(980)$ and the $a_0(980)$ (see also
the remarks in the next section)
but take them as genuine members of the $q\overline q$ scalar nonet.
In the rest of this section we discuss
what can be derived about the mass of $f_{0>}$ and the mixing pattern
in the scalar nonet from the most general effective QCD potential
for the $\Sigma$-variables pertaining to the scalar and pseudoscalar
mesons, whereby we do not restrict the analysis to renormalizable
interaction terms. In this way we explore the consequences of chiral
symmetry in the different limits in (\ref{eq:1}) in a general QCD framework.
Thereafter we turn to the phenomenological analysis of data where we
try to minimize the theoretical preconditions as suggested
by the present section.
\subsubsection{$\Sigma$ variables and chiral invariants}
We assume that the glueball states do not
affect in an essential way the remaining effective degrees of freedom
at low energy.
Then all degrees of freedom can be integrated out.
The variables are those of a linear sigma model
\begin{equation}
\label{eq:3}
\begin{array}{c}
\Sigma_{\ st} \ = \ ( \ \sigma_{\ st} \ - \ i \ p_{\ st} \ )
\vspace*{0.3cm} \\
\sum_{\ c} \ \overline{q}_{\ s}^{\ c} \ q_{\ t}^{\ c} \ \leftrightarrow
\ \sigma_{\ st}
\hspace*{0.3cm} ; \hspace*{0.3cm}
\sum_{\ c} \ \overline{q}_{\ s}^{\ c} \ i \ \gamma_{\ 5} \ q_{\ t}^{\ c}
\ \leftrightarrow \ p_{\ st}
\end{array}
\end{equation}
where the indices $s,t$ refer to the flavors $u,d,s$.
We do not require interactions to be renormalizable,
rather we study the general effective action of QCD restricted
to the sigma variables \cite{PM}.
The resulting mass spectra and mixings are then less restricted
than in the renormalizable Lagrangian models:
for example, OZI splitting is possible but not particularly favored.
In Eq. (\ref{eq:3}) we chose
the normalization of the complex (nonhermitian) field
variables $\Sigma_{\ st}$ such that in the chiral limit
$\chi_{\ 3} \ ( \lim \ m_{\ u, d , s} \ = \ 0$ ) the vacuum expectation value
corresponds to the (real) unit matrix:
\begin{equation}
\label{eq:4}
\begin{array}{l}
\chi_{\ 3}
\hspace*{0.3cm} : \hspace*{0.3cm}
\ \left \langle \ \Omega \ \right | \ \Sigma_{\ st} \ ( \ x \ )
\ \left | \ \Omega \ \right \rangle \ \rightarrow \ \delta_{\ st}.
\end{array}
\end{equation}
\noindent
We propose to discuss the {\em general} form of the effective
potential, more precisely its real part -- restricted only to the first order
approximation with respect to the strange quark mass term in the two flavor
chiral limit $\chi_{\ 2}$ ($ \lim \ m_{\ u, d} \ = \ 0$)
\begin{equation}
\label{eq:5}
\begin{array}{l}
\chi_{\ 2}
\hspace*{0.3cm} : \hspace*{0.3cm}
V \ ( \ \Sigma \ ) \ \rightarrow \ V_{\ 0} \ - \ \mu_{\ s} \ \re \ \Sigma_{\ 33}
\hspace*{0.3cm} ; \hspace*{0.3cm} \mu_{\ s} \ \propto \ m_{\ s}.
\end{array}
\end{equation}
\noindent
The quark mass parameter $\mu_{\ s}$ in Eq. (\ref{eq:5}) is to be expressed
in appropriate units ($\mbox{mass}^{4}$). $V_{\ 0}$ refers to the
chiral limit $\chi_{\ 3}$;
it depends in an a priori arbitrary way on four base variables
for which we can choose
\begin{equation}
\label{eq:6}
\begin{array}{l}
I_{\ 1} \ = \ \tr \ \Sigma \ \Sigma^{\ \dagger}
\ - \ \tr \ {\bf 1}
\hspace*{0.3cm} ; \hspace*{0.3cm}
I_{\ 2} \ = \ \tr \ ( \ \Sigma \ \Sigma^{\ \dagger} \ )^{\ 2}
\ - \ \tr \ {\bf 1}
\vspace*{0.3cm} \\
I_{\ 3} \ = \ \re \ \dete \ \Sigma \ - 1
\hspace*{0.3cm} ; \hspace*{0.3cm}
I_{\ 4} \ = \ \im \ \dete \ \Sigma
\end{array}
\end{equation}
\noindent
If we introduce
the shifted variables
\begin{equation}
\Sigma \ = \ {\bf 1} \ + \ Z
\hspace*{0.3cm} ; \hspace*{0.3cm}
Z \ = \ s \ - \ i \ p
\label{shift}
\end{equation}
we can express the four invariants defined in Eq. (\ref{eq:6}) as
\begin{equation}
\label{eq:6a}
\begin{array}{l}
I_{\ 1} \ = \ 2 \ \tr \ s \ + \ \ \ \tr \ s^{\ 2} \ + \ \ \ \tr \ p^{\ 2}
\vspace*{0.3cm} \\
I_{\ 2} \ = \ 4 \ \tr \ s \ + \ 6 \ \tr \ s^{\ 2} \ + 2 \ \tr \ p^{\ 2}
\ + \ 4 \ \tr \ s^{\ 3} \ + \ 4 \ \tr \ s \ p^{\ 2}
\vspace*{0.3cm} \\
\hspace*{7.5cm} \ + \ \tr \ ( \ Z \ Z^{\ \dagger} \ )^{\ 2}
\vspace*{0.3cm} \\
I_{\ 3} \ = \ \ \ \tr \ s \ + \ \frac{1}{2}
\ \left (
\begin{array}{l}
( \ \tr \ s \ )^{\ 2} \ - \ \tr \ s^{\ 2}
\ - \ ( \ \tr \ p \ )^{\ 2} \ + \ \tr \ p^{\ 2}
\end{array}
\ \right )
\vspace*{0.3cm} \\
\hspace*{2.2cm} \ + \ \re
\ \frac{1}{6}
\ \left (
\ ( \ \tr \ Z \ )^{\ 3} \ - \ 3 \ \tr \ Z \ \tr \ Z^{\ 2}
\ + \ 2 \ \tr \ Z^{\ 3}
\ \right )
\vspace*{0.3cm} \\
I_{\ 4} \ = - \ \tr \ p \ -
\ \left (
\ \tr \ s \ \tr \ p \ - \ \tr \ s \ p
\ \right )
\vspace*{0.3cm} \\
\hspace*{2.2cm} \ + \ \im
\ \frac{1}{6}
\ \left (
\ ( \ \tr \ Z \ )^{\ 3} \ - \ 3 \ \tr \ Z \ \tr \ Z^{\ 2}
\ + \ 2 \ \tr \ Z^{\ 3}
\ \right )
\end{array}
\end{equation}
\noindent
There is no loss of generality -- {\em concentrating on scalar mass terms only} --
in restricting $\Sigma$ to the hermitian matrix $s$, whereby the four variables
in Eq. (\ref{eq:6}) reduce to three:
\begin{equation}
\label{eq:7}
\begin{array}{l}
I_{\ 1} \rightarrow \ 2 \ \tr \ s \ + \ \ \ \tr \ s^{\ 2}
\vspace*{0.3cm} \\
I_{\ 2} \ \rightarrow \ 4 \ \tr \ s \ + \ 6 \ \tr \ s^{\ 2}
\ + \ 4 \ \tr \ s^{\ 3}
\ + \ \tr \ s^{\ 4}
\vspace*{0.3cm} \\
I_{\ 3} \ \rightarrow \ \ \ \tr \ s \ + \ \frac{1}{2}
\ \left (
\ ( \ \tr \ s \ )^{\ 2} \ - \ \tr \ s^{\ 2}
\ \right )
\vspace*{0.3cm} \\
\hspace*{2.2cm} \ +
\ \frac{1}{6}
\ \left (
\ ( \ \tr \ s \ )^{\ 3} \ - \ 3 \ \tr \ s \ \tr \ s^{\ 2}
\ + \ 2 \ \tr \ s^{\ 3}
\ \right ).
\end{array}
\end{equation}
\subsubsection{Scalar mass terms to order $\mu_{\ s}$}
To the precision required we need the expansion of $V_{\ 0} \ ( \ s \ )$
to third order in the matrix variable $s$.
To third order in $s$ the three base variables
in Eq. (\ref{eq:7}) can be replaced by the simple power basis
$\ \tr \ s$ , $\ \tr \ s^{\ 2}$ , $\ \tr \ s^{\ 3}$.
\noindent
As a consequence $V_{\ 0} \ ( \ s \ ) \ $ is of the form
\begin{equation}
\label{eq:9}
\begin{array}{l}
V_{\ 0} \ =
\begin{array}[t]{l}
\frac{1}{2}
\ \left (
\ A \ \tr \ s^{\ 2} \ + \ B \ ( \ \tr \ s \ )^{\ 2}
\ \right )
\ +
\vspace*{0.3cm} \\
\frac{1}{3}
\ C \ \tr \ s^{\ 3} \ +
\ \frac{1}{2}
\ D \ ( \ \tr \ s \ ) \ \tr \ s^{\ 2} \ +
\ \frac{1}{3}
\ E \ ( \ \tr \ s \ )^{\ 3} \ + \ O \ ( \ s^{\ 4} \ ).
\end{array}
\end{array}
\end{equation}
\noindent
We shall neglect the terms of order $s^{ 4}$ in the following.
To first order in the strange quark mass the vacuum expectation values
are shifted from their values in Eq. (\ref{eq:4}) according to Eq. (\ref{eq:5})
\begin{equation}
\label{eq:10}
\begin{array}{l}
\left \langle \ \Omega \ \right | \ \Sigma
\ \left | \ \Omega \ \right \rangle
\ =
\ {\bf 1} \ +
\ \left \langle \ s \ \right \rangle
\hspace*{0.3cm} ; \hspace*{0.3cm}
s \ = \ \left \langle \ s \ \right \rangle \ + \ x
\vspace*{0.3cm} \\
A \ \left \langle \ s \ \right \rangle
\ +
\ B \ \tr \ \left \langle \ s \ \right \rangle \ {\bf 1}
\ =
\ \mu_{\ s} \ P_{\ 3}
\hspace*{0.3cm} ; \hspace*{0.3cm}
P_{\ 3} \ =
\ \left (
\ \begin{array}{lll}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 1
\end{array}
\ \right )
\vspace*{0.4cm} \\
\left \langle \ s \ \right \rangle \ =
\ \displaystyle{
\frac{\mu_{ s}}{A}
\ \left (
\ P_{ 3} \ -
\frac{B}{A+3B}
\ {\bf 1}
\ \right ) }
\end{array}
\end{equation}
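One can verify the stated solution directly: taking the trace of the stationarity condition in Eq. (\ref{eq:10}) gives $( A + 3 B ) \, \tr \left\langle s \right\rangle = \mu_{ s}$, and inserting $\left\langle s \right\rangle$ back then yields

```latex
\begin{displaymath}
A \left\langle \ s \ \right\rangle
 \ + \ B \ \tr \left\langle \ s \ \right\rangle \ {\bf 1}
 \ = \ \mu_{ s} \left( P_{ 3} \ - \ \frac{B}{A+3B} \ {\bf 1} \right)
 \ + \ \frac{\mu_{ s} \, B}{A+3B} \ {\bf 1}
 \ = \ \mu_{ s} \ P_{ 3} .
\end{displaymath}
```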
\noindent
Thus the part of $V$ quadratic in $x$, to first order
in the strange quark mass, is of the form
\begin{equation}
\label{eq:11}
\begin{array}{l}
V^{\ (2)} \ =
\ \begin{array}[t]{l}
\frac{1}{2}
\ \left (
\ A \ \tr \ x^{\ 2} \ + \ B \ ( \ \tr \ x \ )^{\ 2}
\ \right ) \ +
\vspace*{0.3cm} \\
C \ \tr \ \left \langle \ s \ \right \rangle \ x^{\ 2} \ +
\ D \ ( \ \tr \ x \ ) \ \tr \ \left \langle \ s \ \right \rangle \ x \ +
\vspace*{0.3cm} \\
\frac{1}{2}
\ D \ ( \ \tr \ \left \langle \ s \ \right \rangle \ )
\ \tr \ x^{\ 2} \ +
\ E \ ( \ \tr \ \left \langle \ s \ \right \rangle \ )
\ ( \ \tr \ x \ )^{\ 2}.
\end{array}
\end{array}
\end{equation}
\noindent
The first two terms composing $V^{\ (2)}$ in Eq. (\ref{eq:11}) describe
singlet and octet masses (squares)
$m_{\ (1)}$ and $m_{\ (8)}$ in the u-d-s chiral limit $\chi_{3}$ ,
whereas the remaining terms contain the further mass splittings to
first order in the strange quark mass.
Introducing the quantities
\begin{equation}
\label{eq:12}
\begin{array}{rlrl}
m_{\ (1)}^{\ 2} & = \ A \ + \ 3 \ B
\hspace*{0.3cm} ; &
m_{\ (8)}^{\ 2} & = \ A \vspace*{0.2cm} \\
R & = \ {\displaystyle
\frac{ m_{\ (8)}^{\ 2}}
{m_{\ (1)}^{\ 2}}\ ; \ \qquad } &
( \ c \ , \ d \ , \ e \ ) & = {\displaystyle
\ \frac{\mu_{s}}{A}
\ ( \ C \ , \ D \ , \ E \ ) }\\
\end{array}
\end{equation}
\noindent
the expression for $V^{\ (2)}$ in Eq. (\ref{eq:11}) becomes
\begin{equation}
\label{eq:13}
\begin{array}{l}
V^{\ (2)} \ =
\ \begin{array}[t]{l}
\left \lbrack
\ A \ + \ \frac{2}{3} \ c \ ( \ R \ - \ 1 \ )
\ + \ d \ R
\ \right \rbrack
\ \frac{1}{2} \ \tr \ x^{\ 2} \ +
\vspace*{0.3cm} \\
\left \lbrack
\ B \ + \ \frac{2}{3} \ d \ ( \ R \ - \ 1 \ ) \ +
\ 2 \ e \ R
\ \right \rbrack
\ \frac{1}{2} \ ( \ \tr \ x \ )^{\ 2} \ +
\vspace*{0.3cm} \\
c \ \tr \ P_{\ 3} \ x^{\ 2} \ +
\ d \ ( \ \tr \ x \ ) \ \tr \ P_{\ 3} \ x.
\end{array}
\end{array}
\end{equation}
\noindent
{\em Remark on the (semi)classical interpretation of} $V^{\ (2)}$\\
\vspace*{0.1cm}
\noindent
We assume here and in the following that the (semi)classical interpretation
of $V^{\ (2)} \ ( \ x \ )$ as a quadratic function of the shifted
field variables $x$ actually describes the real part of the mass (square) term
pertaining to scalar mesons and can be extended to pseudoscalar mesons,
while we neglect the specific $m_{\ s}$ dependence of the kinetic energy term,
which within the same (semi)classical interpretation is
in general
simplified to remain unperturbed, i.e. of the form
\begin{equation}
\label{eq:25}
\begin{array}{l}
{\cal{L}}_{\ kin} \ = \ \frac{1}{4} \ f_{\ \pi}^{\ 2}
\ \tr
\ \left (
\ \partial^{\ \varrho} \ \Sigma^{\ \dagger} \ \partial_{\ \varrho} \ \Sigma
\ \right ).
\end{array}
\end{equation}
\noindent
In Eq. (\ref{eq:25}) $f_{\ \pi} \ \sim \ 93 \ \mbox{MeV}$ denotes the
pseudoscalar decay constant in the three flavor chiral limit.
The simplified form of the kinetic energy term in Eq. (\ref{eq:25}) can always
be achieved after a nonlinear transformation of the $\Sigma$ variables.
The corresponding chiral (Noether) currents are then proportional to
the respective quark bilinear currents only modulo explicitly $m_{\ s}$-dependent
factors, as is visible in the ratio of the physical
pion and kaon decay constants, which is far from the flavor symmetric limit 1.
\subsubsection{Mass square patterns for the scalar nonet}
It follows from the structure of the mass terms in Eq. (\ref{eq:13})
that the (nearly perfect) degeneracy of the $f_{0} \ (980)$ and
$a_{0} \ (980)$ isosinglet and isotriplet levels can only be realized
independently of $m_{\ s}$ if the constant $B$ prevailing in the $\chi_{\ 3}$
limit vanishes. We thus adopt $B \ = \ 0$ in the following, which
implies that the entire scalar nonet is degenerate in mass in the
chiral limit $\chi_{\ 3}$.
Thus the expression for $V^{\ (2)}$ in Eq. (\ref{eq:13}) becomes
\begin{equation}
\label{eq:14}
\begin{array}{l}
V^{\ (2)} \ =
\ \begin{array}[t]{l}
\left \lbrack
\ A \ + \ d
\ \right \rbrack
\ \frac{1}{2} \ \tr \ x^{\ 2} \ +
\ e \ ( \ \tr \ x \ )^{\ 2} \ +
\vspace*{0.3cm} \\
c \ \tr \ P_{\ 3} \ x^{\ 2} \ +
\ d \ ( \ \tr \ x \ ) \ \tr \ P_{\ 3} \ x.
\end{array}
\end{array}
\end{equation}
\noindent
The first term on the right hand side of Eq. (\ref{eq:14})
yields a common mass square to the entire nonet. Hence, if we
consider all mass squares relative to $m^{\ 2} \ ( \ a_{0} \ )$
all contributions are contained in the last three terms
composing $V^{\ (2)}$ , which we denote by
$\Delta \ m^{\ 2} \ = \ m^{\ 2} \ - \ m^{\ 2} \ ( \ a_{0} \ )$
\begin{equation}
\label{eq:15}
\begin{array}{l}
\Delta \ m^{\ 2} \ =
c \ \tr \ P_{\ 3} \ x^{\ 2} \ +
\ d \ ( \ \tr \ x \ ) \ \tr \ P_{\ 3} \ x \ +
\ e \ ( \ \tr \ x \ )^{\ 2}
\vspace*{0.3cm} \\
\tr \ P_{\ 3} \ x^{\ 2} \ =
\ \overline{K} \ K
\ + \ \frac{2}{3} \ S_{\ (8)}^{\ 2} \ -
\ 2 \ \frac{\sqrt{2}}{3}
\ S_{\ (8)} \ S_{\ (1)}
\ + \ \frac{1}{3} \ S_{\ (1)}^{\ 2}
\vspace*{0.3cm} \\
( \ \tr \ x \ ) \ \tr \ P_{\ 3} \ x \ =
- \ 2 \ \frac{1}{\sqrt{2}} \ S_{\ (8)} \ S_{\ (1)}
\ + \ S_{\ (1)}^{\ 2}
\vspace*{0.3cm} \\
( \ \tr \ x \ )^{\ 2} \ =
\ 3 \ S_{\ (1)}^{\ 2}.
\end{array}
\end{equation}
\noindent
In Eq. (\ref{eq:15}) $S_{\ (1)}$ and $S_{\ (8)}$ denote the (hermitian)
singlet and octet component fields within the scalar nonet, respectively.
Furthermore
\begin{equation}
\label{eq:15a}
\begin{array}{l}
x_{\ 3 3} \ = \ \frac{1}{\sqrt{3}} \ S_{\ (1)}
\ - \ \frac{2}{\sqrt{6}} \ S_{\ (8)}.
\end{array}
\end{equation}
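With this decomposition the isoscalar part of $\tr \, P_{ 3} \, x^{ 2}$ in Eq. (\ref{eq:15}) is simply $x_{ 3 3}^{\, 2}$:

```latex
\begin{displaymath}
x_{ 3 3}^{\, 2} \ = \ \left( \frac{1}{\sqrt{3}} \ S_{ (1)}
 \ - \ \frac{2}{\sqrt{6}} \ S_{ (8)} \right)^{ 2}
 \ = \ \frac{1}{3} \ S_{ (1)}^{\, 2} \ + \ \frac{2}{3} \ S_{ (8)}^{\, 2}
 \ - \ 2 \ \frac{\sqrt{2}}{3} \ S_{ (8)} \ S_{ (1)} ,
\end{displaymath}
```

while the off-diagonal elements $x_{ 3 1} x_{ 1 3} + x_{ 3 2} x_{ 2 3}$ provide the $\overline{K} K$ term.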
\noindent
From the structure of $\Delta \ m^{\ 2}$ in Eq. (\ref{eq:15})
we obtain the mass square shift of the strange states $K^*_0 , \ \overline{K^*_0}$
\begin{equation}
\label{eq:16a}
\begin{array}{l}
\Delta \ m^{\ 2} \ ( \ K \ ) \ = \ c
\end{array}
\end{equation}
\noindent
as well as the
mass and mixing pattern involving the two isosinglets
$S_{\ (1)}$ and $S_{\ (8)}$. We introduce the octet-singlet
mixing matrix $\Delta \ m^{\ 2}_{\ 8-1} \ \equiv \ \Delta \ M^{\ 2}$,
which generates the quadratic form in $S_{\ (8)} \ , \ S_{\ (1)}$
in Eq. (\ref{eq:15})
\begin{equation}
\label{eq:16}
\begin{array}{l}
\Delta \ M^{\ 2} \ =
\ \left (
\ \begin{array}{lr}
\frac{4}{3} \ c &
- \ \sqrt{\ 2 \ }
\ \left ( \ \frac{2}{3} \ c \ + \ d \ \right )
\vspace*{0.3cm} \\
- \ \sqrt{\ 2 \ }
\ \left ( \ \frac{2}{3} \ c \ + \ d \ \right ) &
\frac{2}{3} \ c \ + \ 2 \ d \ + \ 6 \ e
\end{array}
\ \right )
\end{array}
\end{equation}
\noindent
which we can transform into
\begin{equation}
\label{eq:17}
\begin{array}{l}
\Delta \ M^{\ 2} \, = \, \frac{4}{3} \ c
\ \left (
\ \begin{array}{lr}
1 &
- \ \frac{1}{\sqrt{2}} \ ( \ 1 \ + \ \delta \ )
\vspace*{0.3cm} \\
- \ \frac{1}{\sqrt{2}} \ ( \ 1 \ + \ \delta \ ) \ &
\ \frac{1}{2} \ \left (
\ ( \ 1 \ + \ \delta \ )^{\ 2}
\ + \ \varepsilon \ - \ \delta^{\ 2}
\ \right )
\end{array}
\ \right )
\vspace*{0.3cm} \\
\delta \ = \
\frac{3}{2} \frac{d}{c}
\ ; \quad
\varepsilon \ = \
\frac{9\ e}{ c}
\ ; \quad
\dete \ \Delta \ M^{\ 2} \ = \ \frac{8}{9} \ c^{\ 2}
\ ( \ \varepsilon \ - \ \delta^{\ 2} \ ).
\end{array}
\end{equation}
\noindent
The mass square differences of the lighter and heavier
isoscalar $f_{0 \ <}$ , $f_{0 \ >}$ are obtained as eigenvalues
of $\Delta \ M^{\ 2}$.
We note that the observed (approximate) degeneracy
of $f_{0} \ (980)$ and $a_{0} \ (980)$, i.e.
$\Delta \ m \ ( \ f_{0 \ <} \ ) \ \sim \ 0$,
corresponds through first order in $m_{\ s}$ to the vanishing
determinant of $\Delta \ M^{\ 2}$
\begin{equation}
\label{eq:18}
\begin{array}{l}
\dete \ \Delta \ M^{\ 2} \ = \ 0
\hspace*{0.3cm} \leftrightarrow \hspace*{0.3cm}
\varepsilon \ = \ \delta^{\ 2}
\vspace*{0.3cm} \\
\Delta \ M^{\ 2} \ = \ \frac{4}{3} \ c
\ \left (
\ \begin{array}{ll}
1 \ & \ k
\vspace*{0.3cm} \\
k \ & \ k^{\ 2}
\end{array}
\ \right )
\hspace*{0.3cm} ; \hspace*{0.3cm}
k \ = \ - \ \frac{1}{\sqrt{2}} \ ( \ 1 \ + \ \delta \ ).
\end{array}
\end{equation}
\noindent
Introducing the mixing angle $\Theta$ by
\begin{equation}
\label{eq:19a}
\begin{array}{rl}
f_{0 \ >} & =
\ \cos \ \Theta \ S_{ (8)} \ +
\ \sin \ \Theta \ S_{ (1)} \\
f_{0 \ <} & = \
- \ \sin \ \Theta \ S_{ (8)} \ +
\ \cos \ \Theta \ S_{ (1)}
\end{array}
\end{equation}
\noindent
we find the mass square and mixing pattern due to $\Delta \ M^{\ 2}$
in Eq. (\ref{eq:18}), with $k=\tan \Theta$, to be given by
\begin{equation}
\label{eq:19}
\begin{array}{l}
\Delta \ m^{\ 2} \ ( \ f_{0 \ >} \ ) \ = {\displaystyle
\ \frac{4}{3} \ c
\ \frac{1}{
\cos^{ 2} \ \Theta } \ ;}
\hspace*{0.3cm}
\Delta \ m^{\ 2} \ ( \ f_{0 \ <} \ ) \ =
\ 0.
\end{array}
\end{equation}
\noindent
Note that in the present approximation there is the inequality
\begin{equation}
\Delta \ m^{\ 2} \ ( \ f_{0 \ >} \ ) \ > \ \ \frac{4}{3}
\ \Delta \ m^{\ 2} \ ( \ K \ ).
\label{ineqnonet}
\end{equation}
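The inequality follows because the nonzero eigenvalue of the degenerate matrix in Eq. (\ref{eq:18}) equals its trace:

```latex
\begin{displaymath}
\Delta \ m^{\ 2} \ ( \ f_{0 \ >} \ ) \ = \ \frac{4}{3} \ c
 \ \left( \ 1 \ + \ k^{\ 2} \ \right)
 \ = \ \frac{4}{3} \ \frac{c}{\cos^{ 2} \Theta}
 \ \ge \ \frac{4}{3} \ c \ = \ \frac{4}{3} \ \Delta \ m^{\ 2} \ ( \ K \ ) ,
\end{displaymath}
```

using $k = \tan \Theta$ and Eq. (\ref{eq:16a}), with equality only for vanishing mixing, $\Theta = 0$.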
Finally, we consider
two limiting patterns for the mass square of scalar mesons :
\begin{description}
\item I) No or small singlet octet mixing.
\noindent {\it a) No mixing}: \\
This assignment corresponds to
\begin{equation}
\label{eq:21a}
\begin{array}{l}
k \ = \ 0
\hspace*{0.3cm} ; \hspace*{0.3cm}
\delta \ = \ - \ 1
\hspace*{0.3cm} ; \hspace*{0.3cm}
d \ = \ - \ \frac{2}{3} \ c.
\end{array}
\end{equation}
In the following discussion we
use as unit of mass square the $K^*_0 - a_0$ splitting constant $c$
in (\ref{eq:16a})
and denote the common nonet mass in the $\chi_{\ 3}$ limit by
$m_{\ (9)}$.
Relative to $m_{\ (9)}^{\ 2}$ the four degenerate
states $f_{0 \ <} \ , \ a_{0}$ are lower in mass square by
$\frac{2}{3}$ units, the $K^*_0 , \ \overline{K^*_0}$ states are
higher by $\frac{1}{3}$ unit, whereas $f_{0 \ >}$ is raised
by $\frac{2}{3}$ units. To first order in $m_{ s}$
the Gell-Mann-Okubo mass square formula is valid within the octet
\begin{equation}
\label{eq:21b}
3 \ \Delta \ m^{\ 2} \ ( \ f_{0 \ >} \ )
\ = \
4 \ \Delta \ m^{\ 2} \ ( \ K^*_0 \ )
\end{equation}
and yields a prediction for the mass of the heavier isoscalar
\begin{equation}
m \ ( \ f_{0 \ >} \ ) \ \sim \ 1550 \ \mbox{MeV}.
\label{f0hms}
\end{equation}
This mass pattern is also displayed in
Fig. \ref{gbfig2} (Ia) together with the one for the pseudoscalars
for comparison. According to (\ref{ineqnonet}) the mass value (\ref{f0hms})
is the lower limit for $m(f_{0>})$ under the condition $m(a_0)=m(f_{0<})$.
\noindent
{\it b) Small mixing as in the pseudoscalar nonet}\\
\noindent
This mixing pattern is suggested by our phenomenological
analysis
in the following sections and corresponds to
\begin{equation}
\label{eq:21}
\begin{array}{l}
k \ = \ \frac{1}{\sqrt{8}}
\hspace*{0.3cm} ; \hspace*{0.3cm}
\Theta \ = \ \arcsin \ \frac{1}{3}
\ \sim \ 19.5^{\rm o}
\vspace*{0.3cm} \\
\Delta \ m^{\ 2} \ ( \ f_{0 \ >} \ ) \ =
\ \frac{3}{2} \ \Delta \ m^{\ 2} \ ( \ K^*_0 \ )
\hspace*{0.3cm} \rightarrow \hspace*{0.3cm}
m \ ( \ f_{0 \ >} \ ) \ \sim \ 1600 \ \mbox{MeV}.
\end{array}
\end{equation}
\noindent
Relative to $m_{\ (9)}^{\ 2}$ the four degenerate
states $f_{0 \ <} \ , \ a_{0}$ are now lower in mass square by
one unit, the $K^*_0 , \ \overline{K^*_0}$ states are
at the same level, whereas $f_{0 \ >}$ is raised
by $\frac{1}{2}$ units (see Fig. \ref{gbfig2}).
\item II) Strict validity of the OZI rule.
Flavor mixing according to the OZI-rule corresponds to
$\delta \ = \ 0$ and thus to
\begin{equation}
\label{eq:20}
\begin{array}{l}
k \ = \ - \ \frac{1}{\sqrt{2}}
\hspace*{0.3cm} ; \hspace*{0.3cm}
\Theta \ = \ - \ \arcsin \ \frac{1}{\sqrt{3}}
\ \sim \ - \ 35.3^{\rm o}
\vspace*{0.3cm} \\
\Delta \ m^{\ 2} \ ( \ f_{0 \ >} \ ) \ =
\ 2 \ \Delta \ m^{\ 2} \ ( \ K^*_0 \ )
\hspace*{0.3cm} \rightarrow \hspace*{0.3cm}
m \ ( \ f_{0 \ >} \ ) \ \sim \ 1770 \ \mbox{MeV}.
\end{array}
\end{equation}
In this case
the four degenerate
states $f_{0 \ <} \ , \ a_{0}$ remain at the same level
as $m_{\ (9)}^{\ 2}$,
the $K^*_0 \ , \ \overline{K^*_0}$ states are
higher by one unit, whereas $f_{0 \ >}$ is raised
by two units (see Fig. \ref{gbfig2}).
\end{description}
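Numerically, with the observed masses $m(a_0) \simeq 0.980$ GeV and $m(K^*_0) \simeq 1.429$ GeV, the splitting unit is $c = m^{ 2}(K^*_0) - m^{ 2}(a_0) \simeq 1.08\ \mbox{GeV}^{ 2}$, and the three mass predictions above follow as

```latex
\begin{displaymath}
\begin{array}{lll}
\mbox{Ia)} \ & m^{ 2} ( f_{0 \ >} ) \ \simeq \ 0.96 \ + \ \frac{4}{3} \ ( 1.08 )
 \ \simeq \ 2.40 \ \mbox{GeV}^{ 2}
 & \rightarrow \ m ( f_{0 \ >} ) \ \simeq \ 1550 \ \mbox{MeV} \\
\mbox{Ib)} \ & m^{ 2} ( f_{0 \ >} ) \ \simeq \ 0.96 \ + \ \frac{3}{2} \ ( 1.08 )
 \ \simeq \ 2.58 \ \mbox{GeV}^{ 2}
 & \rightarrow \ m ( f_{0 \ >} ) \ \simeq \ 1600 \ \mbox{MeV} \\
\mbox{II)} \ & m^{ 2} ( f_{0 \ >} ) \ \simeq \ 0.96 \ + \ 2 \ ( 1.08 )
 \ \simeq \ 3.12 \ \mbox{GeV}^{ 2}
 & \rightarrow \ m ( f_{0 \ >} ) \ \simeq \ 1770 \ \mbox{MeV}
\end{array}
\end{displaymath}
```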
\noindent
We conclude that the degeneracy in mass of $f_0 (980)$ and $a_0 (980)$
indeed implies a fully degenerate nonet in the $\chi_{\ 3}$ chiral limit.
It is important to note, however, that
the contributions of order $m_{ s}$ can respect the
$f_0-a_0$ mass degeneracy without necessarily splitting the nonet
according to the OZI rule, i.e. according to flavor, as often assumed.
Furthermore, we point out
that a possible similarity
of singlet-octet mixing for scalars {\em and} pseudoscalars, as outlined in
Ib), is by no means excluded.
Approximate singlet-octet
mixing is known to prevail for the latter -- with
a mixing angle near 19.5$^{\rm o}$ as in (\ref{eq:21}) \cite{Bij}.
Only case I) is compatible with our phenomenological analysis
in the subsequent sections and we assign
\begin{equation}
f_{0 \ >} \ \rightarrow \ f_0(1500).
\label{f0gt}
\end{equation}
The observed mass is slightly lower than the masses theoretically calculated
in the lowest order of the strange quark mass $m_s$.
We emphasize here, that the splitting between
$a_{0}$ and $K^*_0$ is considerable and thus there will be non-negligible
corrections of higher order in $m_{ s}$,
in particular to the $K^*_0, \ \overline{K^*_0}$ square masses.
These corrections can easily account for the violation
of the inequality (\ref{ineqnonet}) and the larger $f_{0 \ >}$ masses
predicted.
The alternative choice II would treat $f_0(980)$ as a purely nonstrange state,
which is not attractive phenomenologically, as will be discussed below.
The pure $s\overline s$ state $f_{0 \ >}$ with mass as in (\ref{eq:20})
could then be associated with the $J=0$ component of $f_J(1710)$
(see Sect. \ref{basic_triplet}), but not much is known about the flavor
properties of this state. In any case, the mass ordering of the three spin
triplet states
\begin{equation}
\label{eq:37}
\begin{array}{l}
f_{\ J \rightarrow 0} \ (1710)
\hspace*{0.3cm} ; \hspace*{0.3cm}
f_{\ 1} \ (1510)
\hspace*{0.3cm} ; \hspace*{0.3cm}
f_{\ 2}^{\ '} \ (1525)
\end{array}
\end{equation}
\noindent
would be upset by $\sim \ 200 \ \mbox{MeV}$ within this scheme.
\section{Spectroscopy of light isoscalar $J^{PC}=0^{++}$ states}
\label{spectr}
Next we turn to the more detailed phenomenological discussion,
first concerning the lowest mass $q\overline q$ nonet
and the lightest glueball.
Much effort has been devoted
to clarify the experimental situation. To this end a variety of
reactions has been studied in considerable detail
\begin{equation}
\begin{array}{l}
\label{reactions}
\begin{array}{rl}
1.& \pi^+\pi^- \to \pi^+\pi^- ,\ \pi^0\pi^0 \\
2.& \pi^+\pi^- \to K^+K^- ,\ K^0 \overline{K^0} \\
3.& \pi^+\pi^- \to \eta \eta,\ \eta \eta' \\
4.& p\overline p \to 3 \pi^0,\ 5 \pi^0 ,\ \pi^0 \pi^0 \eta,\ \eta \eta \pi^0,
\eta \eta'\pi^0 \\
5.& J/\psi \to \phi\pi\pi,\ \phi K \overline K,\ \omega\pi\pi,\ \omega K \overline K \\
6.& J/\psi \to \gamma \pi \pi,\ \gamma K \overline K,\ \gamma \eta \eta,\
\gamma \eta \eta'\\
7.& pp \to pp\ + \ X_{{\rm central}} \\
8.& \psi' \to J/\psi \pi\pi,\ Y' \to Y\pi\pi, \ Y'' \to Y\pi\pi \\
9.& \gamma \gamma \to \pi \pi, \ K \overline K \\
\end{array}
\end{array}
\end{equation}
Our knowledge about the
first three reactions
comes from the analysis of peripheral $\pi N$ collisions
using the one-pion-exchange model; these reactions
represent the oldest source of information on the scalar resonances.
The fourth one, $p\overline p$ annihilation at threshold,
has been studied in recent years with high statistics at the LEAR facility
at CERN and has improved our understanding of the spectroscopy above 1 GeV
in particular; data from
higher primary energies have been obtained at FERMILAB.
The states recoiling against the $\phi$ and the $\omega$ in reaction 5
should have a large
strange or nonstrange $q\overline q$ component, respectively.
Reactions 6, 7 and 8
are expected to provide a gluon rich environment
favorable for glueball production (for a review, see Ref. \cite{Close}),
whereas in the last one (9) the glueball production
is suppressed if the mixing with $q\overline q$ states is small.
In the search for resonances one usually looks first for peaks in the
mass spectrum. If several states are overlapping, or in the presence of
coherent ``background'',
the peak position may be shifted or the resonance may
even appear as a dip in the mass spectrum. The crucial
characteristic of a resonance is therefore the energy
dependence of the
corresponding complex partial wave amplitude which moves along a full
loop inside the ``Argand diagram'': besides the mass peak the phase
variation has to be demonstrated.
Such results are obtained from energy independent
phase shift analyses which try to determine
the individual partial waves for a sequence of energy values. Usually
such analyses are plagued by
ambiguities. To start with, one can obtain a description of the scattering
data in an energy dependent fit from an ansatz
with a superposition of resonances. Such global fits to the mass
spectra of mesonic systems in a broad
range up to about 1700 MeV and including an increasing number of different
reactions in (\ref{reactions})
have been performed by several groups,
starting with the CERN-Munich Collaboration \cite{CERN-Munich1,CERN-Munich2},
then by Au, Morgan and Pennington (AMP) \cite{amp,mp},
Lindenbaum and Longacre (LL) \cite{linden}
and more recently by Bugg, Sarantsev and Zou (BSZ) \cite{bsz} and by Anisovich,
Prokoshkin and Sarantsev (APS)
\cite{aps}.
A survey of results from these representative global fits
is given in Table \ref{tabres}.
All these fits include the narrow $f_0(980)$, probably the only
uncontroversial and well-located $f_0$ state.
Furthermore, they all show
one rather broad state of more than 500 MeV width, called now
$f_0(400-1200)$ by the PDG \cite{PDG}; this state is considered as resonance
$f_0(1000)$ in \cite{mp}, otherwise it is just referred to as ``background''.
In addition, states of higher mass are required by the fits but with masses
which fluctuate from one fit to another.
The PDG in their recent summary table includes the
$f_0(1370)$ and the $f_0(1500)$, which also represent
the states quoted earlier,
the $f_0(1300)$ and $f_0(1590)$.
\begin{table}[t]
\begin{tabular}{cccc}
\hline
authors & broad state & other states &
reactions \\ \hline
CM\ \cite{CERN-Munich1} & \ 1049\ -\ $i$\ 250\ MeV &
$f_0(980),\ f_0(1537) $
& 1a\\
AMP\ \cite{amp} & \ 910\ -\ $i$\ 350 \ MeV &$\ f_0(988),\ [f_0(991)],
\ f_0(1430)$ & 1a, \ 2, \ 5 \\
& & & 7a,b,\ 8,\ 9 \\
LL\ \cite{linden} & \ 1300\ -\ $i$\ 400 \ MeV
&$f_0(980),\ f_0(1400), \ f_0(1720)$ & 1,\ 2,\ 3,\ 6 \\
& & & \\
BSZ\ \cite{bsz} & \ A:~571\ -\ $i$\ 420 \ MeV \ &$f_0(980),\ f_0(1370), \
f_0(1500)$ & 1,\ 2,\ 4\\
& \ B:\ 1270\ -\ $i$\ 530 \ MeV & & \\
APS\ \cite{aps} & \ 1530\ -\ $i$\ 560 \ MeV
& $f_0(980),\ f_0(1370),\ f_0(1500), $& 1,\ 2,\ 3,\ 4\\
& & $f_0(1780)$ & \\ \hline
\end{tabular}
\caption{Isoscalar states included in various global
energy dependent fits to
reactions (\protect\ref{reactions}) with channels a-d.
For the broad state the pole of the scattering
amplitude at $m - i~\Gamma/2$ is given. Only one of the two states near
the $f_0(980)$ found by AMP is kept in \protect\cite{mp}.}
\label{tabres}
\end{table}
In Fig. \ref{gbfig3} we show some recent results on
the mass dependence
of the isoscalar (I=0) S-wave $\pi\pi$ cross section as obtained by
the BNL-E852 \cite{gunter} and GAMS \cite{aps} Collaborations.
This mass spectrum with three peaks (the ``red dragon'')
will be interpreted by us as a very broad state centered around
1 GeV (glueball) which interferes with
the resonances $f_0(980)$ and $f_0(1500)$ whereby the dips
near the respective resonance positions are generated.
In the following we will reexamine the evidence for resonances claimed
in the different mass intervals, especially in the peak regions in Fig.
\ref{gbfig3} by studying the
phase shift analyses in different
processes and in particular the phase variation near the
respective resonance masses.
\subsection{The low energy $\pi\pi$ interaction
($m_{\pi\pi}\lesssim 1000$ MeV) and the
claim for a narrow $\sigma(770)$ resonance}
\label{spectr1}
At low energies only the $\pi\pi$ channel is open.
According to the common view
which emerged in the mid-1970s, the isoscalar S-wave
has negligible coupling to inelastic channels below the $K\overline K$
threshold and the phase shift $\delta_{\ell}^{I}$ with $\ell=0,\ I=0$
moves much more
slowly through the $\rho$ meson region than the P-wave.
This strong $\pi\pi$ interaction is interpreted either as
``background'' or as a very broad state as discussed above. Only
near the $K\overline K$ threshold does the phase vary rapidly,
because of the presence of the
$f_0(980)$ resonance. There is an old claim for the existence of
a narrow resonance $\sigma(770)$ under the $\rho$ meson, which has been put
forward again more recently on the basis of results from a polarized target.
Most results on the $\pi\pi$ S-wave obtained more than 20 years ago have been
derived from the reactions
\begin{equation}
\mbox{(a)} \quad \pi N \to \pi\pi N \qquad
\mbox{(b)} \quad \pi N \to \pi\pi \Delta
\label{pinucleon}
\end{equation}
in the $\pi^+\pi^-$ charge mode
with an unpolarized target, applying variants of the
one-pion-exchange (OPE) model (for a general review see \cite{mms}; for the
low-energy $\pi\pi$ interactions see, for example, \cite{ochs}).
In this charge mode there is a twofold ambiguity
(``up-down'') for each energy interval
which corresponds to either a narrow or a very broad resonance
under the $\rho$ meson.
From the study of the $\pi^+\pi^-$ \cite{flatte,CERN-Munich1,CERN-Munich2}
and $\pi^0\pi^0$ data \cite{apel} the narrow resonance solution has
finally been excluded \cite{flatte,em73,CERN-Munich1,CERN-Munich2,mp}.
The measurement of reaction (\ref{pinucleon}a) with polarized target
by the CERN-Cracow-Munich Collaboration \cite{CKM1,CKM2} has made possible a
more detailed investigation of the production mechanisms but
the analysis also leads to
a new class of ambiguities in phase shift analysis.
In a recent reanalysis of these data Svec \cite{svec1} finds
in the modulus of one of the transversity amplitudes a narrow peak
near 750 MeV, whereas in case of the other one a broad mass spectrum
is observed. In Breit-Wigner fits
to these mass spectra an extra state $\sigma(750)$ of width $\Gamma=150$
MeV -- or in the preferred fits even two $\sigma$ states -- are included
besides the $f_0(980)$ resonance.
In these considerations no attempt has been made to
fit the amplitude phases or to respect partial wave unitarity, which is
particularly important near the inelastic threshold.
These constraints are taken into account in the subsequent
analysis of the polarized target data
by Kami$\acute{\rm n}$ski, Le$\acute{\rm s}$niak and Rybicki
(KLR) \cite{klr}. In the
region below 1000 MeV they found four different solutions
duplicating the old up-down ambiguity:
\begin{equation}
\mbox{(a) up-steep \qquad (b) up-flat \qquad
(c) down-steep \qquad (d) down-flat.}
\label{solution}
\end{equation}
Furthermore, a separation into pseudoscalar ($\pi$) and pseudovector ($a_1$)
exchange amplitudes has been carried out.
The solution (\ref{solution}c) is excluded immediately as it leads to a strong
violation of partial wave unitarity.
The solution (\ref{solution}d) is essentially consistent
with the previous result from the unpolarized target
data \cite{CERN-Munich1} up to $m_{\pi\pi} \sim 1400$ MeV
and the phase shift deviates by not more than $30^{\rm o}$ in the
region above this mass.
The solution (\ref{solution}a) which is consistent with a narrow
$\sigma(750)$ as in \cite{svec1} also shows a systematic violation of unitarity
and is therefore
considered as ``somewhat queer'' by KLR but not excluded. However, both up
solutions suffer from similar problems already
discussed in connection with the old analyses:
\noindent
{\it (a) Comparison with the $\pi^0\pi^0$ final state}\\
In this case the P-wave is forbidden and therefore the up-down
ambiguity does not show up.
The recent, very precise
data on reactions 4
in (\ref{reactions}) by the Crystal Barrel Collaboration
\cite{Ams,abele} can be interpreted in terms of $\pi\pi$ amplitudes using an
isobar model for the annihilation process. The striking effects from
the $f_0(980)$ state are clearly
visible but the mass spectra around 750 MeV are
rather structureless and there is no sign of a narrow resonance.
In particular, the existence of a resonance with width around 250 MeV
has been excluded in \cite{abele}.
\noindent
{\it (b) Rapid variation of phase near $K\overline K$ threshold}\\
The GAMS-Collaboration \cite{GAMS}
has presented results on the S-wave magnitude of
reaction 1b in (\ref{reactions}) obtained from process (\ref{pinucleon}a).
Their results show a sudden decrease
of the S-wave magnitude above a mass of 850
MeV with a narrow dip at 970 MeV. A dip of comparable type is also obtained
for the KLR down-solution (Fig. 2a of \cite{klr}); the position of the dip
is slightly moved upwards, presumably because of different isospin
I=2 contributions.
On the other hand, the up-solution reaches the minimum cross section already
at the lower mass around 900 MeV, in qualitative disagreement with the GAMS data.
The GAMS Collaboration has not yet published the original
experimental results in terms of
spherical harmonic moments. Once available, the four different solutions from the
polarized target experiment could be compared directly to the moments from
the $\pi^0\pi^0$ final state which should determine the unique
solution. The GAMS data are consistent again with fits
which properly take into account
unitarity at the threshold such as in Refs. \cite{bsz,aps}.
A similar behaviour in the mass region below $\sim 1000$ MeV is shown by the
BNL-E852 data \cite{gunter} (see also Fig.~\ref{gbfig3}).
These arguments favor the down-flat-solution which agrees with
the results obtained previously. All other choices would lead to serious
inconsistencies with general principles or with other experimental results.
Nevertheless, it would be desirable to obtain the complete
results and a common description
of the reactions (\ref{pinucleon}) in the $\pi^+\pi^-$ and
$\pi^0\pi^0$ charge modes.
\subsection{How reliable are $\pi\pi$ scattering results from unpolarized
target experiments?}
\label{OPE}
It may be surprising at first sight that the results from
polarized and unpolarized target are so similar, as found by KLR \cite{klr}.
In fact, it is occasionally claimed (especially in \cite{svec1})
that the analyses from unpolarized target experiments are obsolete
because of the importance of $a_1$-exchange besides $\pi$-exchange.
Most results obtained so far on elastic and inelastic $\pi\pi $ interactions
1-3 in (\ref{reactions}), which are important in the subsequent discussion,
have been obtained from unpolarized target experiments.
Therefore, it is appropriate
at this point to contemplate the consequences to be drawn from the
experiments with polarized target.
Motivated by
the OPE model with absorptive corrections,
the commonly applied procedure to extract the production amplitudes
from the unpolarized target experiment has been based on the following two
assumptions \cite{ochs2,fm,em0} concerning
the nucleon helicity flip and non-flip
amplitudes
$ f^{\pm}_{\ell,\mu}$ and $ n^{\pm}_{\ell,\mu}$ with natural ($+$) and
unnatural ($-$) parity exchange for production of a mesonic system with
spin $\ell$ and helicity $\mu$:
\begin{description}
\item (i) Spin-flip dominance:
the non-flip amplitudes $n^{\pm}_{\ell,\mu}$ vanish, at least
the $(-)$ amplitudes, which are {\em not} generated by absorbed OPE at high
energies (this allows for $a_2(2^{++})$-exchange but not for
$a_1(1^{++})$-exchange).
\item (ii) Phase coherence: The phases of the production amplitudes at fixed
mass $m_{\pi\pi}$ and momentum transfer $t$ between the
incoming and outgoing nucleons
depend only on $\pi\pi$
spin $\ell$ and not on the helicities.
\end{description}
A further simplification can be obtained if $t$-integrated moments are used
in the ``$t$-channel frame'' \cite{owag}.
These assumptions yield an overdetermined system of equations. It can be
solved for the amplitudes up to some discrete ambiguities
whereby the constraints are found to be well satisfied \cite{CERN-Munich1};
results from a ``Chew-Low extrapolation'' in $t$ and from
$t$-averaged moments are comparable \cite{em1}.
The polarized target experiment has clearly demonstrated the existence of
the $a_1$ exchange process \cite{CKM1,CKM2} which contributes to the
non-flip amplitudes, invalidating assumption (i).
However, for the amplitude analyses carried out in an unpolarized target
experiment -- such as in \cite{CERN-Munich1,em1} --
weaker assumptions than the ones above
are sufficient to obtain the same results
\cite{ochs2}, namely
\begin{description}
\item (i$'$)
nucleon helicity flip and non-flip amplitudes $f$ and $n$ are proportional
\begin{equation}
n^{+}_{\ell,\mu}=\alpha^{(+)} f^{+}_{\ell,\mu}, \qquad
n^{-}_{\ell,\mu}=\alpha^{(-)} f^{-}_{\ell,\mu}
\label{flip}
\end{equation}
for natural ($+$) and unnatural ($-$)
exchange separately for any dipion spin
and helicity $\ell,\mu$.
\item (ii$'$) as in (ii) but there may be an overall phase difference between
$(+)$ and $(-)$ amplitudes.
\end{description}
Then also the transversity up and down amplitudes $g$ and $h$
which are determined in the polarized target experiment
\begin{equation}
g^{\pm}_{\ell,\mu}\ =\
\frac{1}{\sqrt{2}} (n^{\pm}_{\ell,\mu} \mp
f^{\pm}_{\ell,\mu}), \qquad
h^{\pm}_{\ell,\mu}\ =\
\frac{1}{\sqrt{2}} (n^{\pm}_{\ell,\mu} \pm
f^{\pm}_{\ell,\mu}) \label{transvers}
\end{equation}
should be proportional
\begin{equation}
\frac{|g^{\pm}_{\ell,\mu}|}{|h^{\pm}_{\ell,\mu}|} =
\frac{|\alpha+i|}{|\alpha-i|}. \label{gdh}
\end{equation}
This relation is approximately fulfilled, and the ratio
is found to be $\frac{|g|}{|h|} \approx 0.6$ for
S,P,D and F waves over the full mass range explored in the small $|t|<0.15$
GeV region (Fig. 6 in \cite{CKM2});
however, the fluctuation of the data is quite large and local
deviations cannot be excluded.
In a restricted analysis using only S and P
waves below 900 MeV some trend of this amplitude ratio with mass was found
\cite{CKM1} but the D-waves can certainly not be neglected here.
It is pointed out by KLR \cite{klr} that in the narrow
regions where the S-wave
magnitude is small, i.e. around 1000 and 1500 MeV, the $a_1$
contribution may become as large as the $\pi$ exchange contribution, whereas
otherwise it amounts to only about 20\% of it.
With one pion exchange only (modified by absorption)
the ratio (\ref{gdh})
would have to be unity, so the modification (\ref{flip})
amounts only to a change of the overall adjustment of
normalization in the energy dependent fits.
It is very satisfactory that the down-solution for the S-wave
which we preferred above is also consistent
with the energy independence of the amplitude ratio $\frac{|g|}{|h|} \approx
0.6$ (Fig. 2b in \cite{klr}) whereas the disfavored up solution
with the narrow $\sigma$ would lead to an increase of this ratio by
up to a factor
of 2 just in the mass interval of ambiguity 800-1000 MeV. Such exceptional
behaviour of amplitudes is not plausible.
As to the simplifying assumption (ii) on the phase coherence of amplitudes,
the data from the polarized target confirm it in their general trend,
but there are overall shifts of amplitude phases
of up to about
20$^{\rm o}$; only some relative phases involving the D-wave amplitudes
indicate larger differences
(Figs. 8, 9, 10 in \cite{CKM2}).
In summary, the original assumption (i) has been demonstrated by the
polarized target experiment to be clearly violated; the modified assumption
(i$'$) is still approximately correct within the given accuracy, whereas
some moderate violations of phase coherence (ii$'$) have been seen.
This explains why the phase shift results from the polarized target
experiment -- looking at the preferred solution -- are not very different
from the previous findings, in particular, there is no evidence for entirely
new states, such as a $\sigma(750)$.
The proportionality (\ref{flip}) is expected, in particular, if the amplitudes
$a_1\pi \to \pi\pi$ and $\pi\pi \to \pi\pi$ are proportional and
appear as factors in the
production amplitudes. In general, such a relation may be violated, as
different resonant states could have different couplings to the $\pi\pi$ and
$a_1\pi$ channels, and there could also be different signs. However, as long as
the $a_1$ exchange is small, as in the small-$t$ region,
a violation of this assumption can play a role only at this reduced level.
On the other hand, one has to be careful in applications of the above
assumptions in kinematic regions where OPE is not dominant
(for example, large $t$).
\subsection{Interference of the $f_0(980)$ with
the ``background'' and the molecule hypothesis}
\label{spectr2}
In elastic $\pi\pi$ scattering the narrow $f_0(980)$
interferes with the large ``background'', now also called $f_0(400-1200)$,
and appears as a dip in the S-wave
cross section \cite{flatte,CERN-Munich1}.
There are other processes where, on the contrary, the $f_0(980)$
appears as a peak. This phenomenon has been observed first in
pion pair production with large momentum transfer $|t|\gtrsim 0.3$ GeV
\cite{binnie} and more recently by GAMS \cite{GAMS}. Fits to the peak yield
values for the total width of $\Gamma = 48\pm 10$ MeV.
Direct and clear
evidence for the phase variation according to
a Breit-Wigner resonance can be inferred from the
interference pattern of the rapidly varying resonance amplitude with the
tail of the $f_2(1270)$ in reaction (3a) at large $t$ as measured by the GAMS
Collaboration \cite{GAMS}.
The interference of this narrow resonance with the background
varies from one reaction to the other. In this way one can see that this
``background'' has its own identity.
The reactions in (\ref{reactions}) with a $\pi\pi$ system in the final
state
can be classified roughly into three groups according to the
different appearance of the $f_0(980)$ in the mass spectrum:
\begin{description}
\item (a) dip in reaction 1, indication
of dip in reaction 4a \cite{Ams,Armstrong1};
\item (b) peak in reaction 1 in large-$t$ production, and in
5a \cite{Gidal,falvard,Lockman}, 5c \cite{Augustin5pi}, 9a
\cite{Cball,Markii,JADE};
\item (c) an interference of the $f_0(980)$ Breit-Wigner amplitude with a
background amplitude of positive real part is suggested
in 4b \cite{Amseta} and in a similar way in 7 \cite{gamspp,akesson}.
\end{description}
The different interference patterns are naturally attributed
to the different couplings of the $f_0(980)$ and
of the ``background'' to the initial
channel.
The dip is observed in the elastic $\pi\pi$ channel. In this case the
background amplitude is near the unitarity limit and the additional
resonance has to interfere destructively. The reaction 4a shows a small dip
around 950 MeV followed by a peak near 1000 MeV and fits into group (a) or
(b). All other processes are inelastic.
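The dip mechanism can be illustrated schematically within elastic unitarity (a simplified sketch, assuming a purely elastic S-wave): the amplitude may be written as
\[
T \;=\; \sin\delta\; e^{\,i\delta}, \qquad
\delta \;=\; \delta_{\rm bg} + \delta_{\rm res}\,;
\]
if the background is near the unitarity limit, $\delta_{\rm bg} \approx 90^{\rm o}$, the total phase passes through $180^{\rm o}$ as the resonance phase $\delta_{\rm res}$ rises through $90^{\rm o}$, and the intensity $|T|^2 = \sin^2\delta$ vanishes near the resonance position -- the dip.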
In particular, the transmutation of a
dip into a peak in $\pi N\to \pi\pi N$ with increasing momentum transfer
can be explained by the assumption of an increasing importance of
$a_1$ exchange
over $\pi$ exchange with
\begin{equation}
|A(\pi a_1 \to f_0(980) \to \pi\pi)|\quad \gg \quad
|A(\pi a_1 \to f_0(400-1200) \to \pi\pi)|.
\label{a1exchange}
\end{equation}
In this case the peak occurs either because the background interferes
constructively or because it becomes too small.
There is some
support for this interpretation from the KLR results \cite{klr}
discussed above on the
polarized target data at small $|t|<0.2$ GeV.
For the favored
``down-flat'' S-wave solution the modulus of the $a_1$-exchange amplitude
shows a peak (significance about 2$\sigma$) just in the
mass interval 980-1000 MeV whereas the pion exchange amplitude shows a dip
in the region 980-1060 MeV (see Fig. 7a in \cite{klr}).
Similar conclusions concerning the different exchange mechanisms have been drawn in
the recent paper by Achasov and Shestakov \cite{achsh} where detailed fits
including $a_1$-exchange are presented.
A remarkable similarity is seen in the interference pattern of the two
reactions in group (c) where a small peak near 950 MeV is followed by a
large drop near 1000 MeV.
In reaction 4b the initial $p\overline p$ state must be in an $\eta,\eta'$ type
state, so the $\pi\pi$ state couples to two isoscalars;
similarly in reaction 7, if the initial state is formed by two
isoscalar pomerons. This is in marked contrast to the pattern seen in
$p\overline p \to 3\pi^0$ where four isovectors couple together. This shows that
the $f_0(980)$ and the ``background'' must have different flavor
composition although they have the same quantum numbers.
Finally, we comment on the hypothesis \cite{molecule} that the
$f_0(980)$ could correspond to a $K\overline K$ molecule (or another 4q
system), which is adopted in
various contemporary classification schemes (see Sect. 2).
S-matrix parametrizations have been used to argue both ways, against \cite{mp}
or in favor \cite{locher} of such a hypothesis.
If the $a_0(980)$ and $f_0(980)$ are such bound states, one
has to worry how the successful quark model
spectroscopy avoids being
overwhelmed by a large variety of additional hadronic
bound states.
On the phenomenological side,
if the $f_0(980)$ is a loosely bound system, then
in a violent collision with large
momentum transfer one would expect an increased probability for a break-up.
The GAMS data, however, demonstrate the opposite, the persistence of
$f_0(980)$ with respect to the background. Furthermore, a recent
investigation by the OPAL Collaboration \cite{OPALf0} has shown the production
properties of the $f_0(980)$ to be very similar to those of the $q\overline q$
states $f_2(1270)$ and $\phi(1020)$ nearby in mass
in a variety of measurements.
Therefore, we do not feel motivated to give up the $f_0(980)$
as a genuine $q\overline q$ state, but we suggest
a flavor composition different from the one of the ``background''.
\subsection{The mass region between 1000 and 1600 MeV}
\label{spectr3}
This includes the mass range from the $f_0(980)$ up to the $f_0(1500)$.
Near both resonance positions there are
dips in the elastic $\pi\pi$ S-wave cross section (see Fig. \ref{gbfig3}).
For this region the PDG lists -- besides the $f_0(400-1200)$ --
the $f_0(1370)$ state; there may actually be two states, one
seen as a large effect in the elastic $\pi\pi$ scattering,
the other one being strongly inelastic.
We will reconsider now the evidence from phase shift analyses for
states in this mass interval.
\subsubsection{Elastic and charge exchange $\pi\pi$ interaction}
Phase shift analyses have been performed using the CM data from unpolarized
target \cite{CERN-Munich1,CERN-Munich3,em1} and by the Omega spectrometer
group \cite{omega}. One finds here a number of ambiguous solutions which are
discussed in terms of Barrelet zeros \cite{barrelet}. Namely, for a finite
number of partial waves the amplitude can be written as a polynomial in
$z=\cos \vartheta$. Then the measurement of the
cross section differential in the scattering
angle $\vartheta$ determines the
real parts and the moduli of the imaginary parts of
the amplitude zeros $z_i$. The different solutions can then be classified
according to the signs of $\im z_i$.
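Schematically, for $n$ relevant zeros the amplitude and the measured angular distribution take the form
\[
A(z) \;\propto\; \prod_{i=1}^{n} (z - z_i), \qquad
\frac{d\sigma}{d\Omega} \;\propto\; |A(z)|^2 \;\propto\; \prod_{i=1}^{n} |z - z_i|^2\,;
\]
for real $z$ each factor $|z-z_i|^2$ is invariant under $z_i \to z_i^*$, so only ${\rm Re}\, z_i$ and $|\im\, z_i|$ are determined and up to $2^n$ ambiguous solutions remain.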
In \cite{CERN-Munich3} four solutions for elasticities $\eta_\ell^I$ and
phase shifts $\delta_\ell^I$
are presented distinguishing the signs
of $\im\ z_i$ at 1500 MeV for $i=1,2,3$
as ($---$), ($-+-$), ($+--$) and ($++-$)
and assuming a sign change of $\im z_1$ at 1100 MeV. They
correspond to the solutions A, C, $\overline {\rm B}$ and $\overline {\rm D}$ in
\cite{em1}. Yet more solutions are given in \cite{em1,omega} corresponding
to different branches near 1100, 1500 and above 1800 MeV. The comparison
with $\pi^0\pi^0$ data \cite{shimada,omega} left the solutions C and D
as unfavored.
Solution A is also consistent with the energy dependent result of CM
\cite{CERN-Munich1} up to 1500 MeV where $\delta_0^0\approx 156^{\rm o}$, whereas
solution $\overline {\rm B}$ reaches $\delta_0^0\approx 165^{\rm o}$.
Some descendants $\alpha,\beta$ and $\beta'$ are obtained
from solutions A and B if constraints from dispersion relations are taken
into account \cite{martinpenn}.
The data from polarized target \cite{CKM2,klr} essentially lead to a unique
solution in this mass range as the imaginary parts of zeros came out
rather small
\begin{eqnarray}
\im\ z_1 \ \sim \ 0 \quad & \textrm{for}&
\quad m_{\pi\pi}\ >\ 1100\ \textrm{MeV} \nonumber\\
\im\ z_2\ \sim \ 0 \quad & \textrm{for}&
\quad m_{\pi\pi}\ > \ 1400\ \textrm{MeV}
\label{zeros}
\end{eqnarray}
so that the various solutions are not significantly different any more.
The results for the phase shifts $\delta_0^0$ in \cite{klr} are again
similar to solution A in \cite{em1} or to the
energy dependent phase shift solutions in CM
\cite{CERN-Munich1} up to $m_{\pi\pi}\sim 1400$ MeV; some additional
variation is indicated above this energy in both $\delta_0^0$
and $\eta_0^0$.
Furthermore, we note two aspects of the polarized target results:
\begin{description}
\item (a) The S-wave is near the unitarity limit in
$1150\lesssim m_{\pi\pi} \lesssim 1450$ MeV and drops to
zero at $m_{\pi\pi} \sim 1500$ MeV (Fig. 2 in \cite{CKM2}).
\item (b) The phase difference of S and D wave amplitudes changes sign
in both $g$ and $h$ transversity amplitudes with
\begin{eqnarray}
\varphi_S-\varphi_D\ >\ 0 \quad &\textrm{for}&\quad
m_{\pi\pi}\ <\ 1250\ \textrm{MeV},
\nonumber\\
\varphi_S-\varphi_D\ <\ 0 \quad &\textrm{for}&\quad
m_{\pi\pi}\ >\ 1350\ \textrm{MeV}.
\label{phchange}
\end{eqnarray}
\end{description}
The phase differences (Fig. 9,10 in \cite{CKM2})
are best met by the previous solution $\beta'$, the result
(b) excludes solution B. The drop of intensity (a) is not so well reproduced
by the previous phase shift analyses of unpolarized target experiments.
Next we turn to the charge exchange reaction $\pi^+\pi^-\to\pi^0\pi^0$
which should help to select the unique solution for the S-wave.
There are three relevant experiments: (a) IHEP-NICE \cite{apel2},
(b) GAMS \cite{GAMS} and (c) BNL -- E852 (preliminary results
\cite{gunter}).
In (a) the amplitude $S_0$ is obtained after subtraction of the
I=2 contribution. One finds two different solutions for $S_0$,
one inside the unitarity circle with $0.5\lesssim\eta_0^0\lesssim1$,
and another one with larger modulus which
exceeds by far the unitarity limit. The physical solution has
\begin{equation}
\varphi_S-\varphi_D\ <\ 0 \ \quad \textrm{for} \
\quad m_{\pi\pi}\ >\ 1100\ \textrm{MeV}.
\label{phchange1}
\end{equation}
The two solutions branch around $m_{\pi\pi}=1100$ MeV where
$\cos(\varphi_S-\varphi_D)\approx 1$.
The sign change is similar to the result (\ref{phchange}) above in the
charged $\pi\pi$ mode.
The first results from (c) at small momentum transfer $t$
with higher statistics give a similar picture: The
amplitude with the smaller modulus is the physical solution. The modulus
shows again a sharp drop at 1000 and 1500 MeV.
There is a
strong inelasticity immediately above 1000 MeV and a small phase shift
rise by about $30^{\rm o}$ if $m_{\pi\pi}$ increases from 1000 to 1300 MeV, in
qualitative agreement with the CM result \cite{CERN-Munich1}.
In the GAMS experiment (b) with high statistics
one finds again the two solutions with large and
small amplitude modulus peaking at $m_{\pi\pi}\sim 1200$ MeV and the
dips at 1000 and
1500 MeV. There is no attempt, however, to
determine the phase shifts and to consider the role of unitarity. In fact, the
solution with large modulus is declared to be the physical one.
The phase difference
$\varphi_S-\varphi_D$ stays positive in the full mass range:
it is falling for $m_{\pi\pi}<1100$ MeV as in (\ref{phchange}) but it is
rising again in the
range from 1100 to 1400 MeV -- contrary to all solutions A-D in \cite{em1} and
the one in \cite{CKM2}. Such behaviour
would imply that the S-wave phase $\varphi_S$ rises more
rapidly than the resonant D-wave phase $\varphi_D$, which is highly
implausible. Possibly
a transition from one
solution to the other one around $m_{\pi\pi}\sim 1200$ MeV -- consistent with
(\ref{phchange}) --
is allowed within the errors which would resolve this
conflict.\footnote{Taking the
data on $|S|^2$, $|D|^2$ and $|S||D|\cos (\varphi_S-\varphi_D)$
presented in \cite{anis} at face value
we obtain unphysical values $\cos(\varphi_S-\varphi_D)>1$ at
$m_{\pi\pi}\sim 1200$ MeV. }
One should also consider the possibility of small phase shift differences
due to $a_1$ exchange as suggested by the polarized target experiment
and discussed in Sect. \ref{OPE}.
Such systematic effects could be studied further from the angular
distribution moments which have not been presented so far.
In summary, the various phase shift
analyses of $\pi\pi$ scattering suggest that the
isoscalar S-wave under the $f_2(1270)$ moves slowly,
with an inelasticity between 0.5 and 1. This does not
correspond to a resonance of usual width.
We can estimate the width $\Gamma$
of a hypothetical resonance
near $m_{\pi\pi}\sim 1300$ MeV from the energy slope of the phase shifts
$\delta_0^0$. Using the data from CM \cite{CERN-Munich1}
or KLR \cite{klr} we find
\begin{equation}
\frac{d\delta_0^0}{dm_{\pi\pi}}=\frac{2}{\Gamma}\frac{1+\eta^0_0}{2 \eta^0_0}
\approx 3.7\ \textrm{GeV}^{-1} \label{gamma}
\end{equation}
in the mass range $1100\lesssim m_{\pi\pi}\lesssim 1400$ MeV
which yields a lower limit on the width
\begin{equation}
\Gamma[f_0(1300)]\ > \ 540\ \textrm{MeV},
\label{gam1300}
\end{equation}
where the inequality corresponds to $\eta^0_0<1$.
This estimate shows that the slow movement of the phase
does not allow the interpretation in terms of a usual narrow resonance.
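The arithmetic behind the lower limit (\ref{gam1300}) can be checked with a minimal Python sketch; it simply inverts (\ref{gamma}) for $\Gamma$, using only the slope value quoted in the text (the function name is ours).

```python
# Lower limit on the width of a hypothetical f_0(1300) from the slope of
# the isoscalar S-wave phase shift, d(delta)/dm = (2/Gamma)(1+eta)/(2 eta).
# Only the value 3.7 GeV^-1 quoted in the text is used as input.

def width_from_slope(slope_per_gev, eta=1.0):
    """Return Gamma in MeV for a given phase slope (GeV^-1) and elasticity eta."""
    gamma_gev = (2.0 / slope_per_gev) * (1.0 + eta) / (2.0 * eta)
    return 1000.0 * gamma_gev

# eta = 1 gives the smallest width, hence the lower limit Gamma > 540 MeV;
# any inelasticity eta < 1 only enlarges the estimate.
print(width_from_slope(3.7, eta=1.0))   # ~540 MeV
print(width_from_slope(3.7, eta=0.5))
```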
On the other hand, there is evidence for a rapid drop in the S-wave
intensity near 1500 MeV indicating a Breit-Wigner type narrow resonance
around this mass.
An analysis which would treat both the
elastic and charge exchange $\pi\pi$ scattering data
together in this mass range has not yet been carried out but
is highly desirable.
\subsubsection{The reaction $\pi\pi\to K\overline K$}
It is argued by BSZ \cite{bsz} that the influence of the additional
scalar state $f_0(1370)$ is marginal for the $\pi^+\pi^-$ (CM)
data whereas it becomes
essential in the $K\overline K$ final states.
The study of this final state is more difficult than that of $\pi\pi$,
as resonances with both G parities can occur in a production experiment where,
besides one pion exchange, exchanges of $G=+1$ particles may also
contribute.
Phase shift analyses of the $K\overline K$ system have been carried out
(a) at Argonne \cite{argonne}, (b) by the Omega spectrometer experiment
at CERN
\cite{costa} and (c) at BNL \cite{bnl}; (d) the moduli of the partial wave
amplitudes in the $K^+K^-$ final state have been obtained also from
a CERN-Krakow-Munich experiment with polarized target \cite{CKM3}.
In the Argonne experiment (a) a comprehensive analysis of three reactions
\begin{equation}
\pi N\to K \overline K N'
\end{equation}
in different charge states has allowed the separation
of partial waves in both isospin states I=0 and I=1. In the mass region below
1500 MeV S, P and D waves have been included and 8 different
partial wave solutions have been obtained
initially. Requirements of charge symmetry,
reasonable t-dependence for the pion exchange reactions and approximate
behaviour
of the P waves as tails of the $\rho$ and $\omega$ resonances have finally
selected a unique solution (called Ib). The absolute phase of the amplitude is
determined relative to the Breit-Wigner phase of the D wave resonances
$f_2$(1270) and $a_2$(1320). It is remarkable that the P waves
of the preferred solution are compatible both in
magnitude and phase with what is expected for the tails of the vector mesons.
The preferred solution shows two features
\begin{description}
\item (i)
The modulus of the $S_0$ amplitude has a
narrow peak near threshold -- possibly caused by the $f_0(980)$ -- and shows
an enhancement near 1300 MeV before
it drops to a small value near $m_{K\overline K}\sim 1600$ MeV (see Fig.
\ref{gbfig3}b).
\item (ii) The phase stays nearly constant up to
$m_{K\overline K}\sim 1300$ MeV,
thereafter it advances by $\Delta \delta_0^0\sim 100^{\rm o}$
when approaching $m_{K\overline
K}\sim 1500$ MeV and then drops.
\end{description}
This phase variation is similar to the one in elastic
$\pi\pi$ scattering ($\Delta \delta_0^0\sim 100^{\rm o}$ in KLR \cite{klr} and
$\Delta \delta_0^0\sim 70^{\rm o}$ in CM \cite{CERN-Munich1}) and
led to the conclusion \cite{argonne} that the
phase variation in $K\overline K\to K\overline K$ is small and that
the phase variation in $\pi\pi\ \to \ K\overline K$
is related to a resonance $\epsilon(1425)$ which couples
mainly to $\pi\pi$.
The other two experiments (b) and (c) extend their analysis towards higher
masses which leads to more ambiguities. In the mass region considered here
below 1600 MeV there is close similarity with the Argonne results in the
two features (i) and (ii) above: in (b) one of the two ambiguous solutions
agrees whereas the second corresponds to a narrow resonance
in the S-wave with $m_{\pi\pi}=1270$ MeV
and $\Gamma=120$ MeV; in (c) the favored solution agrees except for
an additional phase variation below 1200
MeV by 30$^{\rm o}$. In addition the
continuation to higher masses with a suggested resonance at 1770 MeV is
presented.
In the region below 1200 MeV ($\cos(\phi_S-\phi_D)\approx -1$ at 1200
MeV) another
solution should be possible with $\phi_S-\phi_D \ \to \
\pi-(\phi_S-\phi_D)$; this choice would yield a slightly decreasing phase
$\phi_S$ similar to the one in (a); the D-wave phase $\phi_D$
presented in (c) in
this region is decreasing which would contradict the expected threshold
behaviour.
In (a)
the phase near threshold has been fixed by
comparison with the P-wave taken as tail of the $\rho$ meson.
All these experiments are consistent with a solution with slow phase
variation around 1300 MeV.
Only experiment (d) shows a different result which resembles the
alternative solution in (b)
without peak in the S-wave near 1300 MeV emphasized in (i)
and has been rejected in \cite{argonne} and also in
\cite{bnl}, where no ambiguity exists in contrast to
the $K^+K^-$ channel. Neither a discussion of ambiguities in terms of
Barrelet zeros nor results
of a phase shift analysis have been presented. For the moment we assume that
the analysis in (d) did not find all ambiguous solutions and
so does not invalidate the
preferred solution quoted above.
We summarize the results from experiments (a)-(c) on the parameters of the
resonance $\epsilon(1420)$
as obtained from the fits to the phases in Table \ref{KKbar}.
\begin{table}
\begin{tabular}{lcccc}
\hline
\rule[0ex]{0ex}{3.5ex}
group & mass range & mass & width & $\Gamma_{K\overline K}/\Gamma_{\pi\pi}$\\
\rule[-2ex]{0ex}{3.5ex}
& of fit [MeV] & [MeV] & [MeV]& \\
\hline
\rule[-2ex]{0ex}{5.5ex}
Argonne \cite{argonne} & 1000 - 1500 & 1425 $\pm$ 15 &
160 $\pm$ 30 & 0.1 - 0.2\\
\rule[-2ex]{0ex}{5.5ex}
OMEGA \cite{costa} & $<$1550 & $\sim$ 1400 & $\sim$ 170 & \\
\rule[-2ex]{0ex}{5.5ex}
BNL \cite{bnl} & 1000 - 2000 & 1463 $\pm$ 9 & $118^{+138}_{-16}$
& \\ \hline
\rule[-2ex]{0ex}{5.5ex}
PDG \cite{PDG}: $f_0(1500)$ & & 1500 $\pm$ 10 & 112 $\pm$ 10
& 0.19 $\pm$ 0.07 \\
\hline
\end{tabular}
\caption{Different determinations of resonance parameters in the
$K\overline K$ final state near 1500 MeV.}
\label{KKbar}
\end{table}
The comparison of the $\epsilon$ resonance parameters with the
PDG numbers in the last
line suggests the identification
\begin{equation}
\epsilon(1420) \to f_0(1500) \label{epsf0}
\end{equation}
because of comparable width and small $K\overline K$
coupling
although there is a small shift in mass; the latter is smallest in an energy
dependent fit over the full mass range.
For further
illustration we show in Fig. \ref{gbfig4}a the S-wave in an Argand diagram
which we obtained as a smooth interpolation of the results
by Etkin et al. \cite{bnl}. After an initial decrease
of the amplitude from its maximum near threshold
a resonance circle develops in the region
1200-1600 MeV with small phase velocity at the edges and largest velocity
in the interval 1400-1500 MeV suggesting a resonance pole with negative
residue. The small dip at 1200 MeV appears because the resonant amplitude
first moves in the negative direction of $\re{S_0}$ (to the ``left'')
on top of the background, which moves slowly to the ``right''.
This interference phenomenon yields the peak near 1300 MeV in
Fig. \ref{gbfig3}b, but
there is no evidence for an extra loop corresponding to
the additional state $f_0(1370)$.
The Argand diagram presented by Cohen et al. \cite{argonne} for masses below
1600 MeV shows a qualitatively similar behaviour.
\subsubsection{The reactions $\pi\pi\to \eta\eta,\ \eta\eta'$}
These reactions have been studied by the IHEP-IISN-LAPP Collaboration
\cite{IIL1,IIL2} again
in $\pi p$ collisions
applying the OPE model with absorptive corrections.
The partial wave decomposition of the $\eta\eta$ channel
yields an S wave with an enhancement which
peaks near 1200 MeV and a second peak with parameters
\begin{equation}
m_G=(1592\pm 25)\ \textrm{MeV},\qquad \Gamma_G=(210\pm 40)\ \textrm{MeV}.\
\label{G0mass}
\end{equation}
This state G(1590) has been considered as a glueball
candidate by the authors
as in this mass range there are resonance signals neither from $K\overline K$
nor from $\pi^0\pi^0$. The phase difference
$\varphi_D-\varphi_S$ varies with mass in a similar way as the one in the
$K\overline K$ final state: it rises from 210$^{\rm o}$ at 1200 MeV up to the maximum
of 300$^{\rm o}$ at around 1500 MeV and then drops
(see, for example \cite{bnl}). Therefore a similar
interpretation is suggested. The $f_2(1270)$ interferes with the S-wave
which is composed of one Breit-Wigner resonance around 1500 - 1600 MeV
above a background with slowly varying phase.
There is nevertheless a major difference between both channels
if the S-wave magnitude is considered. At its peak
value in $\eta\eta$ near 1600 MeV there is a minimum
in $K\overline K$ and the opposite behaviour around 1300 - 1400 MeV, namely a
peak in $K\overline K$ and a dip in $\eta\eta$
(see Fig. \ref{gbfig3}b,c). Both phenomena can be explained
by a change in the relative sign between
the background and the
resonance amplitude.
In Fig. \ref{gbfig4}b we show the behaviour of the S-wave amplitude
which we obtained using the
data by Binon et al. \cite{IIL1}.
Assuming a
Breit-Wigner phase for the $f_2(1270)$ one finds that the S-wave phase
around 1300 MeV is slowly varying again.
At higher energies a contribution of about 20\% from $f_2(1520)$
is expected in the
D-wave as in the $\pi\pi\ \to \ K\overline K$ channel.
Now the resonance circle in Fig. \ref{gbfig4}b
corresponds to a pole in the amplitude with positive residue and this
explains the rather different mass spectra
emphasized above in the $K\overline K$ and
$\eta\eta$ channels.
The situation may be illustrated by the following simple model
for the $S$ and $D$-wave amplitudes
\begin{equation}
\label{Sf0f2}
\begin{array}{lll}
\pi\pi\to K\overline K:\quad\ & S= B_K(m) - x_K^{(0)} f_0(m)e^{i\phi_K};
& \quad\ D=x_K^{(2)} f_2(m)\\% \nonumber \\
\pi\pi\to \eta\eta:\quad\ & S= B_\eta(m)+x_\eta^{(0)} f_0(m)e^{i\phi_\eta};
& \quad\ D=x_\eta^{(2)} f_2(m)
\end{array}
\end{equation}
where $B_i(m)$ denotes the slowly varying
background amplitudes with $\re{B_i}<0$ and
$f_\ell(m)$ the Breit-Wigner amplitudes for spin $\ell$
with $f_\ell(m_{res})=i$
and elasticities $x_i^{(\ell)}$; $\phi_i$ is an additional small
phase due to background.
Then, despite the rather large mass difference between
$\epsilon(1420)$ and $G(1590)$ of 170 MeV, we can
consider both states as representing the same resonance interfering with
the broad background and therefore suggest
\begin{equation}
G(1590)\to f_0(1500). \label{Gf0}
\end{equation}
The $\eta\eta$ experiment has been repeated at higher beam energy which
allows the study of higher mass states. In the lower mass region the
previous results are essentially recovered \cite{IILL}.
The $\eta\eta'$ mass spectrum \cite{IIL2}
shows a threshold
enhancement around 1650 MeV which the authors interpret as another
signal from G(1590).
\subsubsection{$p\overline{p}$ annihilation and the $f_0(1370)$
and $f_0(1500)$ states}
The Crystal Barrel (CB) Collaboration at the LEAR facility at CERN
has measured the $p\overline{p}$ annihilation reaction at rest. In this case
the initial $p\overline{p}$
system is in either one of the three $J^{PC}$ states
\begin{equation}
^{ 1}S_{ 0} \ ( 0^{ - +} ), \quad
^{ 3}P_{ 1} \ ( 1^{ + +} ), \quad
^{ 3}P_{ 2} \ ( 2^{ + +} )\quad \textrm{with}\quad
I^{ G} \ = \ 1^{ -} \ \textrm{or} \ 0^{ +}.
\label{ppbarqz}
\end{equation}
Another experiment has been carried out at the Fermilab antiproton
accumulator (E-760) at the $cms$ energies $\sqrt{s}$ of 3.0 and 3.5 GeV.
The following final states have been investigated
(with $I^{ G}$ in brackets assuming isospin conservation)
\begin{gather}
(a)\ \pi^0 \pi^0 \pi^0\ (1^-)\qquad (b)\ \pi^0 \pi^0 \eta\ (0^+)\qquad
(c)\ \eta \eta \pi^0\ (1^-) \nonumber\\
(d)\ \eta \eta' \pi^0\ (1^-)\qquad
(e)\ \eta \eta \eta\ (0^+)
\label{ppbarfs}
\end{gather}
Reaction (a) has been studied by CB with the very high
statistics of 712 000 events \cite{Ams}.
The Dalitz plot shows clearly the narrow band of low density (region A)
corresponding to the $f_0(980)$ as also seen in the elastic $\pi\pi$
cross section. In the projection to the $\pi^0\pi^0$ mass the broad
structureless bump peaking near 750 MeV and the sharper peak due to the
$f_2(1270)$ are seen. The new feature is the peak at around 1500 MeV
which corresponds to a band of about constant density in the Dalitz plot;
therefore its quantum numbers are determined to be $J^{PC}=0^{++}$ and
the state is now called $f_0(1500)$.
A peak of similar position and width is seen at the higher energies
of E-760 \cite{Armstrong1} together with an increased $f_2(1270)$
signal and a decreased
low mass bump. It is therefore likely that the same state $f_0(1500)$
is observed here although the authors consider it as an $f_2(1520)$
state. This conflict could be resolved by means of a phase shift analysis. A
weak signal of the same state is seen in reaction (b) at the higher
energies.
The state $f_0(1500)$ is also identified
by CB in reaction (c) where it appears
as clear peak in the $\eta\eta$ mass spectrum and band in the Dalitz plot
\cite{Ams4,Ams3}. A significant signal is again seen at the higher
$p\overline p$ energies \cite{Armstrong2}.
Furthermore, Amsler et al. relate the threshold enhancement seen in the
reaction (d) to the $f_0(1500)$ \cite{Ams5}.
It is suggestive to identify this state with the resonance discussed earlier
in this section,
for which the phase variation
has also been observed directly,
and to consider the $f_0(1500)$ as a genuine Breit-Wigner resonance.
In the region between the $f_0(980)$ and $f_0(1500)$ Amsler et al. claim the
existence of a further scalar state, the $f_0(1370)$. In reaction (a) it is
required by the fit as background under the $f_2(1270)$ very much as in
$\pi\pi$ scattering. In reaction (c) it actually appears as a clear bump
with a rather broad width of $\Gamma=380$ MeV. On the other hand this bump has
disappeared entirely at the higher $cms$ energies \cite{Armstrong2},
whereas the $f_0(1500)$ stays. This
indicates a different intrinsic structure of both states; the disappearance
of the $f_0(1370)$ at higher energies
is reminiscent of the disappearance observed by GAMS of the peak at 700
MeV at production with large $t$ in comparison with the $f_0(980)$
(see Sect. \ref{spectr2}).
In the $p\overline p$ reaction a direct phase shift analysis as in the $\pi\pi$
scattering processes
is not possible. The amplitudes for the different initial states
(\ref{ppbarqz}) are constructed with the angular distributions specified in
\cite{Zemach} and an ansatz for the $\pi\pi$ amplitudes within
the framework of an isobar
model. Unitarity constraints cannot be strictly enforced here as in the
case of two-body
$\pi\pi$ scattering processes.
The evidence for the $f_0(1370)$ is based on the fits of this model to the
Dalitz plot density.
In the fit by BSZ \cite{bsz} the CB data have been described
with inclusion of the
$f_0(1370)$. Their fit predicts a rapid decrease of the phase
in the channel
$\pi\pi\to K\overline K$ near 1200 MeV; this variation is consistent with the
BNL data \cite{bnl} within their large errors but not
with the slowly varying phases determined by Cohen et al. \cite{argonne}
with smaller errors.
Furthermore a small dip is expected for the S-wave magnitude $|S|$
near the top, which is observed neither in the GAMS \cite{GAMS} nor in the
BNL \cite{gunter} experiments on the $\pi^0\pi^0$ final state.
For the moment, we do not accept the $f_0(1370)$ effect as a genuine
Breit-Wigner resonance. It appears to us that the Dalitz plot analysis of the
$p\overline p$ data -- although some phase sensitivity is given -- is less
selective than the phase shift analysis of two-body processes.
The $\pi\pi$, $K\overline K$ and $\eta\eta$ data discussed in the previous
subsections (Figs. \ref{gbfig3},\ref{gbfig4}) speak against a resonance
interpretation of the peak at 1370 MeV.
\section{The $J^{PC}=0^{++}$ nonet of lowest mass}
\label{sectnonet}
After the reexamination of the evidence for scalar states in the region up to
about 1600 MeV we are left with the $f_0(980)$ and $f_0(1500)$ resonances
where we have clear evidence for
a peak and for the phase variation
associated with a Breit-Wigner resonance.
The identification of states in this mass region is difficult because of
their interference with the large background.
As explained in the previous section, we do not
consider the $f_0(1370)$ signal as evidence for a Breit-Wigner resonance
in between the two $f_0$ states, but rather as the reflection of a yet
broader object or the ``background''.
As members of the scalar nonet we consider then the two $f_0$ states
besides the well known $a_0(980)$ and
the $K_0^*(1430)$. We will now have to clarify
whether this assignment provides a consistent picture for the
various observations
at a given singlet-octet mixing angle to be determined.
Such observations, together with further consequences of our hypotheses,
will be discussed next.
\subsection{Properties of $f_{0} (980)$ and
$f_{0} (1500)$ from $J/\psi \ \rightarrow \ V \ f$ decays}
There are three (primary) mechanisms -- all without
full analytic understanding --
which contribute to purely hadronic
decays of $J/\psi$ into noncharmed final states:
\begin{enumerate}
\item $c \ \overline{c}$ annihilation into three gluons;
\item $c \ \overline{c}$ mixing with noncharmed virtual vector mesons
($\omega_{V} , \ \varphi_{V} , \ 3g_{V}$),\\
here $3g_{V}$ denotes a three gauge boson ``state''
with $J^{\,PC} \ = \ 1^{--}$;
\item $c \ \overline{c}$ annihilation into a virtual photon.
\end{enumerate}
\noindent
In each channel above the hadronization into a given
exclusive, hadronic final state has to be included.
We will {\it assume} that the mechanism 1. above is the dominant one
for the decay modes listed in Table \ref{eq:52}.
\begin{table}[ht]
\[
\begin{array}{rlcl}
\hline
\mbox{decay}&\mbox{modes} & \mbox{symbol}&
\mbox{branching ratios} \ \times \ 10^{\ 4}
\\ \hline
J/\psi \ \rightarrow & \varphi \ \eta^{\ \prime} \ (958) &
X_\varphi^{(-)}& 3.3 \ \ \pm \ 0.4 \\
& \varphi \ f_{ 0} \ (980) & X_\varphi^{(+)} & 3.2 \ \ \pm \ 0.9 \\
& \omega \ \eta^{\ \prime} \ (958) & X_\omega^{(-)} & 1.67 \ \pm \ 0.25 \\
& \omega \ f_{ 0} \ (980) & X_\omega^{(+)} &
1.41 \ \pm \ 0.27 \ \pm \ 0.47
\\ \hline
\end{array} \]
\caption{Branching ratios $X_V^{(\pm)}$
for decays of $J/\psi $ into vector mesons $V$ and scalar (+) or pseudoscalar
($-$) particles according to
the PDG \protect\cite{PDG} }
\label{eq:52}
\end{table}
\noindent
This Table involves only the pseudoscalar-scalar associated pair
($ \eta^{\ \prime} \ (958) \ , \ f_{0} \ (980)$), i.e. only $f_{0} \ (980)$ from the
scalar nonet, whereas
no information is extracted at present for the associated decay modes
into $\varphi \ f_{ 0} \ (1500)$ and $\omega \ f_{0} \ (1500)$
by the PDG \cite{PDG}.
A word of caution on the list in Table \ref{eq:52} is in order. The data
for $\omega \ f_{0} \ (980)$ are based upon a single experiment
(DM2 \cite{Augustin5pi}) and therein
essentially on a single data point.
The result is supported however by
the Mark II experiment \cite{Gidal} in which the recoil spectrum against
the $f_{0} \ (980)$ is measured.
The ratio of the $\omega$ and $\varphi$ peaks is
consistent with the ratio following from Table \ref{eq:52},
although there is some uncertainty
about the background under the $\varphi$ meson.
With this remark in mind we have
a closer look at Table \ref{eq:52}.
The entries for the decays into the scalar and pseudoscalar particles
show an indicative {\it pattern} \footnote{We are indebted
to C. Greub for pointing out that the relevant decay modes of $J/\psi$ comprise
both $\varphi$ {\it and} $\omega$.}:
\begin{equation}
\label{eq:54}
X^{\ (+)}_{\ V} \ \approx \ X^{\ (-)}_{\ V} \ \rightarrow \ X_{\ V}
\hspace*{0.3cm} ; \hspace*{0.6cm}
X_{\ \varphi} \ \approx \ 2 \ X_{\ \omega},
\vspace*{0.5cm}
\end{equation}
\noindent
i.e. the branching fractions into the
$\eta^{\ \prime} \ (958)$ and the $ f_{0} \ (980)$ are very
similar, which suggests a similar quark composition.\footnote{We
neglect here the phase space effects of $\lesssim 15\%$ whereby it is
assumed that for the momenta $p\sim 1$ GeV the threshold behaviour of the
P-wave is reduced to that of the S-wave by formfactors.}
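The pattern (\ref{eq:54}) can be confronted directly with the central values of Table \ref{eq:52}; the short check below uses only those numbers (variable names are ours).

```python
# Check of the pattern X_phi ~ 2 X_omega against the branching ratios of
# Table \ref{eq:52} (central values, units of 10^-4).
X_phi_ps, X_phi_s = 3.3, 3.2     # phi eta'(958), phi f_0(980)
X_om_ps,  X_om_s  = 1.67, 1.41   # omega eta'(958), omega f_0(980)

print(X_phi_ps / X_om_ps)   # ~2.0 for the pseudoscalar pair
print(X_phi_s / X_om_s)     # ~2.3 for the scalar pair
```

Both ratios are consistent with the factor 2 of (\ref{eq:54}) within the quoted errors.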
We thus decompose both states $ \eta^{\ \prime} \ (958)$ and $f_{0} \ (980)$
according to their respective strange (s) and nonstrange (ns)
$q \overline{q}$ composition,
neglecting their small mass difference,
\begin{equation}
\label{eq:55}
\begin{array}{l}
\begin{array}{lll}
\eta^{\ \prime} \ (958) \ & \sim &
\ c^{\ (-)}_{\ ns} \ u \overline{u} \ +
\ c^{\ (-)}_{\ ns} \ d \overline{d} \ +
\ c^{\ (-)}_{\ s} \ s \overline{s}
\vspace*{0.3cm} \\
f_{ 0} \ (980) \ & \sim &
\ c^{\ (+)}_{\ ns} \ u \overline{u} \ +
\ c^{\ (+)}_{\ ns} \ d \overline{d} \ +
\ c^{\ (+)}_{\ s} \ s \overline{s}
\end{array}
\end{array}
\end{equation}
with normalization
$ 2 \ | \ c_{\ ns}^{\ (\pm)} \ |^{\ 2}
\ + \ | \ c_{\ s}^{\ (\pm)} \ |^{\ 2} \ = \ 1.$
We retain {\it only}
the approximate relations in Eq. (\ref{eq:54}) and,
according to the mechanism 1.
in the above list, we infer for the $f_{0} \ (980)$
\begin{equation}
\label{eq:56}
\begin{array}{lll}
X_{\ \omega} \ \simeq \ 2 \ | \ c_{\ ns} \ |^{\ 2}
\hspace*{0.3cm} & ; & \hspace*{0.3cm}
X_{\ \varphi} \ \simeq \ | \ c_{\ s} \ |^{\ 2}
\vspace*{0.3cm} \\
c_{\ ns}\ = \ c^{\ (+)}_{\ ns} \ = \ c^{\ (-)}_{\ ns}
\hspace*{0.3cm} & ; & \hspace*{0.3cm}
c_{\ s}\ = \ c^{\ (+)}_{\ s} \ = \ c^{\ (-)}_{\ s} .
\end{array}
\end{equation}
The second equation in (\ref{eq:54}) is satisfied if we choose
$c_s=2c_{ns}$. Then we find for the vector
$\vec{c} \ = ( \ c_{\ ns} \ , \ c_{\ ns} \ , \ c_{s} \ )$%
\footnote{The ``mnemonic'' approximate form of $\vec{c}$
is due to H. Fritzsch.}
in case of
$f_{ 0} \ (980)$
\begin{equation}
f_{ 0} \ (980):\qquad \vec{c}
\ = \ \frac{1}{\sqrt{\ 6}} \ ( \ 1 \ , \ 1 \ , \ 2 \ )
\label{f0low}
\end{equation}
and in case of $f_{ 0} \ (1500)$ accordingly the orthogonal composition
\begin{equation}
\label{eq:60}
f_{ 0} \ (1500): \qquad \vec{c}^{\ '} =
\ \frac{1}{\sqrt{\ 3}} \ ( \ 1 \ , \ 1 \ , \ -1 \ ).
\end{equation}
\noindent
These derivations
reveal
-- within the approximations adopted -- the pair
$ \eta^{\ \prime} \ (958)$ and $f_{0} \ (980)$ as a {\it genuine}
parity doublet.
Thus the pairs $\eta,\ \eta'$ and $f_{0}\ (980),\ f_{ 0} \ (1500)$ are related
and governed approximately by the same singlet-octet mixing angle
\begin{equation}
\label{eq:58}
\begin{array}{l}
\Theta \ \approx \ \arcsin \ (1/3) \ = \ 19.47^{\rm o}
\end{array}
\end{equation}
with respect to the vectors
$\vec{e}_0= \frac{1}{\sqrt{\ 3}} \ ( \ 1 \ , \ 1 \ , \ 1 \ )$ and
$\vec{e}_8= \frac{1}{\sqrt{\ 6}} \ ( \ 1 \ , \ 1 \ , \ -2 \ )$
\begin{equation}
\label{eq:57}
\vec{c} \ = \ \vec{e}_0 \ \cos \ \Theta
\ - \ \vec{e}_8 \sin \ \Theta.
\end{equation}
\noindent
There is one difference though in the mass patterns of
the two octets in that the $I=0$ state
closer to the octet is the lighter one in the pseudoscalar case ($\eta$) but
the heavier one in the scalar case ($f_{0} \ (1500)$); then we adopt the
following correspondence in the quark compositions (\ref{f0low}) and
(\ref{eq:60})
\begin{equation}
\eta\quad \leftrightarrow \quad f_{0} (1500) \qquad \mbox{and}
\qquad \eta'\quad \leftrightarrow \quad f_{0} (980).
\label{f0eta}
\end{equation}
With this flavor composition of the $ f_{0}(1500)$ we also predict
the ratio of decay widths
\begin{equation}
R\ =\ \frac{ B(J/\psi \ \rightarrow \ \varphi \ f_{0} (1500))}{
B(J/\psi \ \rightarrow \ \omega \ f_{0} (1500))}
\ = \ \frac{1}{2}
\label{r1500phi}
\end{equation}
which is inverse to the corresponding ratio for $f_0(980)$.
A measurement of this ratio would be an interesting test of our hypotheses.
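The decomposition (\ref{eq:57}) and the prediction (\ref{r1500phi}) involve only elementary vector algebra and can be verified numerically; the sketch below uses the flavor vectors given in the text.

```python
import numpy as np

# Check of the singlet-octet decomposition (\ref{eq:57}) and of the
# ratio R = 1/2 in (\ref{r1500phi}) from the flavor vectors of the text.
c  = np.array([1.0, 1.0,  2.0]) / np.sqrt(6.0)   # f_0(980), Eq. (\ref{f0low})
cp = np.array([1.0, 1.0, -1.0]) / np.sqrt(3.0)   # f_0(1500), Eq. (\ref{eq:60})
e0 = np.array([1.0, 1.0,  1.0]) / np.sqrt(3.0)   # singlet direction
e8 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6.0)   # octet direction

theta = np.arcsin(1.0 / 3.0)                     # mixing angle, ~19.47 deg
assert np.allclose(c, e0 * np.cos(theta) - e8 * np.sin(theta))
assert abs(np.dot(c, cp)) < 1e-12                # the two states are orthogonal

# J/psi -> V f_0(1500): X_phi ~ |c'_s|^2, X_omega ~ 2 |c'_ns|^2
R = cp[2]**2 / (2.0 * cp[0]**2)
print(np.degrees(theta), R)                      # 19.47..., 0.5
```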
\subsection{Mass pattern and Gell-Mann-Okubo formula}
We come back to the two extremal possibilities for mixing discussed
in Sect. \ref{Sigma} within the context of the $\sigma$ model:
\begin{description}
\item I) quenched singlet octet mixing,
\item II) strict validity of the OZI rule.
\end{description}
We now conclude that the $q \overline{q}$ scalar nonet
is nearer to case (I).
Furthermore, we suggest a definite
deviation from I)
parametrized by the {\it approximate} mixing angle (\ref{eq:58}), the same as
found for the pseudoscalar nonet.
This conclusion identifies the scalar
nonet as a second one -- besides its pseudoscalar partner -- showing a large
violation of the OZI rule.
Next we compare these results with the
expected mass pattern
as discussed in Sect. \ref{Sigma}.
In case of quenched singlet octet mixing (case I)
one predicts
from the Gell-Mann-Okubo mass formula for the members of an octet
the heavier scalar $I=0$ member to appear
in the mass range 1550-1600 MeV, if one takes
the $a_0$ and $K_0^*$ masses as input.
The deviation from the observed mass of the $f_0(1500)$ is
7-14\% in the masses squared,
which we consider tolerable; the deviation is attributed to
effects of $O(m_s^2)$.
Then the $f_0(980)$ is close to the
singlet member of the nonet.
On the other hand, for a splitting according to the OZI rule the isoscalar
$s\bar s$ state would be expected at the mass of $\sim$1770 MeV.
In this case the $f_0(980)$ would be a purely non-strange state which is
hardly consistent with the large decay rate $J/\psi\ \rightarrow \ \varphi
f_0(980)$ in Table \ref{eq:52}.
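Both mass estimates quoted above follow from the quadratic mass formulas with the $a_0$ and $K_0^*$ masses as input; a minimal numerical sketch (our variable names, input masses as quoted in the text):

```python
import math

# Gell-Mann-Okubo estimate for the heavier isoscalar of the scalar octet
# (case I) and the ideal-mixing s sbar mass (OZI alternative), both in the
# quadratic mass formula, with m(a_0) = 980 MeV and m(K_0^*) = 1430 MeV.
m_a0, m_K = 980.0, 1430.0

m8   = math.sqrt((4 * m_K**2 - m_a0**2) / 3)   # octet isoscalar, ~1551 MeV
m_ss = math.sqrt(2 * m_K**2 - m_a0**2)         # pure s sbar state, ~1770 MeV

dev = (m8**2 - 1500.0**2) / 1500.0**2          # deviation from f_0(1500), ~7%
print(m8, m_ss, dev)
```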
\subsection{Decays into $\pi\pi$, $K\overline K$, $\eta\eta$ and
$\eta\eta'$}
These 2-body decays are
again sensitive to the flavor composition of the $J^{PC}=0^{++}$
particles.
For further analysis we consider the decay of a $q\overline q$
state with arbitrary flavor composition where we define the mixing angle
$\phi$ now with respect to the strange non-strange directions
\begin{equation}
q\overline q\ =\ n\overline n\ \cos \phi\ + \ s\overline s\ \sin \phi;
\qquad n\overline n\ =\ (u\overline u\ + \ d\overline d )/\sqrt{2}.
\label{resmix}
\end{equation}
The decay amplitudes are calculated assuming flavor symmetry, but the
relative amplitude $S$
for creating an $s \overline{s}$ pair relative to a
nonstrange
$u \overline{u}$ or $d \overline{d}$ pair is allowed
to deviate from the symmetric value. We assume $S$ to be real with
$0 \ \leq \ S \ \leq \ 1$ and $S \ \sim \ \frac{1}{2}$; it may also depend
on the mass of the decaying system, with a restoration of symmetry at
high energies. For a mixed state as in (\ref{resmix})
this ansatz leads to
the decay amplitudes in
Table \ref{flavor} which agree with the results in \cite{as1}. Here we take
the decomposition of $\eta$ and $\eta'$ as in (\ref{f0low}) and (\ref{eq:60})
with (\ref{f0eta}).%
\footnote{For an analysis with $S=1$, see also \cite{amsclo}.}
We also give the prediction for the two $f_0$ states with the same mixing as
in the pseudoscalar sector as discussed above
and for the glueball taken as colour singlet
state.\footnote{Contrary to Ref. \cite{as1} we assume that the creation of
quarks from the initial gluonic system is flavor symmetric and that the
$s$-quark suppression occurs only in the secondary decay by creation of a soft
$q\overline q$ pair.} We now examine how the predictions from our
hypotheses on the flavor decomposition in Table \ref{flavor} compare with
experiment.\\
\begin{table}
\begin{tabular}{ccccc}
\hline
channel & $q\overline q$ decay ($\phi$) & $f_0 (980)$ & $f_0 (1500)$
& glueball\\
\hline
$\pi^0\pi^0$ & $1\to \cos\ \phi /\sqrt{2}$ &$ 1 \to 1/\sqrt{6}$ &
$ 1 \to 1/\sqrt{3}$ &$ 1 \to 1/\sqrt{3} $\\
$\pi^+\pi^-$ & $1$ &$ 1 $ & $ 1$ & $ 1 $\\
$K^+K^-$ &
$(\protect\sqrt{2}\tan \phi+S)/2$ &$ (2+S)/2 $ & $ (-1+S)/2$ & $ (1+S)/2 $\\
$K^0\protect\overline{K^0}$ &
$(\protect\sqrt{2}\tan \phi+S)/2$ &$ (2+S)/2 $ & $ (-1+S)/2$ & $ (1+S)/2$\\
$\eta\eta$ &
$(2+\sqrt{2}S\protect\tan \phi)/3$ &$ 2(1+S)/3 $ & $
(2-S)/3$ & $ (2+S)/3 $\\
$\eta\eta'$ &
$(\sqrt{2}-2 S\protect\tan \phi)/3$ &$ \sqrt{2}(1-2S)/3 $ & $
\sqrt{2}(1+S)/3$ & $ \sqrt{2}(1-S)/3$\\
$\eta'\eta'$ &
$(1+2\sqrt{2}S\protect\tan \phi)/3$ &$ (1+4S)/3 $ & $
(1-2S)/3$ & $ (1+2S)/3 $\\
\hline
\end{tabular}
\caption{Amplitudes for the decays into two pseudoscalar mesons
of states with flavor mixing as in
(\protect\ref{resmix}),
using for the $ f_0 (980)$, the $ f_0 (1500)$
and the glueball the mixing angles
$\protect\sin\phi=\protect\sqrt{2/3}$, $\protect\sin\phi=-
\protect\sqrt{1/3}$
and $\protect\sin\phi=\protect\sqrt{1/3}$ respectively.
The $\eta - \eta'$ mixing is according to
Eqs. (\protect\ref{f0low}) and (\protect\ref{eq:60}),
S denotes the relative
$ s\protect\overline{s}$ amplitude. Normalization is to $\pi^+\pi^-$,
the first row also indicates after ($\to$) the relative
weights of $\pi\pi$ decays.
For identical particles the width has to be multiplied
by $1/2$.}
\label{flavor}
\end{table}
\noindent {\it Couplings of the $f_0(980)$}\\
\noindent
This state has a mass near the $K\overline K$ threshold. So the directly
measurable quantities are the reduced widths
$ \Gamma_{red}$ into $\pi\pi$ and
$K\overline K$ for which we predict according to Table \ref{flavor}
\begin{equation}
\label{f0decay}
\begin{array}{l}
R_0
\ = \ {\displaystyle
\frac{ \Gamma_{red} (f_{0}(980) \ \to \ K\overline K)}
{ \Gamma_{red} (f_{0}(980) \ \to \ \pi\pi)}
\ = \ \frac{(2+S)^2}{3} }
\end{array}
\end{equation}
The experimental determination from elastic and inelastic $\pi\pi$
scattering is difficult because of the unitarity constraints near the
$K\overline K$ threshold. This can be avoided in a measurement of reaction
(\ref{pinucleon}a) at larger momentum transfer $t$ where the $f_0(980)$ appears
as a rather symmetric peak without much coherent background
as already emphasized. Binnie et al.
\cite{binnie}
used their data on the $\pi^+\pi^-$, $\pi^0\pi^0$ and $K^+K^-$ channels
with $|t|\gtrsim 0.3$ GeV to measure the ratio
(\ref{f0decay}) directly by fitting their distributions to the Breit-Wigner
formula
\begin{equation}
\sigma_{\pi,K}\propto \left|
\frac{\Gamma_{\pi,K}^{1/2}}{m_0-m-i(\Gamma_\pi+\Gamma_K)/2}\right|^2,
\label{bwres2}
\end{equation}
where $\sigma_{\pi,K}$ denote the cross sections in the $\pi\pi$ and
$K\overline K$ channels, respectively. Furthermore,
$\Gamma_\pi=\gamma_{\pi}p_\pi$ and
$\Gamma_K=g_{K}p_{K^+}+g_{K}p_{K^0}$,
where $p_h$ are the momenta of $h$ in the $hh\ cms$. The reduced widths
are given by $\Gamma_{red,\pi}=\gamma_\pi$ and
$\Gamma_{red,K}=2 g_K$. We enter the result by Binnie et al. into Table
\ref{tabR0}. We also show the result of the fits by
Morgan and Pennington (MP) \cite{mp}
to $\pi\pi$ and $K\overline K$ final states from various reactions
taking into account the coherent background
and unitarity constraints.
As we are close to the $K\overline K$ threshold the parameter $S$ may not be
well defined; this leaves a range of predictions, also presented in Table
\ref{tabR0}.
We see that the determination by Binnie et al. is comparable to the
theoretical expectation whereas the one by MP
(given without error) comes out a bit smaller.
\begin{table}[ht]
\[
\begin{array}{lllccc}
\hline
& \ \mbox{exp. results} & &S = 0 & S = 0.5 & S = 1.0
\\ \hline
%
R_{ 0} & \ 1.9 \pm \ 0.5 &\textrm{Binnie}\
\protect\cite{binnie} & \ 1.3 & \ 2.1 & \ 3.0
\\
& \ \simeq 0.85 & \textrm{MP} \ \protect\cite{mp} & & &
\\ \hline
\end{array} \]
\caption{The ratio $R_0$ defined in (\ref{f0decay}) as determined from
experiment and predicted for different strange quark amplitudes $S$.}
\label{tabR0}
\end{table}
We want to add that the determination of this ratio $R_0$ requires data on the
$K\overline K$ process. The sensitivity to $g_K$ in the
denominator of (\ref{bwres2}) is very weak; therefore fits to the
$\pi\pi$ spectra alone yield conflicting results.\\
\noindent{\it Couplings of the $f_0(1500)$}\\
\noindent
With respect to the branching fractions of the $f_{0}(1500)$ into
two pseudoscalars we scrutinize the phase space corrected reduced
rate ratios deduced by C. Amsler \cite{amsams}
\begin{equation}
\label{eq:62b}
R_{1} \ = \
\frac{ \Gamma_{\ red} \ ( \ \eta \ \eta \ )}
{ \Gamma_{\ red} \ ( \ \pi \ \pi \ )},
\quad
R_{2} \ = \
\frac{ \Gamma_{\ red} \ ( \ \eta \ \eta^{\ '} \ )}
{ \Gamma_{\ red} \ ( \ \pi \ \pi \ )},
\quad
R_{3} \ = \
\frac{ \Gamma_{\ red} \ ( \ K \ \overline{K} \ )}
{ \Gamma_{\ red} \ ( \ \pi \ \pi \ )}
\end{equation}
\noindent where all charge modes of $\pi\pi$ and $K\overline K$ are counted.
The experimental determinations \cite{amsams} are presented in Table
\ref{R123}.
Using our amplitudes for the decays of the $f_0(1500)$ in Table \ref{flavor}
we predict for these ratios
\begin{equation}
\label{eq:62c}
\begin{array}{l}
R_{1} = \ \frac{4}{27} \ ( \ 1 \ - \ \frac{1}{2} \ S \ )^{2}
\hspace*{0.1cm} , \hspace*{0.1cm}
R_{2} = \ \frac{4}{27} \ ( \ 1 \ + \ S \ )^{2}
\hspace*{0.1cm} , \hspace*{0.1cm}
R_{3} = \ \frac{1}{3} \ ( \ 1 \ - \ S \ )^{2}
\end{array}
\end{equation}
\noindent
A $\chi^{2}$ fit for $S$ using the ``data'' in Table \ref{R123} yields
\begin{equation}
\label{eq:62d}
S \ = \ 0.352^{\ +0.131}_{\ -0.109}
\end{equation}
with a satisfactory $\chi^{2} = 0.887$ per degree of freedom (n.d.f. = 2).
The range of values for $S$ obtained in the fit, as well as
the reasonable agreement with the derived rates in Ref. \cite{amsams},
is compatible
with the (approximately) identical mixing of {\it both} the
scalar and pseudoscalar nonets according to our assignments.
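The fit can be reproduced numerically. The following minimal Python sketch (a cross-check of the quoted numbers, not part of the original analysis) performs a grid search of the strange quark amplitude $S$ over the ratios of Eq. (\ref{eq:62c}) against the values in Table \ref{R123}:

```python
# Cross-check of the chi^2 fit: grid search over the strange quark
# amplitude S, comparing the predictions of Eq. (eq:62c) with the
# experimental ratios R1, R2, R3 and their errors from Table "R123".
data = [(0.195, 0.075), (0.320, 0.114), (0.138, 0.038)]  # (R_i, sigma_i)

def predictions(S):
    # R1, R2, R3 as functions of S, from Eq. (eq:62c)
    return [(4/27) * (1 - S/2)**2, (4/27) * (1 + S)**2, (1/3) * (1 - S)**2]

def chi2(S):
    return sum(((r - p) / s)**2 for (r, s), p in zip(data, predictions(S)))

# grid search over S in [0, 1]
best_S = min((i / 10000 for i in range(10001)), key=chi2)
chi2_per_ndf = chi2(best_S) / 2  # 3 data points minus 1 fitted parameter

print(best_S, chi2_per_ndf)  # close to S = 0.352 and chi^2/n.d.f. = 0.887
```

Note that 0.887 is recovered only as $\chi^{2}$ per degree of freedom; the total $\chi^{2}$ at the minimum is about 1.77.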
\begin{table}[ht]
\[ \begin{array}{cccc}
\hline
\mbox{ratio} & \mbox{data} & S \ = \ 0.352 & S \ = \ 0.5
\\ \hline
R_{1} & \ 0.195 \pm \ 0.075 & \ 0.101 & \ 0.083
\\ & & & \vspace*{-0.4cm} \\
R_{2} & \ 0.320 \pm \ 0.114 & \ 0.271 & \ 0.333
\\ & & & \vspace*{-0.4cm} \\
R_{3} & \ 0.138 \pm \ 0.038 & \ 0.140 & \ 0.083
\\ \hline
\end{array}\]
\caption{Reduced rate ratios $R_i$ as defined in (\protect\ref{eq:62b}):
experimental determinations by Amsler \protect\cite{amsams}
and predictions for different strange quark amplitudes $S$ according to
(\protect\ref{eq:62c}).}
\label{R123}
\end{table}
In particular we should note the large rate $R_2$ for
the $\eta \eta^{'}$ decay mode, which
strengthens the octet assignment of the $f_0(1500)$ (for a flavor singlet
-- the glueball in Table \ref{flavor} --
this decay mode would disappear in the flavor symmetry limit $S=1$).
On the other hand, the
ratio $R_3$ is rather small, which contradicts the
pure $s\overline s$
assignment sought in some classification schemes. The smallness of $R_3$
is now naturally explained by the negative
interference between the nonstrange and strange components of the $f_0(1500)$
in Table \ref{flavor} which would actually be fully destructive
in the $SU(3)_{fl}$ symmetry limit.\\
\subsection{Decays into two photons}
In this paragraph we focus first on the $f_0(980)$,
which according to our hypotheses is (mainly) a $q \overline{q}$ resonance.
We again distinguish the $q\overline{q}$ compositions of $f_0$
according to the two cases discussed before,
I) -- dominantly singlet -- and II) -- nonstrange --
and include the third alternative -- $f_0 \ \sim \ \overline{s} s$ --
denoted T), historically in the foreground and recently proposed
again by T\"{o}rnqvist \cite{Torn}. For the decay into two photons
we compare with the neutral component of the isotriplet $a_0(980)$.
Disregarding sizable glueball admixtures to the $f_0$, the decay amplitude
into two photons becomes proportional to the characteristic factor
well known from the corresponding decays of $\pi^{0}, \ \eta ,\ \eta^{\ '}$,
which involves the quark charges in units of the elementary charge
and is proportional to the number of colors; for a state with quark
composition $ (c_u,\ c_d,\ c_s)$ one obtains
\begin{equation}
\label{eq:40}
S_{\ \gamma \gamma} \ =
\ 3 \ \sum_{\ q = u,d,s} \ c_q \ Q_{\ q}^{\ 2}.
\end{equation}
\noindent
Then we obtain for the ratio of the two photon decay widths of $f_0$ and $a_0$
with (practically) the same phase space
\begin{equation}
\label{eq:41}
R_{\gamma\gamma} \ = \
\frac{ \Gamma \ ( f_0\ (980) \ \rightarrow \ \gamma \gamma \ )}
{ \Gamma \ ( a_{ 0} \ (980) \ \rightarrow \ \gamma \gamma \ )}\ ;
\qquad
R_{\gamma\gamma} \ = \ 2 \ S_{\ \gamma \gamma}^{\ 2}.
\end{equation}
The predictions for the various mixing schemes are given in Table
\ref{eq:39}.
\begin{table}[ht]
\[
\begin{array}{lllccccl}
\hline \vspace*{0.1cm}
\mbox{case}&& & c_{\ u} & c_{\ d} & c_{\ s} &
R_{\gamma\gamma} & = \ 2 \ S_{\ \gamma \gamma}^{\ 2}
\\ \hline \vspace*{0.1cm}
f_0(980)& (\mbox{Ia}) & \mbox{no mixing}
& \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}}
& \frac{24}{9} & = \ 2.67 \\
& (\mbox{Ib}) & \eta-\eta'\ \mbox{mixing}
& \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{2}{\sqrt{6}}
& \frac{49}{27} &= \ 1.815 \\
& (\mbox{II}) & \mbox{OZI-mixing}
& \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0
& \frac{25}{9} & = \ 2.78 \\
& (\mbox{T}) & \mbox{pure}\ s\overline s
& 0 & 0 & 1
& \frac{2}{9} &= \ 0.22 \\
a_0(980) & &
& \frac{1}{\sqrt{2}} & - \frac{1}{\sqrt{2}} & 0
& 1 & \vspace*{0.1cm} \\
\hline
\end{array}
\]
\caption{Two photon branching ratio $R_{\gamma\gamma}$ defined in
Eq. \protect(\ref{eq:41}) for different
mixing schemes according to the quark composition $c_q$.}
\label{eq:39}
\end{table}
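The entries of Table \ref{eq:39} follow directly from Eq. (\ref{eq:40}). A minimal Python sketch verifying them (using the exact quark compositions $c_q$ of the table):

```python
from math import sqrt

# Cross-check of Table (eq:39): the two-photon factor of Eq. (eq:40),
# S_gg = 3 * sum_q c_q * Q_q^2, and R_gg = 2 * S_gg^2 relative to a0(980).
Q2 = {'u': 4/9, 'd': 1/9, 's': 1/9}  # squared quark charges

def s_gamma_gamma(cu, cd, cs):
    return 3 * (cu * Q2['u'] + cd * Q2['d'] + cs * Q2['s'])

compositions = {
    'Ia': (1/sqrt(3), 1/sqrt(3), 1/sqrt(3)),   # no mixing (singlet)
    'Ib': (1/sqrt(6), 1/sqrt(6), 2/sqrt(6)),   # eta-eta' mixing
    'II': (1/sqrt(2), 1/sqrt(2), 0.0),         # OZI mixing (nonstrange)
    'T':  (0.0, 0.0, 1.0),                     # pure s sbar
}
R = {case: 2 * s_gamma_gamma(*c)**2 for case, c in compositions.items()}

print(R)  # Ia: 24/9, Ib: 49/27, II: 25/9, T: 2/9, as in the table
```

The $a_0(980)$ composition $(1,-1,0)/\sqrt{2}$ gives $2 S_{\gamma\gamma}^2 = 1$, which fixes the normalization of the last row.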
\noindent
The Particle Data Group gives for the $a_0 (980)$ and the $f_0 (980)$
\begin{equation}
\label{gamma2g}
\begin{array}{lll}
\Gamma \ ( \ a_0 \ \rightarrow \ \gamma \gamma \ )
& = & \left ( \ 0.24^{\ + 0.08}_{\ - 0.07} \ \right ) \ \mbox{keV}
\ / \ B ( \ a_0 \ \rightarrow \ \eta \pi \ )
\vspace*{0.3cm} \\
\Gamma \ ( \ f_0 \ \rightarrow \ \gamma \gamma \ )
& = & ( \ 0.56 \ \pm \ 0.11 \ ) \ \mbox{keV}
\end{array}
\end{equation}
and therefore
\begin{equation}
R_{\gamma\gamma} \ = \ ( \ 2.33 \ \pm \ 0.9 \ )
\ B( a_0 \ \rightarrow \ \eta \pi \ ).
\label{eq:42}
\end{equation}
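The number in (\ref{eq:42}) follows from (\ref{gamma2g}) by standard error propagation. A short numerical sketch, in which the asymmetric error on the $a_0$ width is approximated by its upper value of 0.08 keV (an assumption made here for simplicity):

```python
from math import sqrt

# Cross-check of Eq. (eq:42): ratio of the PDG two-photon widths in
# Eq. (gamma2g), with relative errors combined in quadrature.
gamma_f0, err_f0 = 0.56, 0.11   # keV
gamma_a0, err_a0 = 0.24, 0.08   # keV (upper error used for the a0)

ratio = gamma_f0 / gamma_a0
err = ratio * sqrt((err_f0 / gamma_f0)**2 + (err_a0 / gamma_a0)**2)

print(ratio, err)  # roughly 2.33 +- 0.9, as quoted
```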
\noindent
The branching fraction $B( \ a_0 \ \rightarrow \ \eta \pi \ )$
is not determined satisfactorily because of conflicting analyses
by Bugg et al. \cite{Bug}, Corden et al. \cite{Cord}
and Defoix et al. \cite{Defoix}, but the PDG classifies the $\eta\pi$
mode as ``dominant''.
We conclude from the measurement in Eq. (\ref{eq:42})
that case (T) with pure $\overline{s} s$
composition of $f_0(980)$ is untenable. On the other hand,
it becomes obvious that a distinction
between alternatives (Ia), (Ib) and (II) by these measurements would need a
considerable increase in experimental precision.
Finally, we derive the corresponding prediction for the $f_{0}(1500)$
assuming its
$q \ \overline{q}$ composition according to Eq. (\ref{eq:60}),
identical to its $\eta$ pseudoscalar counterpart.
Concerning the deviations from the Gell-Mann-Okubo mass square formula
in Eq. (\ref{eq:21b}) we refer to the well known stability of the
corresponding relation for the pseudoscalar nonet, with similar singlet
octet mixing angle.
\vspace*{0.1cm}
We obtain for the
ratio of decay widths
into $2 \ \gamma$
\begin{equation}
\label{eq:61}
R_{\gamma\gamma}^{\ '} \ =
\frac{ \Gamma \ ( \ f_0 \ (1500) \ \rightarrow \ \gamma \gamma) }
{ \Gamma \ ( \ f_0 \ (980) \ \rightarrow \ \gamma \gamma \ )}
\end{equation}
and for the individual decay width of the $f_0\ (1500)$
the following predictions
\begin{equation}
\label{rprpr}
\begin{array}{l}
R_{\gamma\gamma}^{\ '} \ =
{\displaystyle \ \frac{32}{49} \
\left ( \frac{ m \ ( \ f_{0} \ (1500) \ )}{ m \ ( \ f_{0} \ (980) \ )}
\right )^p
}
\vspace*{0.3cm} \\
\Gamma \ ( f_0 \ (1500) \ \rightarrow \ \gamma \gamma \ )
\ \sim \ 0.3 \ (\ 0.1 \ldots 0.6\ ) \ \mbox{keV}
\end{array}
\end{equation}
In Born approximation the power in (\ref{rprpr}) would be $p=3$ and this
power seems appropriate for the light pseudoscalars. At the higher energies,
formfactor effects (typically\footnote{The
transition amplitude should contain the nonperturbative constant
$\langle 0|\overline q q | f_0\rangle=m_0^2$, then $p=-3$ follows for
dimensional reasons.
Experimental data on the $\gamma\gamma$ decays of
the tensor mesons are consistent with $p\sim -3$
assuming ideal mixing.}
$p=-3$)
become important. In (\ref{rprpr}) we give our best estimate,
the lower limit corresponds to $p=-3$,
the upper one corresponds to simple phase space with $p=1$.
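The quoted range follows from evaluating (\ref{rprpr}) at the two limiting powers. A numerical sketch, assuming nominal masses of 1500 and 980 MeV for the two resonances and $\Gamma(f_0(980) \to \gamma\gamma) = 0.56$ keV from (\ref{gamma2g}):

```python
# Numerical sketch of Eq. (rprpr): Gamma(f0(1500) -> gamma gamma) from
# R'_gg = (32/49) * (m(f0(1500)) / m(f0(980)))^p, scaled to the measured
# f0(980) width. Masses of 1500 and 980 MeV are assumed here.
gamma_f0_980 = 0.56          # keV
mass_ratio = 1500 / 980

def gamma_f0_1500(p):
    return gamma_f0_980 * (32/49) * mass_ratio**p

lower = gamma_f0_1500(-3)    # form-factor estimate, p = -3
upper = gamma_f0_1500(+1)    # simple phase space, p = +1

print(lower, upper)  # roughly 0.10 and 0.56 keV, i.e. the quoted 0.1 ... 0.6 keV
```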
The branching ratios into two photons have also been considered in the model
by Klempt et al. \cite{klempt}. Their $f_0(1500)$ with mixing angle
$\sin\phi=-0.88$ is
very close to the octet state ($\sin\phi=-0.82$), yet closer than in our
phenomenological analysis with $\sin\phi=-0.58$.
Then they obtain for the above ratio
$ R_{\gamma\gamma}^{\ '} \sim 0.086$. The results depend strongly
on the mixing angle as $ R_{\gamma\gamma}^{\ '}$ has a
nearby zero at $\sin\phi = -0.96$,
corresponding to the mixture $(1,\ 1,\ -5)/\sqrt{27}$.
For a pure octet assignment we would obtain
$\Gamma \ \sim \ 0.08 \ (0.03 \ldots 0.17)$ keV
instead of (\ref{rprpr}).
It appears possible that the $2 \ \gamma$ mode of the $f_{0} \ (1500)$
can be detected in the double bremsstrahlung reaction
$e^{\ +} \ e^{\ -} \ \rightarrow \ e^{\ +} \ e^{\ -} \ f_{0} \ (1500)$.
A first search by the ALEPH collaboration at LEP \cite{ALEPHgg} did not show
any $f_0(1500)$ signal. However, no clear signal of the $f_0(980)$ has been
observed either, although this process is well established; it appears that
the statistics are still too low. Other
decay modes of the $f_{0} \ (1500)$, such as $\eta\eta$ and $K\overline K$,
also look promising for study.
\subsection{Relative signs of decay amplitudes}
Besides the branching ratios of the states
into various channels, the relative signs of their
couplings are of decisive importance. They can also be deduced from Table
\ref{flavor}. The S-wave phases discussed in Sect. \ref{spectr} in the
mass range above 1 GeV
are determined with respect to the phase of the
leading $f_2(1270)$ resonance which
is a nearly nonstrange $q\overline q$ state. In this case
($\phi\approx 0$)
the coupling to all decay channels in Table \ref{flavor} is positive.
For the states discussed here we obtain the signs in Table \ref{signs}.
The predictions for the $f_0(1500)$ are in striking agreement with the data
on inelastic $\pi\pi$ scattering discussed in the previous section,
as can be seen from Fig. \ref{gbfig4}. The resonance loop
in the $K\overline K$ channel is oriented ``downwards'' in
opposite direction to the one in $\eta\eta$ and also opposite to the
$f_2(1270)$ resonance defined as ``upward''.
This is consistent with our assignment
$(1,1,-1)/\sqrt{3}$ in (\ref{eq:60}) for the $f_0(1500)$. It is not
consistent in particular with the expectations for a glueball which would have
positive couplings to all decay channels.
\begin{table}[ht]
\[
\begin{array}{lccc}
\hline
\mbox{decay} & \ f_0(980) & f_0(1500) & \mbox{glueball}
\\ \hline
K\overline{K} & + & - & + \\
\eta \eta & + & + & + \\
\hline
\end{array} \]
\caption{Signs of amplitudes for the decays of scalar states
into $ K\overline{K}$ and $\eta \eta$ relative to the $f_2(1270)$.}
\label{signs}
\end{table}
As for the other two states in Table \ref{signs} we have only the small window
above the $K\overline K$ threshold.
The amplitude in this region is composed of the tail of the $f_0(980)$ and the
``background'', i.e. the supposed glueball state
according to our hypothesis. We note
that in both channels the amplitude has a qualitatively similar behaviour
in accord with the expected positive signs of all components.
At present we have no quantitative model for the absolute
phase around 230$^{\rm o}$ for the superposition of these two states.
\section{The lightest glueball}
Adopting the {\em phenomenological hypotheses} a) - c)
in Sect. 1
we have exhausted in the previous analysis
all positive parity mesons
in the PDG tables below 1600 MeV with the {\em notable} exception of the scalar
$f_{ 0} (400-1200)$ and also
the $f_0(1370)$, which we did not accept as a standard
Breit-Wigner resonance.
We consider the spectrum in Fig. \ref{gbfig3} (the ``red dragon'') with the
peaks around 700 and 1300 MeV and possibly
with another one above 1500 MeV
as a reflection of a single very broad object (``background'') which interferes with
the $f_0$ resonances. In elastic $\pi\pi$ scattering this ``background''
is characterized by
a slowly moving phase which passes through $90^{\rm o}$ near 1000 MeV
if the $f_0(980)$ is subtracted (see, for example \cite{mp}).
This ``background'' with a slowly moving phase is also observed in the 1300
MeV region in the inelastic channels $\pi\pi\to \eta\eta,\ K\overline K$
as discussed above.
It is our hypothesis that this very broad object which couples to the
$\pi\pi$, $\eta\eta$ and $K\overline K$ channels is
the lightest glueball
\begin{eqnarray}
f_0(400-1200), \ f_0(1370)& \to & gb_0 (1000) \label{Gdef} \\
\Gamma[gb_0(1000)]& \sim & 500-1000\ \mbox{MeV}. \nonumber
\end{eqnarray}
The large width is suggested by the energy dependent fits in Table
\ref{tabres}. From the speed of the phase shift $\delta_0^0$ near 1000 MeV
-- $\frac{d\delta^0_0}{dm} \simeq 1.8$ GeV$^{-1}$ after the $f_0(980)$ effect
has been subtracted out as in \cite{mp} -- one finds
using Eq. (\ref{gamma}) the larger value $\Gamma(gb_0)\sim 1000$ MeV.
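The width estimate from the phase-shift speed is elementary arithmetic, assuming Eq. (\ref{gamma}) is the standard Breit-Wigner relation $d\delta/dm = 2/\Gamma$ at the resonance position (our assumption here, since the equation itself lies outside this section):

```python
# Back-of-the-envelope check: for a Breit-Wigner resonance the phase
# speed at the peak is d(delta)/dm = 2/Gamma, so Gamma = 2 / (d(delta)/dm).
speed = 1.8            # GeV^-1, speed of delta_0^0 near 1000 MeV
                       # after the f0(980) effect is subtracted (as in [mp])
gamma_gb = 2 / speed   # GeV

print(gamma_gb)  # roughly 1.1 GeV, i.e. the quoted ~1000 MeV
```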
The glueball mass (\ref{Gdef}) corresponds to alternative 1)
($m_{gb_0, \infty} \ \lesssim \ m_{ a_0} \ \sim$ 1 GeV)
as described at the beginning of Sect.
\ref{sec2}.
We do not exclude some mixing with the scalar nonet
states but it should be sufficiently small such as to preserve the main
characteristics outlined in the previous section.
In the following we will investigate whether our glueball assignment
for the states in (\ref{Gdef})
is consistent with general expectations.
\subsection{Reactions favorable for glueball production}
We first examine the processes in which particles are expected to be
produced from gluonic intermediate states.
\noindent
{\it (a)} $pp\to pp \ +\ X_{\rm central}$\\
In this reaction the double pomeron exchange mechanism should be favorable
for the production of gluonic states.
A prominent enhancement at low $\pi\pi$ energies is observed
\cite{akesson,gamspp}
and can be interpreted in terms of the elastic $\pi\pi$ phase shifts
\cite{amp,mp}.
\noindent
{\it
(b) Radiative decays of $J/\psi$}\\
For our study of scalars the most suitable final states
are those with the odd waves
forbidden. The simplest case is $J/\psi\to \gamma \pi^0\pi^0$ which has been
studied by the Crystal Ball Collaboration \cite{crballpi}. The mass
spectrum shows a prominent $f_2(1270)$ signal
but is sharply cut down for masses below 1000 MeV
and the presentation ends at $m_{\pi\pi}\sim 700$ MeV. This cutoff in the mass
spectrum is not due to the limited detector efficiency,
which is flat over the full mass region
down to $\sim$ 600 MeV and drops sharply only below this mass value \cite{ral}.
An incoherent background is fitted in \cite{crballpi} under the $f_2$ peak
which reaches the fraction 1/7.5 at the maximum of the peak. This is not
much smaller than 1/(5+1) expected for the S-wave from the counting of spin
states. No data have been presented on the azimuthal angle distribution
which would allow one to estimate the amount of S-wave.
No further hint can be obtained from
the $\pi^+\pi^-$ channel analysed by
the Mark III collaboration \cite{Mk3a} because of the larger background.
It appears that -- contrary to our expectation -- there is no low mass
enhancement around 700 MeV in this channel related to the glueball;
its production with higher mass of around 1300 MeV is not inconsistent with
data. For the moment we have no good explanation for the low mass
suppression.
\noindent
{\it (c) Transition between radially excited states}.\\
\noindent
The radially excited states $\psi'$, $Y'$ and $Y''$ can decay by two-gluon
emission into the heavy quark ground state and therefore give rise to
the production of gluonic states. The observed $\pi\pi$ mass spectra
can be described consistently using
the elastic $\pi\pi$ $S$-wave phase shifts \cite{amp} although the
calculation is not very sensitive to their detailed behaviour.
Another example of this kind is the decay of the $\pi(1300)$, presumably a
radial excitation of the stable $\pi$; its decay mode into
$(\pi\pi)_{S-wave}\pi$ is seen \cite{Aaron}.
\noindent
Finally, we comment on the different production phases of the glueball
amplitude with respect to the $f_0(980)$ discussed in Sect. \ref{spectr2}.
In most inelastic reactions the $f_0(980)$ appears as a peak above the
background (case b),
which is consistent with the phases of the decay
amplitudes of this state and the glueball being
the same, as expected from
Table \ref{flavor}. The dip occurs in elastic scattering
(case a) where a peak is not allowed as the background is already
near the unitarity limit. In two reactions (case c) the large asymmetry
in the mass spectra suggests a background out of phase by $90^{\rm o}$
with respect to the $f_0(980)$ Breit-Wigner amplitude which may be a hint to
different production phases.
\subsection{Flavor properties}
Here we list a few observations which may give a hint towards the flavor
composition along the lines discussed for the $q\overline q$ nonet.\\
\noindent {\it Glueball production in $p\overline p$ annihilation}\\
The Crystal Barrel Collaboration has observed the $f_0(1370)$
in the processes
\begin{equation}
p\overline p \ \to\ f_0(1370)\ \pi^0;\quad f_0(1370)\ \to \ K_LK_L,\
\eta\eta \label{f1300dec}
\end{equation}
where clear peaks in the respective mass spectra have been seen.
The theoretical expectation for the ratio of reduced branching ratios
assuming $f_0(1370)$ to
decay like a glueball according to Table \ref{flavor} is obtained as
\begin{equation}
R_g\ = \ \displaystyle \frac{\Gamma_{red}(f_0(1370)\to\eta\eta)}
{\Gamma_{red}(f_0(1370)\to K\overline K)}
\ = \frac{(2+S)^2}{9 (1+S)^2}.
\label{f1300r}
\end{equation}
From the results summarized by
Amsler \cite{amsams} we derive the quantity (\ref{f1300r}) after correction
for phase space and unseen $K\overline K$ decay modes
\begin{equation}
\mbox{exp. result}: \qquad\qquad R_g \ \sim \ 0.44.
\label{rgexp}
\end{equation}
This number is to be compared with the theoretical expectations for different
strange quark amplitudes $S$
\begin{equation}
\mbox{theor. result}:\qquad S=(0,\ 0.5,\ 1.0):\qquad R_g=(0.44,\ 0.31,\
0.25).
\label{rgth}
\end{equation}
The value extracted from the measurements
is somewhat larger than expected but looking at the
difficulty to extract such numbers experimentally we consider the result
as encouraging.
Similar results for the $f_0(400-1200)$ cannot be extracted from the data in
Ref. \cite{amsams} because of the overlap with nearby other states.\\
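The theoretical values quoted in (\ref{rgth}) follow directly from (\ref{f1300r}); a one-line Python cross-check:

```python
# Cross-check of Eq. (rgth): the glueball ratio of Eq. (f1300r),
# R_g = (2 + S)^2 / (9 * (1 + S)^2), for the quoted strange quark amplitudes.
def R_g(S):
    return (2 + S)**2 / (9 * (1 + S)**2)

values = [round(R_g(S), 2) for S in (0.0, 0.5, 1.0)]

print(values)  # [0.44, 0.31, 0.25], to be compared with the exp. R_g ~ 0.44
```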
\noindent{\it $J/\psi$ decay into glueball + vector mesons}\\
In analogy with the flavor analysis of the $f_0$ states above we
now proceed
with our glueball candidate. In the final state $\phi \pi\pi$ DM2 observes
indeed a broad background under the $f_0(980)$ which extends towards small
masses in the $\pi\pi$ invariant mass
\cite{falvard}. On the other hand the mass spectrum in the
final state $\omega \pi\pi$ looks very different with a peak at low masses
around 500 MeV \cite{Augustin5pi}. Similar results are also seen by Mark-III
\cite{Lockman}.
If the low mass bump in the $\omega \pi\pi$
final state is a real effect and not due to background%
\footnote{In Ref. \cite{Lockman} the important background from
$\phi\eta,\ \eta\to 2\pi+\pi^0$ has been emphasized; it could also appear in
the $\omega$ channel.}
it requires
quite a different dynamics in the two vector meson channels. One possibility
is the suppression of low mass $\pi\pi$ pairs from the decay of an
$s\overline s$ pair because of the heavier $s$-quark mass.
This problem could be avoided by restricting the
comparison to the mass region above 1 GeV.
\subsection{Suppressed production in $\gamma\gamma$ collisions}
If the mixing of the glueball with $q \overline q$
states is small then the same is true for the two photon coupling.
We consider here the processes
\begin{equation}
\text{(a)}\quad \gamma\gamma\to \pi^+\pi^-\qquad
\text{(b)}\quad \gamma\gamma\to \pi^0\pi^0 \label{ggpipi}
\end{equation}
and distinguish two regions for the mass $W\equiv m_{\pi\pi}$.\\
\noindent {\it Low energies $W\lesssim 700$ MeV}\\
The process (a) is dominated by the Born term with pointlike
pion exchange. This
contribution is avoided in process (b) and the remaining cross section
is smaller by one order of magnitude in the same mass range; furthermore, it
is also very small compared to the dominant cross section at the $f_2(1270)$
resonance position. The reaction (b) has been studied by the Crystal Ball
\cite{Cball} and JADE \cite{JADE} Collaborations.
We compare the cross section in
$\gamma\gamma\to \pi^0\pi^0$
and in the isoscalar elastic $\pi\pi$ scattering
near the peak at $W\sim 600$ MeV. Only in the
second reaction should the glueball have a sizable coupling. We normalize
both cross sections to the $f_2(1270)$ meson peak representing a well
established $q\overline q$ state and obtain
\begin{eqnarray}
\displaystyle
R_\gamma & = & \frac{\sigma_{\gamma\gamma}(W_1=600\ \text{MeV})}
{\sigma_{\gamma\gamma}(W_2=1270\ \text{MeV})} \nonumber \\
&\simeq & 0.067 \label{Rgamma}\\
R_\pi & = & \frac{\sigma_{\pi\pi}^S(W_1=600\ \text{MeV})}
{\sigma_{\pi\pi}^D(W_2=1270\ \text{MeV})}\ =
\ \frac{1}{5x_f}\frac{W_2^2}{W_1^2} \nonumber \\
&\simeq & 1.05 \label{Rpi}.
\end{eqnarray}
Here we used for $R_\gamma$ the data from \cite{Cball}
and for the $\pi\pi$ S-wave the cross section at the unitarity limit and
the same for the $f_2$ meson in the D-wave but with elasticity
$x_f=0.85$. The ratios in (\ref{Rgamma}) and (\ref{Rpi}) demonstrate that
the low mass S-wave production in
$\gamma\gamma$ collisions is suppressed by more than an order of
magnitude in comparison to $\pi\pi$ collisions.
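The value of $R_\pi$ in (\ref{Rpi}) is fixed by the spin counting described above. A short numerical sketch, assuming (as the text does) the unitarity limit for the S-wave and elasticity $x_f = 0.85$ for the $f_2(1270)$ D-wave:

```python
# Cross-check of Eq. (Rpi): at the unitarity limit the S-wave cross
# section scales as 1/W^2 (one spin state), while the f2(1270) D-wave
# scales as 5 * x_f / W^2 (five spin states, elasticity x_f).
W1, W2 = 600.0, 1270.0   # MeV
x_f = 0.85

R_pi = (1 / (5 * x_f)) * (W2 / W1)**2

print(R_pi)  # roughly 1.05, to be contrasted with R_gamma = 0.067
```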
The size of the cross sections in both charge states in
(\ref{ggpipi}) can be understood \cite{Cball,mpgamgam}
by including the Born term in (a) only and a
rescattering contribution in both processes. So one can interpret
reaction (b) as a two-step process: first the two photons couple to charged
pions as in (a), then rescattering proceeds by charge exchange $\pi^+\pi^-\to
\pi^0\pi^0$;
in this picture the photons do not
couple ``directly'' to the
``quark or gluon constituents'' of the broad structure at 600 MeV
but only to the initial
charged pointlike pions. This is at the same time a minimal
model for the production of the
bare glueball according to our hypothesis without direct
coupling of the photons to the glueball state.\\
\noindent{\it Mass region around the $f_2(1270)$}\\
One may look again for the presence of an S-wave state.
The measurement of the angular distribution does not in general allow a
unique separation of the $S$-wave from the $D_\lambda$-waves
in helicity states $\lambda=0$ and
$\lambda=2$. It turns out, however, that the data
are best fitted
in the mass region $1.1 \leq W \leq 1.4$ GeV by the $D_2$ wave alone without
any $S$ and $D_0$ wave included \cite{Cball,Markii,JADE}. A restriction
on the spin 0 contribution has been derived at the 90\% confidence limit
in \cite{Cball} as
\begin{equation}
\displaystyle
\frac{\sigma_{\gamma\gamma}(\text{spin 0})}
{\sigma_{\gamma\gamma}(\text{total})}\ <\ 0.19
\quad \text{for} \quad 1.1\ \leq \ W\ \leq \ 1.4\ \text{GeV} \label{spin0lim}
\end{equation}
which is not yet very restrictive. Taking all three experiments
together, a suppression of the S-wave under the $f_2(1270)$ is suggested.
In summary, the production of the broad S-wave enhancement
is suppressed in $\gamma\gamma$
in comparison to $\pi\pi$ collisions, and this is very clearly seen
at the low energies.
We consider this a strong hint in favor of
our hypothesis of the mainly gluonic nature of this phenomenon,
both at low and at high energies.
Clearly, the study of scalar states in $\gamma\gamma$ collisions will be of
crucial importance for the determination of their flavor content
and classification into multiplets.
\subsection{Quark-antiquark and gluonic components in $\pi\pi$ scattering}
In our picture, the elastic $\pi\pi$ scattering amplitude in the energy
region below $\sim 2$ GeV is not saturated by $q\overline q$ resonances
in the $s$- and $t$-channel alone.\footnote{
This would follow with "one-component-duality" between direct channel
resonances and $t$-channel Regge-poles as, for example, realized in the
Veneziano model \cite{veneziano} or, alternatively, in resonance pole
expansions in both channels simultaneously, as in
\cite{iiz}, or, more recently, in \cite{zoubugg}.}
There is a second
component which corresponds to Pomeron exchange in the $t$-channel -- dual to
the so-called ``background'' in the $s$-channel. This dual picture with two
components, suggested by Freund and Harari
\cite{fh}, has been very successful in the interpretation of the
$\pi N$ scattering data.
In case of
the $\pi\pi$-interaction a similar situation was found by Quigg~\cite{quigg}:
whereas the $I_t=1$ $t$-channel exchange amplitude
can be saturated
by $q\overline q$ resonances,
the $I_t=0$ amplitude obtains a contribution of about
equal magnitude from the ``background'' as well. This background
is present already in the low energy region around 1 GeV and is seen
clearly in the S-wave amplitude
corresponding to $I_t=0$ \cite{quigg}; it also governs the exotic
$\pi^+\pi^+$ channel.
The Pomeron
exchange is naturally related to glueball exchange.
Then, we consider a third component, obtained by crossing, with glueball
intermediate states in the s-channel and exotic four quark states
in the t-channel. Indeed, the $\pi\pi$
$I_t=2$ exchange amplitude in \cite{quigg}
shows resonance circles with little background and therefore could
correspond to
a glueball amplitude after appropriate averaging. This third component
with exotic exchange
is expected to drop yet faster with energy than the $q\overline q$ resonance
exchange amplitude.
We consider the phenomenological results on the low energy ``background''
\cite{quigg}
as a further independent hint towards a gluonic component in the low energy
$\pi\pi$ scattering.
\section{Completing the basic triplet of gauge boson binaries}
\label{basic_triplet}
Having found the candidate for $gb\ (0^{++})$ at $\sim 1$ GeV,
we expect, as discussed in Sect. \ref{sec2},
the two remaining members of the basic triplet with $J^{PC}$
quantum numbers $0^{-+}$ and $2^{++}$ to be heavier than $gb\ (0^{++})$ and
to exhibit a much smaller width because of the reduced strength of the
interaction (coupling $\alpha_s$) at higher mass
\begin{equation}
\label{eq:66}
\begin{array}{ll}
g b \ (0^{-+}): \hspace*{0.3cm}
m_{ 2} \ > \ 1 \ \mbox{GeV}
\hspace*{0.3cm} , \hspace*{0.3cm} &
\Gamma_{ 2} \ \ll \ 1 \ \mbox{GeV};
\vspace*{0.3cm} \\
g b \ (2^{++}): \hspace*{0.3cm}
m_{ 3} \ \gsim \ m_{ 2}
\hspace*{0.3cm} , \hspace*{0.3cm}&
\Gamma_{ 3} \ \ll \ 1 \ \mbox{GeV}.
\end{array}
\end{equation}
\noindent
Thus we are looking for two resonances whose widths make them appear
much more similar to their prominent, relatively narrow $q \ \overline{q}$
counterparts. The mass range is tentatively set to $1 \ - \ 2$ GeV.
We search for possible candidates in radiative $J/\psi$
decay, on which we focus next.
To this end we list in Table \ref{psirad}
the most prominent radiative decay modes of
$J/\psi \ \rightarrow \ \gamma \ X$
into a single resonance $X$ without charm content.
\begin{table}[ht]
\[ \begin{array}{l}
\hline
\begin{array}{l@{\hspace*{0.4cm}}l@{\hspace*{0.4cm}}r@{\hspace*{0.4cm}}
c@{\hspace*{0.4cm}}c}
& \mbox{name}\ (X) & B(J/\psi\rightarrow \gamma X)\times 10^{3}
& \mbox{partial }B & \mbox{mode}
\\ \hline
1 & \eta^{'}\ (958) & 4.31 \ \pm \ 0.30 & &
\vspace*{0.1cm} \\
2 & \eta \ (1440) & > \ 3.01 \ \pm \ 0.44 & &
\vspace*{0.1cm} \\
& & & 1.7 \ \pm \ 0.4 & \varrho^{0} \varrho^{0}
\vspace*{0.1cm} \\
& & & 0.91 \ \pm \ 0.18 & K \overline{K} \pi
\vspace*{0.1cm} \\
& & & 0.34 \ \pm \ 0.07 & \eta\pi^+\pi^-
\vspace*{0.1cm} \\
& & & 0.064 \ \pm \ 0.014 & \gamma \varrho^{0}
\vspace*{0.1cm} \\
3 & f_{4} \ (2050) & 2.7 \ \pm \ 1.0 & & \pi \pi
\vspace*{0.1cm} \\
4 & f_{2} \ (1270) & 1.38 \ \pm \ 0.14 & & \pi \pi
\vspace*{0.1cm} \\
5 & f_{J} \ (1710) & 0.85^{+1.2}_{-0.9} & & K \overline{K} \\
\hline
\end{array}
\end{array}
\]
\caption{Radiative decay modes of $J/\psi$ into single
non-$c\overline c$ resonances
with branching ratios $B \gsim \ 10^{-3}$
according to the PDG \protect\cite{PDG}.}
\label{psirad}
\end{table}
\noindent
Among the five resonances
we recognize
the $\eta(1440)$ as a candidate for $gb\ (0^{-+})$ and the $f_J(1710)$, with
spin $J$ either 0 or 2, as a candidate for $gb\ (2^{++})$.
\subsection{The glueball with $J^{PC}=0^{-+}$}
A state with these quantum numbers is expected to decay into 3
pseudoscalars ($ps$).
The first experiments on the radiative decays
$J/\psi \ \rightarrow \ \gamma \ 3 \ ps$ were performed by the
MarkII \cite{Mk2a} and Crystal Ball \cite{crball3} collaborations in
the channels $3 \ ps \ = \ K_{s} K^{\pm} \pi^{\mp}$ and
$3 \ ps \ = \ K^{+} K^{-} \pi^{0}$, respectively.
A spin analysis was performed by Crystal Ball \cite{crball3};
it revealed a major intermediate decay mode
\begin{equation}
\label{eq:79}
\begin{array}{l}
\eta (1440) \ \rightarrow \ a_{0} (980) \pi \ \rightarrow \ K \overline{K} \pi
\end{array}
\end{equation}
\noindent
and $J^{PC} [\eta(1440)] \ = \ 0^{-+}$. While the branching fraction product
$B \ ( \ J/\psi \ \rightarrow \ \gamma \eta \ (1440) \ )
\times B \ ( \eta \ (1440) \ \rightarrow \ K \overline{K} \pi )$
was overestimated in Refs. \cite{Mk2a,crball3},
the spin parity assignment was confirmed by Mark-III
\cite{Mk3eta} in the decay mode
\begin{equation}
\label{eq:80}
\begin{array}{l}
\eta (1440) \ \rightarrow \ a_{0} (980) \pi \ \rightarrow \ \eta \pi^{+} \pi^{-}
\end{array}
\end{equation}
\noindent
and by DM2 \cite{Augusteta} in both channels of Eqs. (\ref{eq:79}) and (\ref{eq:80}).
It is therefore natural to associate this state, with its large radiative
$J/\psi$ decay mode, with the $0^{-+}$ glueball.
On the other hand, in $pp$ and $\pi p$ collisions the central production of
this state is weak in comparison to the leading $q\overline q$ resonances
\cite{omega2} or not resolved at all \cite{Barb1}.
The glueball interpretation has a long
history of debate \cite{Chan,Close}. Doubts have
been brought up, in particular, in view of
the results from lattice QCD calculations referred to in Sect. 2
which suggest a heavier mass above 2 GeV. As we discussed there, we feel
that the more complete calculations should be awaited before such doubts
can be justified.
However, because of the near absence in central production, the glueball
interpretation is at a more speculative level at present.
\subsection{The glueball with $J^{PC}=2^{++}$}
\label{fJ1710}
This state is expected to decay into two pseudoscalars. $f_J(1710)$ has
long been a prime candidate. The problem for the classification of
this state was and still is \cite{PDG} the ambiguity
in the spin assignment $J=0$ or $J=2$.
In the following, we discuss the results of spin analyses
in various
experiments on $J/\psi$ decays and central hadronic collisions
which will lead us to a definite conclusion concerning the existence of a
$J=2$ state.
\subsubsection{Radiative $J/\psi$ decays}
\noindent {\it Crystal Ball experiment}\\
The first observation of this state was obtained by
the Crystal Ball collaboration at the SPEAR storage ring
in '81 \cite{crballeta}
in the decay channel
\begin{equation}
\label{eq:67}
\begin{array}{l}
J/\psi \ \rightarrow \ \gamma \ \eta \eta.
\end{array}
\end{equation}
\noindent
The useful sample contained $\sim 50$ events in
the $\eta \eta$ invariant mass range from 1200 to 2000 MeV.
The resonance parameters were \cite{crballeta}:
\begin{equation}
\label{eq:68}
\begin{array}{l}
m \ = \ 1640 \ \pm \ 50 \ \mbox{MeV}
\hspace*{0.3cm} , \hspace*{0.3cm}
\Gamma \ = \ 220^{\ +100}_{\ -70} \ \mbox{MeV}.
\end{array}
\end{equation}
\noindent
A spin analysis with respect to the two hypotheses $J = 2$ and $J=0$ was
performed, with at least a statistical preference for $J^{PC} \ = \ 2^{++}$.
\vspace*{0.1cm}
The same resonance could not be resolved in a significant way by the
same collaboration in the channel
$J/\psi \ \rightarrow \ \gamma \ \pi^{0} \pi^{0}$ \cite{crballpi}.
The scarcity of events is matched by the scarcity
of detail in the published description of the analysis.
\noindent {\it Mark-III and DM2 experiments}\\
A significant improvement in statistics is next reported by
the Mark-III collaboration \cite{Mk3a} in the channels
\begin{equation}
J/\psi \ \rightarrow \ \gamma
\ \pi^{+} \ \pi^{-}, \quad
\gamma \ K^{+} \ K^{-}. \label{eq:69}
\end{equation}
\noindent
We first discuss the results in the $\pi^{+} \ \pi^{-}$ subchannel.
The two resonances
$f_2(1270)$ and $ f_J(1710)$
are clearly resolved
and a
small indication of $f^{\ '}_{2} (1525)$ is visible in the
projected $\pi^{+} \ \pi^{-}$ invariant mass distribution.
A full exposition of the
relevant angular acceptances and efficiencies is given.
A fit of four interfering resonances is then performed:
\begin{displaymath}
\begin{array}{l}
f_{2} \ (1270) \ , \ f^{'}_{2} \ (1525) \ ,
\ f_{J} \ (1710) \ , \ f \ (2100).
\end{array}
\end{displaymath}
The same reaction was investigated by the DM2 collaboration at the
DCI storage ring in Orsay
\cite{Augustpi}
with rather similar results. The product
of branching ratios in both experiments is given as
\begin{equation}
\label{eq:70}
\begin{array}{l}
\begin{array}{l}
B ( J/\psi \rightarrow \gamma f^{'}_{2} (1525) )\ \times
\ B ( f^{'}_{2} (1525) \rightarrow \pi^{+} \pi^{-} ):
\vspace*{0.3cm}\\
\mbox{Mark-III} :\ \sim \ 3 \times 10^{-5};\qquad
\mbox{DM2} :\ (2.5 \pm 1.0 \pm 0.4 ) \ 10^{-5} \mbox{ }
\end{array}
\end{array}
\end{equation}
\begin{equation}
\label{eq:71}
\begin{array}{l}
B ( J/\psi \rightarrow \gamma f_{J} (1710) )\ \times
\ B ( f_{J} (1710) \rightarrow \pi^{+} \pi^{-} ):
\vspace*{0.3cm}\\
\mbox{Mark-III} :\ ( 1.6 \pm 0.4 \pm 0.3 ) \ 10^{-4};\quad
\mbox{DM2} :\ ( 1.03 \pm 0.16 \pm 0.15 ) \ 10^{-4}
\end{array}
\end{equation}
We remark that both experiments reveal a background of $O ( 20\% )$ in the
$\pi^{+} \pi^{-}$ channel. This we expect to be -- at least in part -- not
incoherent background, but the S-wave part, including the contribution from
$gb \ (0^{++})$ discussed in the last section.
Next we turn to the $K^{+} K^{-}$ channel with results again from Mark-III
\cite{Mk3a} and DM2 \cite{AugustK}. A full spin analysis
is performed by the Mark-III collaboration
for both invariant mass domains corresponding
to $f^{'}_{2} (1525)$ and $f_{J} (1710)$.
The likelihood functions used to distinguish the two hypotheses
$J = 0$ and $J = 2$ strongly favor the $J = 2$ hypothesis for both resonances.
For the spin-0 assignment to $f_{J}(1710)$
the purely statistical probability
is estimated to be only $2 \times 10^{-3}$.
In particular, the non-uniform
polar angle distribution in the resonance decay requires the
higher spin $J=2$.
This confirms the low statistics spin analysis of
Crystal Ball \cite{crballeta}.
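The role of the polar-angle distribution in the spin discrimination can be illustrated with a minimal numerical sketch (our own illustration, not taken from the cited analyses): a $J=0$ resonance decays isotropically in its rest frame, whereas a $J=2$ resonance with helicity 0, the simplest case, follows $|Y_2^0|^2$ and peaks toward $\cos\theta=\pm 1$.

```python
import numpy as np

# Normalized polar-angle densities for X -> K+ K- in the X rest frame.
# J=0: isotropic.  J=2, helicity 0: |Y_2^0|^2 ~ (3 cos^2(theta) - 1)^2.
# (Real analyses fit all helicity amplitudes; this is only the simplest case.)

def w_spin0(cos_theta):
    return np.full_like(np.asarray(cos_theta, dtype=float), 0.5)

def w_spin2_hel0(cos_theta):
    c = np.asarray(cos_theta, dtype=float)
    shape = (3.0 * c**2 - 1.0) ** 2
    # normalize so the integral over cos(theta) in [-1, 1] equals 1:
    # int_{-1}^{1} (3c^2 - 1)^2 dc = 8/5
    return shape / (8.0 / 5.0)

c = np.linspace(-1.0, 1.0, 2001)
print(np.trapz(w_spin0(c), c))       # ~1.0 (unit normalization)
print(np.trapz(w_spin2_hel0(c), c))  # ~1.0
# the J=2 density is strongly peaked toward cos(theta) = +-1
print(w_spin2_hel0(1.0) / w_spin2_hel0(0.0))  # -> 4.0
```

A flat $\cos\theta$ distribution would thus point to $J=0$; the observed strong non-uniformity favors $J=2$.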
No spin analysis is performed in this channel by DM2 in Ref. \cite{AugustK}.
However, one can see from the Dalitz plot that the density of points along
the $f_{J}(1710)$-band is peaked towards the edges, again favoring
the presence of higher
spin. Furthermore,
in the projected $K^{+} K^{-}$ invariant mass distribution
an interference effect between the two resonances is visible,
although it is not mentioned in Ref. \cite{AugustK}. Both phenomena, if
analyzed and confirmed, would yield an independent
indication for the $J = 2$ quantum number of $f_{J} (1710)$.
The branching fraction products corresponding to
Eqs. (\ref{eq:70}) and (\ref{eq:71}) are determined as
\begin{equation}
\label{eq:72}
\begin{array}{l}
B ( J/\psi \rightarrow \gamma f^{'}_{2} \ (1525) )\ \times
\ B ( f^{'}_{2} (1525) \rightarrow K^{+} K^{-} ):
\vspace*{0.3cm}\\
\mbox{Mark-III} : \ ( 3.0 \pm 0.7 \pm 0.6 ) \ 10^{-4}; \quad
\mbox{DM2} : \ ( 2.5 \pm 0.6 \pm 0.4 ) \ 10^{-4}.
\end{array}
\end{equation}
\begin{equation}
\label{eq:73}
\begin{array}{l}
B ( J/\psi \rightarrow \gamma f_{J} (1710) )\ \times
\ B ( f_{J} (1710) \rightarrow K^{+} K^{-} ):
\vspace*{0.3cm}\\
\mbox{Mark-III} : \ ( 4.8 \pm 0.6 \pm 0.9 ) \ 10^{-4}; \quad
\mbox{DM2} : \ ( 4.6 \pm 0.7 \pm 0.7 ) \ 10^{-4}
\end{array}
\end{equation}
From the branching fractions in Eqs. (\ref{eq:70}) - (\ref{eq:73})
we obtain
the following ratio in comparison with the PDG result:
\begin{equation}
\label{eq:74}
\begin{array}{llll}
\mbox{Mark-III and DM2}: \ &
{ \displaystyle
\frac{ B(f^{'}_{2} (1525) \rightarrow \pi\pi) }
{ B(f^{'}_{2} (1525) \rightarrow K\overline K )}
}
\ & = &
\ 0.075 \ \pm \ 0.030
\vspace*{0.3cm} \\
\mbox{PDG}:& \ & = & \ 0.0092 \pm \ 0.0018.
\end{array}
\end{equation}
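Taking the products of branching fractions at face value, the ratio in Eq. (\ref{eq:74}) can be reconstructed numerically. The sketch below is our own: the isospin factors ($3/2$ for $\pi^+\pi^-\to\pi\pi$, $2$ for $K^+K^-\to K\overline K$) and the uncertainty on the Mark-III $\pi^+\pi^-$ product, quoted only as $\sim 3\times 10^{-5}$, are assumptions for illustration.

```python
import math

# Products B(J/psi -> gamma f2'(1525)) x B(f2' -> h+ h-) from Mark-III / DM2
# (central values with combined errors; the Mark-III pi+pi- error below is a
# guess, since the original value was quoted only as ~3e-5).
pipi = {"Mark-III": (3.0e-5, 1.2e-5), "DM2": (2.5e-5, 1.1e-5)}
kk   = {"Mark-III": (3.0e-4, 0.9e-4), "DM2": (2.5e-4, 0.7e-4)}

# Assumed isospin factors: B(pi pi) = 3/2 B(pi+pi-), B(K Kbar) = 2 B(K+K-)
F_PIPI, F_KK = 1.5, 2.0

def ratio(exp):
    num, dnum = pipi[exp]
    den, dden = kk[exp]
    r = (F_PIPI * num) / (F_KK * den)
    dr = r * math.hypot(dnum / num, dden / den)  # uncorrelated error propagation
    return r, dr

for exp in ("Mark-III", "DM2"):
    r, dr = ratio(exp)
    print(f"{exp}: B(pipi)/B(KKbar) = {r:.3f} +- {dr:.3f}")
# both give 0.075, to be compared with the PDG value 0.0092 +- 0.0018
```

With these factors both experiments reproduce the central value $0.075$ quoted in Eq. (\ref{eq:74}), an order of magnitude above the PDG ratio.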
\noindent
The obvious discrepancy between the two numbers may point
towards larger
systematic errors in the relative efficiency of the two channels
in (\ref{eq:69}), and possibly also to errors in the determinations
of $f^{'}_{2} (1525)$ branching fractions in earlier experiments.
However, we tend to believe that the
discrepancy of deduced
branching fractions in Eq. (\ref{eq:74}) is too significant to be
``explained'' by some unknown source of large errors; rather we conclude that
{\em the peaks `` $f^{\ '}_{2} (1525)$ '' as seen in radiative decays
$J/\psi \ \rightarrow \ \gamma \ \pi^{+} \pi^{-} \ , \ K^{+} K^{-}$
are not just $ f^{\ '}_{2} (1525)$.
}
There are further states, in particular in the S-wave, which are not
resolved in the analysis. One candidate is $f_0(1500)$, not yet
established at the time of the Mark-III and DM2 experiments under discussion.
Because of the small branching fraction of
$f_{0} (1500)$ into $K\overline K$ deduced by Amsler \cite{amsams},
the effect is expected to be especially important in the $\pi\pi$ channel.
Furthermore, there could be contributions from the high-mass tail of the
$0^{++}$ glueball or other states in this partial wave.
Such contributions may also affect the spin determinations of
$f_J(1710)$.\\
\noindent
{\it Reanalysis of the $f_{J} (1710)$ spin by Mark-III}\\
The spin analysis in the $K \overline{K}$ channel was subsequently
extended by the Mark-III collaboration with higher statistics and including
the $K_sK_s$ final states
\cite{thesisbo,thesische}. In a mass-independent analysis, both the $J=0$ and
the $J=2$ components have been studied, preliminary results became available
as conference reports \cite{chenrep,chenrep1}.
In these analyses the earlier Mark-III results \cite{Mk3a} are contradicted
in favor of a large $J = 0$ component of $f_{J} (1710)$, although a
contribution of up to 25\% from spin 2 was not excluded.
Looking into these results in more detail, we observe a considerable
qualitative difference between the $K^+K^-$ and the $K_sK_s$ results.
Whereas in the former channel the $J=0$ component dominates over $J=2$ by a
factor 4.5 in the mass range 1600-1800 MeV, the opposite is true for the
neutral kaon mode: in this case, the $J=2$ component dominates by a factor
2.8 over $J=0$. It is interesting to note that the efficiency in the
azimuthal angle $\phi$ is much better in the neutral mode: for $K^+K^-$
pairs the acceptance drops towards its minimum
at $\phi=0,\pi$ to $\sim$15\% of its maximal value, but
for $K_sK_s$ pairs only to 57\%. Therefore, the
results from the neutral mode are very important
despite the somewhat lower statistics.
Breit-Wigner resonance fits to the combined $K\overline K$
data sample are
presented in Fig.2a of \cite{chenrep1}.
In this data compilation, a significant spin 2 component of
$f_J(1710)$ is clearly visible and is comparable in its overall size
with the $f_2'(1525)$ signal. The fitted curve does not describe
the data well near $f_J(1710)$ and
underestimates the observed rates by roughly a factor of two.
In view of the preliminary character of these studies, one might conclude
that both hypotheses $J = 2$ and $J = 0$
should be considered.
\noindent {\it BES experiment}\\
The situation became considerably clarified by the recent results of
the BES collaboration \cite{besK}. At the BEPC storage ring in Beijing,
the decay $J/\psi \ \rightarrow \ \gamma \ K^{+} K^{-}$ was analyzed
with specific determination of all helicity amplitudes for $J = 0,\ 2$.
The region around 1700 MeV for the $K^{+} K^{-}$ invariant mass spectrum
-- beyond $f^{\ '}_{2} (1525)$ --
reveals a dominant resonant structure with spin 2. Furthermore, the ana\-lysis
indeed provides evidence for a $0^{++}$ resonance, although
weaker, less significant, and at a slightly larger mass value.
The parameters of the resonance fit
are given in Table \ref{eq:75a}.
\begin{table}[ht]
\[
\begin{array}{l}
\begin{array}{lc@{\hspace*{0.5cm}}cc}
\hline
J^{\ PC}(X) & \mbox{mass (MeV)} & \mbox{width (MeV)} &
\begin{array}[t]{l}
B \ ( J/\psi \ \rightarrow \ \gamma X \ )
\vspace*{0.1cm} \\
\times B \ ( \ X \ \rightarrow \ K^{+} K^{-} \ ) \times 10^{\ 4}
\end{array}
\\ \hline \vspace*{0.1cm}
2^{++} & 1696 \ \pm \ 5^{\ +9}_{\ -34} & 103 \ \pm \ 18^{\ +30}_{\ -11} &
2.5 \ \pm \ 0.4^{\ +0.9}_{\ -0.4} \vspace*{0.1cm} \\
2^{++} & 1516 \ \pm \ 5^{\ +9}_{\ -15} & 60 \ \pm \ 23^{\ +13}_{\ -20} &
1.6 \ \pm \ 0.2^{\ +0.6}_{\ -0.2} \vspace*{0.1cm} \\
0^{++} & 1781 \ \pm \ 8^{\ +10}_{\ -31} & 85 \ \pm \ 24^{\ +22}_{\ -19} &
0.8 \ \pm \ 0.1^{\ +0.3}_{\ -0.1} \vspace*{0.1cm} \\
\hline
\end{array}
\end{array}
\]
\caption{Resonance parameters from fit to
mass regions near $f^{\ '}_{2}(1525)$ and
$f_J(1710)$ as obtained by the BES collaboration \protect\cite{besK}.}
\label{eq:75a}
\end{table}
The results on $f^{\ '}_{2}(1525)$ are now in good agreement with the PDG
results.
In comparison
with the earlier results in (\ref{eq:73}),
both the smaller branching ratio, now attributed to the spin $J=2$
component of $f_J(1710)$ alone, and the reduced statistical errors
are to be noted.
In comparison with the preliminary Mark-III results \cite{chenrep1},
we note the good agreement with their branching ratio into $f_2(1525)$
of (1.7 $\pm\ 0.3)\times 10^{-4}$ (for $K^+K^-$ mode
as defined in our Table \ref{eq:75a}). The corresponding
fraction for $f_J(1710)$ with $J=2$ reads (1.0 $\pm \ 0.4)\times 10^{-4}$
\cite{chenrep1} -- which we would increase by a factor 2
to $\sim 2.0 \times 10^{-4}$ as explained above --
to be compared with 2.5$\times 10^{-4}$ in our Table \ref{eq:75a}.
So there are no gross differences in the identification of the
$J=2$ objects in these experiments.
\subsubsection{Hadronic decays
$J/\psi \ \rightarrow \ \omega X \ ; \ \varphi X $}
An interesting new chapter in the study of
hadronic $J/\psi$ decays has been opened by the Mark-III collaboration in the channels
\begin{equation}
\label{eq:76}
\begin{array}{l}
J/\psi \ \rightarrow \ \gamma \ K \overline{K}
\hspace*{0.3cm} ; \hspace*{0.3cm}
\omega \ K \overline{K}
\hspace*{0.3cm} ; \hspace*{0.3cm}
\varphi \ K \overline{K}
\end{array}
\end{equation}
\noindent
discussed by L. K\"{o}pke \cite{kop}. The $K\overline K$
invariant mass distributions in the charge state $K^+K^-$ in the three
channels (\ref{eq:76})
are compared in Fig. 2 of Ref. \cite{kop}.
In the $\omega \ K^{+} K^{-}$ channel the (mainly) $f^{\ '}_{2} (1525)$
signal -- clearly visible in the other two decay modes in Eq. (\ref{eq:76})
-- is absent.
The most interesting channel is $\varphi \ K^{+} K^{-}$, where $f_{J} (1710)$
is visible only as a broadening shoulder of the dominant
$f^{\ '}_{2}$ resonance. K\"{o}pke presents two fits to the
acceptance/efficiency corrected invariant mass distributions, one
admitting interference between $f^{\ '}_{2} (1525)$ and $f_{J} (1710)$
and one with incoherent addition of the two resonances.
He shows that only the coherent superposition allows one to assign
to $f_{J} (1710)$ a mass and width compatible with the parameters
determined in other channels.
For angle-integrated mass spectra
the crucial consequence of coherent superposition
is that the two resonances
must have the same spin. The quantitative distinction between
the two fits is, however, not disclosed in Ref. \cite{kop}.
Precisely this question is taken up by the DM2 collaboration \cite{falvard}.
Falvard et al. perform three fits, two with
coherent superposition
and one with incoherent superposition.
The respective $\chi^{2}$ per degree of freedom clearly
favor the two fits with coherence.
We take these results together
as further indication of a large spin $J = 2$ component in
$f_{J} (1710)$.
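The difference between coherent and incoherent superposition of the two resonances can be made concrete with a toy model (our own sketch, not the experimental fit): two non-relativistic Breit--Wigner amplitudes with an assumed relative phase, added either as amplitudes or as intensities.

```python
import numpy as np

def bw(m, m0, gamma):
    """Non-relativistic Breit-Wigner amplitude (toy model)."""
    return (gamma / 2.0) / (m0 - m - 1j * gamma / 2.0)

m = np.linspace(1.40, 1.95, 551)                    # GeV
a1 = bw(m, 1.525, 0.076)                            # f2'(1525)
a2 = bw(m, 1.710, 0.133) * np.exp(1j * np.pi / 2)   # fJ(1710), assumed phase

incoherent = np.abs(a1) ** 2 + np.abs(a2) ** 2
coherent   = np.abs(a1 + a2) ** 2
interference = coherent - incoherent                # 2 Re(a1* a2)

# The interference term distorts the apparent peak positions and widths,
# which is why only the coherent fit returns fJ(1710) parameters consistent
# with other channels; interference in the angle-integrated spectrum in turn
# requires the two resonances to carry the same spin.
print(np.max(np.abs(interference)))
```

Fitting the toy coherent spectrum with an incoherent model would pull the extracted mass and width away from the input values, mirroring the behavior reported by K\"{o}pke and by Falvard et al.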
\subsubsection{Hadronic collisions}
\noindent{\it Central production}\\
If $f_{J} (1710)$ is a glueball it should
also be produced centrally in hadronic collisions. Indeed,
the WA76 collaboration working with the Omega spectrometer \cite{Omega}
has observed a clear signal in the $K^+K^-$ and $K_sK_s$ mass spectra
in
\begin{equation}
pp \rightarrow p_{fast}(K\overline K) p_{slow}
\label{ppcent}
\end{equation}
at 85 and 300 GeV.
As in radiative $J/\psi$ decays, two peaks,
from $f_2'(1525)$ and $f_J(1710)$, appear above a smooth
background. The
polar angle decay distribution in both resonance regions
is rather similar and largely non-uniform. It is concluded that the spin of
$f_J(1710)$ is $J=2$ and the assignments $J^P=1^-$ and $J^P=0^+$
are excluded.
Very recently, new results on reaction (\ref{ppcent}) at 800 GeV have been
presented by the E690 collaboration at Fermilab \cite{Reyes}. In the region
of interest, the mass spectrum again shows two peaks. Surprisingly, the
first peak is now dominated by $f_0(1500)$, besides a smaller contribution
presumably from
$f_2'(1525)$. Given the small branching ratio of $f_0(1500)$ into $K\overline K$
(see Table \ref{flavor}), the process (\ref{ppcent}) could serve as a veritable
$f_0(1500)$ factory if this finding is confirmed.
In the region of $f_J(1710)$, there are two solutions with large and
small spin $J=2$ component, respectively. No attempt has been made to
find the most appropriate decomposition into Breit-Wigner resonances
consistent with other knowledge. For the moment, the most accurate data leave
us with a large uncertainty.
\noindent{\it Peripheral production}\\
Finally, we quote the work by Etkin et al. \cite{bnl} on
$\pi^- p \rightarrow K_sK_s n$ collisions, which we already discussed in
Sect. \ref{sectnonet} in connection with the $f_0(1500)$ state. In the
higher mass region, the same experiment gave evidence of another scalar state
at 1771 MeV and $\Gamma \ \sim$ 200 MeV which is produced through the one
pion exchange mechanism. It is natural to identify this state with the one
observed in the $f_J(1710)$ region. The higher mass agrees well with the one
observed by BES (see Table \ref{eq:75a}).
\subsubsection{Summary on spin assignments to $f_J(1710)$}
We summarize the experimental indications for both
$J = 2$ and $J=0$ in Table \ref{sumspin}.
\begin{table}[ht]
\[ \begin{array}{l}
\begin{array}{l@{\hspace*{0.5cm}}cc ll}
\hline \vspace*{0.1cm}
\mbox{Collaboration} & J \ = \ 2 & J \ = \ 0 & \mbox{channel}
& \mbox{method}
\\ \hline
(1)\ J/\psi\ \mbox{decays}:&&&& \vspace*{0.1cm} \\
\mbox{Crystal Ball} \ \cite{crballeta}
& \mbox{yes} & \mbox{no} & \gamma \ \eta \eta &
\mbox{spin analysis (sb)}
\vspace*{0.1cm} \\
\mbox{Mark-III} \ \cite{Mk3a}
& \mbox{yes} & \mbox{no} & \gamma \ K^{+} K^{-} &
\mbox{spin analysis (sb)}
\vspace*{0.1cm} \\
\mbox{Mark-III prel.} \ \cite{chenrep1}
& \sim 25\% -40\% & \mbox{yes} & \gamma \ K \overline{K} &
\mbox{spin analysis (mi)}
\vspace*{0.1cm} \\
\mbox{BES} \ \cite{besK}
& 75\% & 25\% & \gamma \ K^{+} K^{-} &
\mbox{spin analysis (mi)}
\vspace*{0.1cm} \\
\mbox{Mark-III} \ \cite{kop} \ , \ \mbox{DM2} \ \cite{falvard} &
\mbox{yes} & \mbox{no} & \varphi \ K^{+} K^{-} &
\mbox{interference} \vspace*{0.1cm}\\
(2)\ \mbox{central production}:&&&& \vspace*{0.1cm}\\
\mbox{WA76} \ \cite{Omega}
& \mbox{yes} & \mbox{no} & p \ K \overline K \ p &
\mbox{spin analysis (sb)} \vspace*{0.1cm} \\
\mbox{E690} \ \cite{Reyes}
& \mbox{yes} & \mbox{yes} & p \ K_s K_s\ p &
\mbox{spin analysis (mi)} \vspace*{0.1cm} \\
(3)\ \mbox{peripheral production}:&&&& \vspace*{0.1cm}\\
\mbox{BNL} \ \cite{bnl}
& \mbox{no} & \mbox{yes} & n \ K_s K_s &
\mbox{spin analysis (mi)} \vspace*{0.1cm} \\
\hline
\end{array}
\end{array}\]
\caption{Summary of spin assignments to the $f_J(1710)$
in the various ana\-lyses for three reaction types. The spin determination
is carried out in a single mass bin (sb) or mass-independent analysis (mi).}
\label{sumspin}
\end{table}
\noindent
The experiments in group (1) and (2),
analyzing a single mass interval around 1700 MeV, all prefer
$J=2$ clearly over $J=0$. The more refined experiments with higher statistics
performing a mass-independent ana\-lysis find a spin zero component
in addition.
As a $J=0$ state is found in peripheral collisions (3) in this mass range,
it is most natural to associate it with a scalar quarkonium state
$f_0(1770)$, slightly higher in mass than $f_J(1710)$.
On the other hand, the prominent
peak in the $J=2$ wave only appears in the gluon-rich reactions (1) and (2),
and is therefore
our primary
glueball candidate
\begin{equation}
\label{eq:75c}
\begin{array}{l}
J=2:\qquad f_{J} \ (1710) \ \rightarrow \ g b \ ( \ 2^{++} \ )
\end{array}
\end{equation}
which completes the basic triplet of binary glueballs.
\section{Conclusions}
In this paper we have reanalysed the spectroscopic evidence for
various hadronic states,
with the aim of finding the members of the
lowest-mass $q\overline q$ nonet with $J^{PC}=0^{++}$
and of identifying the triplet of lightest binary glueballs. We draw the
following conclusions from our study:
\begin{description}
\item {\it 1. The $0^{++}$ $q\overline q$ nonet}\\
As members of this multiplet we identify the
isoscalar states $f_0(980)$ and $f_0(1500)$ together with $a_0(980)$ and
$K_0^*(1430)$. The mixing between the isoscalars is about the same as in
the pseudoscalar nonet, i.e. little mixing between singlet and
octet states, with the correspondence and approximate flavor decomposition
$(u\overline u,d\overline d,s\overline s)$
\[
\begin{array}{lll}
\eta \ \leftrightarrow\ f_0(1500)&\quad \frac{1}{\sqrt{3}}\ (1,\ 1,\ -1)
&\quad\mbox{close to octet},\\
\eta'\ \leftrightarrow\ f_0(980)&\quad \frac{1}{\sqrt{6}}\ (1,\ 1,\ 2)
&\quad \mbox{close to singlet},
\end{array}
\]
whereby the $(\eta',\ f_0(980))$ pair forms a parity doublet
approximately degenerate in mass.
The support for this assignment comes
from the Gell-Mann-Okubo mass formula
(after rejecting the $K\overline K$ bound state interpretation of
$a_0(980)$),
the $J/\psi\to \phi/\omega + f_0(980)$
decays, the branching ratios for decays of the scalars
into pairs of pseudoscalars
as well as the amplitude signs we obtained.
The most important information comes from phase shift analyses of
elastic and inelastic $\pi\pi$ scattering as well as from the recent
analyses of $p\overline p$ annihilation near threshold.
\item {\it 2. The $0^{++}$ glueball of lowest mass}\\
The broad object which extends in mass from 400 MeV up to about 1700 MeV
is taken as the lightest $0^{++}$ glueball. In this energy range, the
$\pi\pi$ amplitude describes a full loop in the Argand diagram after the
$f_0(980)$ and $f_0(1500)$ states are subtracted. In particular, we do not
consider the occasionally suggested $\sigma(700)$ and the $f_0(1370)$, listed
by the Particle Data Group as genuine resonances, since
the related phase movements are too small.
This hypothesis is further supported by the occurrence of this state
in most reactions which provide the gluon-rich environment favorable for
glueball production, also by the decay properties in the 1300 MeV region and
especially by the strong suppression in $\gamma\gamma$ collisions.
An exception is perhaps the decay $J$/$\psi\to
\pi\pi\gamma$, but no complete amplitude ana\-lysis is available yet in this
case.
\item {\it 3. The $0^{-+}$ and $2^{++}$ glueballs}.\\
The triplet of binary glueballs is completed by the state $f_J(1710)$,
for which by now overwhelming evidence exists in favor of
a dominant spin 2 component,
and the $0^{-+}$ state $\eta(1440)$. They appear with large branching
ratios in the radiative decays of the $J$/$\psi$, in
agreement with the expectations for a glueball.
Central production in $pp$ collisions is observed for $f_2(1710)$,
but less significantly for $\eta(1440)$, so this
assignment is at a more tentative level.
\end{description}
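The Gell-Mann--Okubo support invoked in conclusion 1 can be made explicit with a quick numerical check (our own sketch; the slotting of $a_0(980)$, $K_0^*(1430)$ and the near-octet isoscalar $f_0(1500)$ into the octet mass relation follows our reading of the assignments above):

```python
# Gell-Mann-Okubo check for the proposed 0++ nonet (masses in GeV).
# Octet relation 4 m_K^2 = m_I=1^2 + 3 m_I=0,octet^2, applied here with
# a0(980) in the isotriplet slot, K0*(1430) in the strange slot and the
# near-octet f0(1500) in the isoscalar slot.  The relation is only
# approximate, as it already is for the pseudoscalar nonet.
m_a0, m_K0star, m_f0 = 0.980, 1.430, 1.500

lhs = 4.0 * m_K0star**2
rhs = m_a0**2 + 3.0 * m_f0**2
print(lhs, rhs)              # ~8.18 vs ~7.71 GeV^2
print(abs(lhs - rhs) / lhs)  # ~6%, comparable to the pseudoscalar case
```

The relation is satisfied at the few-percent level in mass squared, comparable to its accuracy for the pseudoscalars, consistent with the assignment made in the text.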
\begin{table}[ht]
\[
\begin{array}{c@{\hspace*{0.6cm}}c@{\hspace*{0.4cm}}
l@{\hspace*{0.4cm}}c@{\hspace*{0.4cm}}c}\\ \hline
\mbox{name} & \mbox{PDG} & \mbox{mass (MeV)}& \mbox{mass}^2 \mbox{(GeV)}^2
& \mbox{width (MeV)}
\\ \hline
gb \ ( \ 0^{++} \ ) & f_0(400-1200)& \sim 1000 & \sim 1. &
\hspace*{0.2cm} 500-1000
\vspace*{0.1cm} \\
& f_0(1300) &&&
\vspace*{0.1cm}\\
g b \ ( \ 0^{-+} \ ) & \eta (1440)& 1400\ -\ 1470 & 2.07 &
\hspace*{0.2cm} 50\ -\ 80
\vspace*{0.1cm} \\
gb \ ( \ 2^{++} \ ) & f_{J} (1710)& 1712 \ \pm \ 5 & 2.93
& 133 \ \pm \ 14 \\
\hline
\end{array}
\]
\caption{Properties of the basic triplet of binary glueballs $gb$.}
\label{tabsum}
\end{table}
\noindent
The properties of the basic triplet of binary glueballs are
summarized in Table \ref{tabsum}.
Interestingly, the mass squares of these states are separated by about 1
GeV$^2$, as in the case of the $q\overline q$ Regge recurrences.
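This spacing can be read off directly from the table (our own arithmetic; masses as in Table \ref{tabsum}, with the $0^{-+}$ mass taken at the middle of its quoted range):

```python
# Masses of the proposed binary-glueball triplet (GeV), from Table [tabsum];
# the 0-+ entry is taken mid-range at ~1440 MeV.
masses = {"0++": 1.0, "0-+": 1.44, "2++": 1.712}

m2 = {jpc: m * m for jpc, m in masses.items()}
print(m2)    # ~ {'0++': 1.0, '0-+': 2.07, '2++': 2.93} GeV^2

gaps = [m2["0-+"] - m2["0++"], m2["2++"] - m2["0-+"]]
print(gaps)  # both ~1 GeV^2, reminiscent of Regge spacing of q-qbar states
```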
Whereas this overall picture of the low-mass $q\overline q$ and $gg$ states
seems to accommodate the relevant experimental facts, there is certainly a
need for further improvements of the experimental evidence,
for which we give a few examples:
\begin{description}
\item {\it 1. Elastic and inelastic $\pi\pi$ scattering}\\
The status of elastic $\pi\pi$ scattering above 1.2 GeV is still not
satisfactory. The phase shift analysis of available $\pi^0\pi^0$ data
could be of valuable help in establishing the parameters of the $f_0(1500)$ in
this channel and in determining the behaviour of the ``background'' amplitude;
the same applies to the $\eta\eta$ channel. It will be interesting to
obtain a decomposition of the ``background'' from $f_0(980)$ and to find the
relative signs of the components.
\item {\it 2. Branching ratios of scalar mesons}\\
Of particular interest are the tests of predictions on the decays
$J/\psi\to \phi/\omega + f_0(1500)$ to further establish the quark
composition of this state. The same applies to the $2\gamma$ widths
of both isoscalar states.
\item{\it 3. Production and decay of the lightest glueball}\\
The radiative decays of the $J/\psi$ into $ \pi\pi$ and other pseudoscalars
are naturally expected to show a signal
from the lightest glueball. So far, the experimental results have been
plagued by background problems and the dominance of
higher spin states like $f_2(1270)$; a spin ana\-lysis is required to get
more clarity. The decays of this object into other pairs of pseudoscalars
above 1 GeV are also of interest.
\item{\it 4. Glueballs in $\gamma\gamma$ collisions}\\
If the mixing with $q\overline q$ is small, the production of the
glueballs should be suppressed. For the lightest glueball this is observed
in the mass region below 1 GeV. It is of crucial importance to demonstrate
this suppression in the region above 1 GeV in the $0^{++}$ wave.
Here, only
$f_0(980)$ and the $f_0(1500)$ should remain as dominant features.
\end{description}
Our hypotheses on the spectroscopy of low-lying glueballs and $q\overline q$
states are not in contradiction with theoretical expectations.
The masses in Table \ref{tabsum} are in good agreement with the expectations
from the bag model. Also the QCD sum rules suggest a strong gluonic coupling
of $0^{++}$ states around 1 GeV.
It will be interesting
to see whether the more complete lattice calculations now on their way
yield a ``light''
gluonic $0^{++}$ state around 1 GeV
as well as
``light'' scalar $q\overline q$ mesons. It is expected that a light glueball
is much broader than its heavier brothers, and this is consistent
with our scheme in
Table \ref{tabsum}.
We found the
most general effective potential for the scalar nonet sigma variables
to be compatible with the $a_0 - f_0$ mass degeneracy,
independently of the strange
quark mass $m_s$. The mass splitting $O(m_s)$ shows a continuum of breaking
patterns not necessarily along the OZI rule, as often assumed from the
beginning. It remains an open question in this approach,
though, what the physical origin
of the $a_0 - f_0$ mass degeneracy is; the same holds for the mirror
symmetry of the mixing patterns
in the scalar and pseudoscalar nonets.
A possible explanation for the latter
structure is suggested by a renormalizable model with an instanton induced
$U_A(1)$-breaking interaction.
\vfill
\begin{center}
\resizebox*{4cm}{4cm}{\includegraphics{dragon.ps}}
\end{center}
\vfill
\clearpage
\newpage
\subsection*{\centering Abstract}
{\em
This paper is an exposition of the relationship between Witten's functional integral and Vassiliev invariants.
}
\section{Introduction}
\noindent
This paper shows how the Kontsevich integrals, giving Vassiliev
invariants in knot theory, arise naturally in the perturbative expansion of Witten's functional integral. The paper is a sequel to \cite{WittKont}. Since the writing of \cite{WittKont} I became aware of the work of Labastida and P\'{e}rez \cite{LP} on this same subject. Their work comes to an identical conclusion, interpreting the Kontsevich integrals in terms of the light-cone gauge and thereby extending the original work of Fr\"ohlich and King \cite{Frohlich and King}. The purpose of this paper is to give an exposition of these relationships and to introduce diagrammatic techniques that illuminate the connections. In particular, we use a diagrammatic operator method that is useful both for Vassiliev invariants and for relations of this subject with the quantum gravity formalism of Ashtekar, Smolin and Rovelli \cite{ASR}. An aspect that this paper does not treat is the perturbation expansion via three-space integrals leading to Vassiliev invariants as in \cite{Altschuler-Friedel}. See also \cite{Bott-Taubes}. Nor do we deal with the combinatorial reformulation of Vassiliev invariants that proceeds from the Kontsevich integrals as in \cite{Cart}.
\vspace{3mm}
The paper is divided into three sections. Section 2 discusses Vassiliev invariants and
invariants of rigid vertex graphs. Section 3, on the functional integral, introduces
the basic formalism and shows how the functional integral is related directly to Vassiliev
invariants. In this section we also show how our formalism works for the loop transform of Ashtekar, Smolin and Rovelli.
Finally section 4 shows how the Kontsevich integral arises in the perturbative
expansion of Witten's integral in the axial gauge. One feature of section 4 is a new and
simplified calculation of the necessary correlation functions by using the complex numbers and the two-dimensional Laplacian. We show how the Kontsevich integrals are the Feynman integrals for this theory.
\vspace{3mm}
\noindent
{\bf Acknowledgement.} It gives the author pleasure to thank Louis Licht, Chris King
and Jurg Fr\"ohlich for helpful conversations and to thank the National Science
Foundation for support of this research under NSF Grant DMS-9205277 and the NSA for
partial support under grant number MSPF-96G-179.
\vspace{3mm}
\section{Vassiliev Invariants and Invariants of Rigid Vertex Graphs}
If $V(K)$ is a (Laurent polynomial valued, or more generally - commutative ring valued)
invariant of knots, then it can be naturally extended to an invariant of rigid vertex graphs
\cite{Kauffman-Graph} by defining the invariant of graphs in terms of the knot invariant
via an unfolding of the vertex. That is, we can regard the vertex as a ``black box'' and
replace it by any tangle of our choice. Rigid vertex motions of the graph preserve the
contents of the black box, and hence implicate ambient isotopies of the link obtained by
replacing the black box by its contents. Invariants of knots and links that
are evaluated on these replacements are then automatically rigid vertex invariants of the
corresponding graphs. If we set up a collection of multiple replacements at the vertices
with standard conventions for the insertions of the tangles, then a summation over all
possible replacements can lead to a graph invariant with new coefficients corresponding to
the different replacements. In this way each invariant of knots and links implicates a large
collection of graph invariants. See \cite{Kauffman-Graph}, \cite{Kauffman-Vogel}.
\vspace{3mm}
The simplest tangle replacements for a 4-valent vertex are the two crossings, positive and
negative, and the oriented smoothing. Let $V(K)$ be any invariant of knots and links.
Extend V to the category of rigid vertex embeddings of 4-valent graphs by the formula
$$V(K_{*}) = aV(K_{+}) + bV(K_{-}) + cV(K_{0})$$
where $K_{+}$ denotes a knot diagram $K$ with a specific choice of positive crossing,
$K_{-}$ denotes a diagram identical to the first with the positive crossing replaced by a
negative crossing and $K_{*}$ denotes a diagram identical to the first with the positive
crossing replaced by a graphical node.
\vspace{3mm}
This formula means that we define $V(G)$ for an embedded 4-valent graph $G$ by
taking the sum
$$V(G) = \sum_{S} a^{i_{+}(S)}b^{i_{-}(S)}c^{i_{0}(S)}V(S)$$
\noindent
with the summation over all knots and links $S$ obtained from $G$
by replacing a node of $G$ with either a crossing of positive or negative type, or with a
smoothing of the crossing that replaces it by a planar embedding of non-touching segments
(denoted $0$). It is not hard to see that if $V(K)$ is an ambient isotopy invariant of
knots, then, this extension is an rigid vertex isotopy invariant of graphs. In rigid vertex
isotopy the cyclic order at the vertex is preserved, so that the vertex behaves like a rigid
disk with
flexible strings attached to it at specific points.
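The summation formula for $V(G)$ can be sketched as a small enumeration over node resolutions. This is our own scaffold: the invariant $V$ itself is left as a caller-supplied black box, so no particular knot invariant is implemented.

```python
from itertools import product

def extend(V, num_nodes, a=1, b=-1, c=0):
    """Extend a link invariant V to a rigid-vertex graph with num_nodes
    4-valent nodes by summing over all node resolutions (+, -, 0),
    weighted by a, b, c respectively.

    V takes a tuple of resolutions and returns the invariant of the
    resulting link; here it is a caller-supplied placeholder."""
    coeff = {"+": a, "-": b, "0": c}
    total = 0
    for s in product("+-0", repeat=num_nodes):
        w = 1
        for r in s:
            w *= coeff[r]
        total += w * V(s)
    return total

# With the Vassiliev weights a=1, b=-1, c=0, an invariant that cannot tell
# the two crossings apart (e.g. a constant) vanishes on any graph with a
# node, since the sum telescopes via (+1) + (-1) + 0 = 0 at each node:
assert extend(lambda s: 7, num_nodes=3) == 0
# whereas a genuinely crossing-sensitive V need not vanish:
assert extend(lambda s: s.count("+"), num_nodes=1) == 1
```

The vanishing of the first sum is exactly the content of the exchange identity: a Vassiliev invariant on graphs measures only the crossing-sensitive part of the underlying knot invariant.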
\vspace{3mm}
There is a rich class of graph invariants that can be studied in this manner. The Vassiliev
Invariants
\cite{Vassiliev},\cite{Birman and Lin},\cite{Bar-Natan}
constitute the important special case of these graph invariants where $a=+1$, $b=-1$ and
$c=0.$ Thus $V(G)$ is a Vassiliev invariant if
$$V(K_{*}) = V(K_{+}) - V(K_{-}).$$
\noindent
Call this formula the {\em exchange identity} for the Vassiliev invariant $V.$ See Figure 1.
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{140mm}
\special{psfile=F1.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 1 --- Exchange Identity for Vassiliev Invariants}
\end{center}
\end{figure}
\vspace{3mm}
$V$ is said to be of {\em finite type} $k$ if $V(G) = 0$ whenever $|G| >k$ where
$|G|$ denotes the number of (4-valent) nodes in the graph $G.$ The notion of finite type is
of extraordinary significance in studying these invariants. One reason for this is the
following basic Lemma.
\vspace{3mm}
\noindent {\bf Lemma.} If a graph $G$ has exactly $k$ nodes, then the value of a
Vassiliev invariant $v_{k}$ of type $k$ on $G$, $v_{k}(G)$, is independent of the
embedding of $G$.
\vspace{3mm}
\noindent {\bf Proof.} The different embeddings of $G$ can be represented by link
diagrams with some of the 4-valent vertices in the diagram corresponding to the nodes of
$G$. It suffices to show that the value of $v_{k}(G)$ is unchanged under switching of
a crossing. However, the exchange identity for $v_{k}$ shows that this difference is
equal to the evaluation of $v_{k}$ on a graph with $k+1$ nodes and hence is equal to
zero. This completes the proof.//
\vspace{3mm}
The upshot of this Lemma is that Vassiliev invariants of type $k$ are intimately involved
with certain abstract evaluations of graphs with $k$ nodes. In fact, there are restrictions
(the four-term relations) on these evaluations demanded by the topology and it follows
from results of Kontsevich \cite{Bar-Natan} that such abstract evaluations actually
determine the invariants. The knot invariants derived from classical Lie algebras are all
built from Vassiliev invariants of finite type. All this is directly related to Witten's
functional integral \cite{Witten}.
\vspace{3mm}
In the next few figures we illustrate some of these main points.
In Figure 2 we show how one associates a so-called chord diagram to represent the abstract graph associated with an embedded graph. The chord diagram is a circle with arcs connecting those points on the circle that are welded to form the corresponding graph. In Figure 3 we illustrate how the four-term relation is a consequence of topological invariance. In Figure 4 we show how the four term relation is a consequence of the abstract pattern of the commutator identity for a matrix Lie algebra. This shows that the four term relation is directly related to a categorical generalisation of Lie algebras. Figure 5 illustrates how the weights are assigned to the chord diagrams in the Lie algebra case - by inserting Lie algebra matrices into the circle and taking a trace of a sum of matrix products.
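The weight-assignment procedure of Figure 5 can be sketched concretely for the fundamental representation of $su(2)$ (our own toy example): insert a generator $T_a$ at both endpoints of each chord, sum over the labels $a$, and trace the ordered product of the inserted matrices around the circle.

```python
import numpy as np

# su(2) generators in the fundamental representation: T_a = sigma_a / 2
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
T = [s / 2 for s in sigma]

def weight(chords, n_points):
    """Weight of a chord diagram: insert T_a at both endpoints of each chord,
    sum over the labels a, and trace the ordered product around the circle.

    chords is a list of (i, j) endpoint pairs with 0 <= i, j < n_points,
    and the endpoints are assumed to cover every marked point exactly once."""
    total = 0.0
    for labels in np.ndindex(*([len(T)] * len(chords))):
        gen_at = {}
        for (i, j), a in zip(chords, labels):
            gen_at[i] = gen_at[j] = T[a]
        prod = np.eye(2, dtype=complex)
        for p in range(n_points):
            prod = prod @ gen_at[p]
        total += np.trace(prod).real
    return total

# single chord: sum_a tr(T_a T_a) = 3 * (1/2) = 3/2
print(weight([(0, 1)], n_points=2))          # -> 1.5
# two crossed chords: sum_{a,b} tr(T_a T_b T_a T_b) = -3/8
print(weight([(0, 2), (1, 3)], n_points=4))  # -> -0.375
```

The sign difference between the nested and crossed two-chord configurations is the algebraic shadow of the commutator in Figure 4, which is precisely what the four-term relation constrains.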
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{80mm}
\special{psfile=F2.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 2 --- Chord Diagrams}
\end{center}
\end{figure}
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{160mm}
\special{psfile=F3.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 3 --- The Four Term Relation from Topology}
\end{center}
\end{figure}
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{160mm}
\special{psfile=F4.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 4 --- The Four Term Relation from Categorical Lie Algebra}
\end{center}
\end{figure}
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{60mm}
\special{psfile=F5.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 5 --- Calculating Lie Algebra Weights}
\end{center}
\end{figure}
\vspace{3mm}
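The prescription of Figure 5 can be made concrete in a short script: place the (summed) Lie algebra generator $T_{a}$ at both endpoints of each chord, multiply the insertions in order around the circle, and take the trace. The choice below of the fundamental representation of $su(2)$, with $T_{a} = \sigma_{a}/2$, is ours for illustration; any representation satisfying the usual trace normalization would do.

```python
import itertools

# 2x2 complex matrix helpers (matrices as tuples of row tuples)
def mmul(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def trace(A):
    return A[0][0] + A[1][1]

I2 = ((1, 0), (0, 1))
# su(2) generators T_a = sigma_a/2, normalized so that tr(T_a T_b) = delta_ab/2
T = [((0, 0.5), (0.5, 0)),            # sigma_1/2
     ((0, -0.5j), (0.5j, 0)),         # sigma_2/2
     ((0.5, 0), (0, -0.5))]           # sigma_3/2

def weight(chords, n_points):
    """Sum over Lie algebra labels of tr(product of insertions around the circle).

    `chords` pairs up the marked points 0..n_points-1 on the circle; each chord
    carries one summed index a, and T_a is inserted at both of its endpoints."""
    total = 0
    for labels in itertools.product(range(3), repeat=len(chords)):
        insertion = {}
        for (p, q), a in zip(chords, labels):
            insertion[p] = T[a]
            insertion[q] = T[a]
        M = I2
        for point in range(n_points):
            M = mmul(M, insertion[point])
        total += trace(M)
    return total

print(weight([(0, 1)], 2))          # single chord: sum_a tr(T_a T_a) = 3/2
print(weight([(0, 1), (2, 3)], 4))  # nested chords: 9/8
print(weight([(0, 2), (1, 3)], 4))  # crossed chords: -3/8
```

Note that the nested and crossed diagrams already receive different weights; it is exactly the differences of such evaluations that the four-term relation constrains.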
\section{Vassiliev Invariants and Witten's Functional Integral}
In \cite{Witten} Edward Witten proposed a formulation of a class of 3-manifold invariants
as generalized Feynman integrals taking the form $Z(M)$ where
$$Z(M) = \int DAe^{(ik/4\pi)S(M,A)}.$$
\noindent
Here $M$ denotes a 3-manifold without boundary and $A$ is a gauge field (also called a
gauge potential or gauge connection) defined on $M$. The gauge field is a one-form on a
trivial $G$-bundle over $M$ with values in a representation of the Lie algebra of $G.$
The group $G$ corresponding to this Lie algebra is said to be the gauge group. In this
integral the action $S(M,A)$ is taken to be the integral over $M$ of the trace of the
Chern-Simons three-form $A \wedge dA + (2/3)A \wedge A \wedge A$. (The product is
the wedge product of differential forms.)
\vspace{3mm}
$Z(M)$ integrates over all gauge fields modulo gauge equivalence. (See \cite{Atiyah:YM}
for a discussion of the
definition and meaning of gauge equivalence.)
\vspace{3mm}
The formalism and internal logic of Witten's integral supports the existence of a large
class of topological invariants of 3-manifolds and associated invariants of knots and links
in these manifolds.
\vspace{3mm}
The invariants associated with this integral have been given rigorous
combinatorial descriptions \cite{RT}, \cite{Turaev-Wenzl}, \cite{Kirby-Melvin}, \cite{Lickorish}, \cite{Walker}, \cite{TL},
but questions and conjectures arising from the integral formulation are still outstanding.
(See for example \cite{Atiyah}, \cite{Garoufalidis}, \cite{Gompf&Freed},
\cite{Jeffrey}, \cite{Rozansky}, \cite{Adams}.)
Specific conjectures about this integral take the form of just how it implicates invariants of
links and 3-manifolds, and how these invariants behave in certain limits of the coupling
constant $k$ in the integral. Many conjectures of this sort can be verified through the
combinatorial models. On the other hand, the really outstanding conjecture about the
integral is that it exists! At the present time there is no measure theory or generalization of
measure theory that supports it. Here is a formal structure of great beauty. It is also a
structure whose consequences can be verified by a remarkable variety of alternative means.
\vspace{3mm}
We now look at the formalism of the Witten integral in more detail and see how it
implicates invariants of knots and links corresponding to each classical Lie algebra. In
order to accomplish this task, we need to introduce the Wilson loop. The Wilson loop is an
exponentiated version of integrating the gauge field along a loop $K$ in three space that
we take to be an embedding (knot) or a curve with transversal self-intersections. For this
discussion, the Wilson loop will be denoted $W_{K}(A) = <K|A>$, indicating
the dependence on the loop $K$ and the field $A$. It is usually written with the
symbolism $tr(Pe^{\oint_{K} A})$. Thus
$$W_{K}(A) = <K|A> = tr(Pe^{\oint_{K} A}).$$ Here the $P$ denotes path ordered
integration - we are integrating and exponentiating matrix valued functions, and so must
keep track of the order of the operations. The symbol $tr$ denotes the trace of the
resulting matrix.
\vspace{3mm}
With the help of the Wilson loop functional on knots and links, Witten writes down a
functional integral for link invariants in a 3-manifold $M$:
$$Z(M,K) = \int DAe^{(ik/4 \pi)S(M,A)} tr(Pe^{\oint_{K} A}) $$
$$= \int DAe^{(ik/4 \pi)S}<K|A>.$$
\noindent
Here $S(M,A)$ is the Chern-Simons Lagrangian, as in the previous discussion. We
abbreviate $S(M,A)$ as $S$ and write $<K|A>$ for the Wilson loop. Unless otherwise
mentioned, the manifold $M$ will be the three-dimensional sphere $S^{3}.$
\vspace{3mm}
An analysis of the formalism of this functional integral
reveals quite a bit about its role in knot theory. This analysis depends upon key facts
relating the curvature of the gauge field to both the Wilson loop and the Chern-Simons
Lagrangian. The idea for using the curvature in this way is due to Lee Smolin
\cite{Smolin} (See also \cite{Ramusino}).
To this end, let us recall the local coordinate structure of the gauge field $A(x)$, where
$x$ is a point in three-space. We can
write $A(x) = A^{a}_{k}(x)T_{a}dx^{k}$ where the index $a$ ranges from $1$ to
$m$ with the Lie
algebra basis $\{T_{1}, T_{2}, T_{3}, ..., T_{m}\}$. The index $k$ goes from $1$ to
$3$. For each choice of $a$ and $k$, $A^{a}_{k}(x)$ is a smooth function defined
on three-space.
In $A(x)$ we sum over the values of repeated indices. The Lie algebra generators
$T_{a}$ are matrices corresponding to a given representation of the Lie algebra of the
gauge group $G.$ We assume some properties of these matrices as follows:
\vspace{3mm}
\noindent 1. $[T_{a} , T_{b}] = i f^{abc}T_{c}$ where $[x ,y] = xy - yx$ , and
$f^{abc}$
(the matrix of structure constants) is totally antisymmetric. There is summation over
repeated indices.
\vspace{3mm}
\noindent 2. $tr(T_{a}T_{b}) = \delta_{ab}/2$ where $\delta_{ab}$ is the Kronecker
delta ($\delta_{ab} = 1$ if $a=b$ and zero otherwise).
\vspace{6mm}
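Both assumed properties can be verified directly in a concrete representation. The sketch below uses the $su(2)$ generators $T_{a} = \sigma_{a}/2$, for which the structure constants are $f^{abc} = \epsilon_{abc}$; this particular choice is ours, for illustration only.

```python
# Verify the two assumed properties for the su(2) generators T_a = sigma_a/2
# (an illustrative choice; here the structure constants are f^{abc} = eps_{abc}).
def mmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mlin(c1, A, c2, B):  # the 2x2 matrix c1*A + c2*B
    return [[c1*A[i][j] + c2*B[i][j] for j in range(2)] for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def eps(a, b, c):  # Levi-Civita symbol on the indices 0, 1, 2
    return ((a - b)*(b - c)*(c - a)) / 2

T = [[[0, 0.5], [0.5, 0]],
     [[0, -0.5j], [0.5j, 0]],
     [[0.5, 0], [0, -0.5]]]

for a in range(3):
    for b in range(3):
        # Property 1: [T_a, T_b] = i f^{abc} T_c  (summed over c)
        comm = mlin(1, mmul(T[a], T[b]), -1, mmul(T[b], T[a]))
        rhs = [[sum(1j*eps(a, b, c)*T[c][i][j] for c in range(3)) for j in range(2)]
               for i in range(2)]
        assert all(abs(comm[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
        # Property 2: tr(T_a T_b) = delta_ab / 2
        delta = 0.5 if a == b else 0.0
        assert abs(trace(mmul(T[a], T[b])) - delta) < 1e-12
print("both Lie algebra properties hold for su(2)")
```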
We also assume some facts about curvature. (The reader may enjoy comparing with the
exposition in \cite{K and P}. But note the difference of conventions on the use of $i$ in
the Wilson loops and curvature definitions.) The first fact is the relation of Wilson loops
and curvature for small loops:
\vspace{3mm}
\noindent {\bf Fact 1.} The result of evaluating a Wilson loop about a very small planar
circle around a point $x$ is proportional to the area enclosed by this circle times the
corresponding value of the curvature tensor of the gauge field evaluated at $x$. The
curvature tensor is written $$F^{a}_{rs}(x)T_{a}dx^{r}dy^{s}.$$
It is the local coordinate expression of $F = dA +A \wedge A.$
\vspace{3mm}
\noindent
{\bf Application of Fact 1.} Consider a given Wilson line $<K|A>$.
Ask how its value will change if it is deformed infinitesimally in the neighborhood of a
point $x$ on the line. Approximate the change according to Fact 1, and regard the point
$x$ as the place of curvature evaluation. Let $\delta<K|A>$ denote the change
in the value of the line. $\delta <K|A>$ is given by the formula
$$\delta <K|A> = dx^{r}dx^{s}F^{a}_{rs}(x)T_{a}<K|A>.$$
This is the first order approximation to the change in the Wilson line.
\vspace{3mm}
In this formula it is understood that the Lie algebra matrices $T_{a}$ are to be inserted
into the Wilson line at the point $x$, and that we are summing over repeated indices. This
means that each $T_{a}<K|A>$ is a new Wilson line obtained from the original line
$<K|A>$ by leaving the form of the loop unchanged, but inserting the matrix $T_{a}$
into that loop at the point $x$. In Figure 6 we have illustrated this mode of insertion of Lie algebra into the Wilson loop. Here and in further illustrations in this section we use $W_{K}(A)$ to denote the Wilson loop. Note that in the diagrammatic version shown in Figure 6 we have let small triangles with legs indicate $dx^{i}.$ The legs correspond to indices just as in our work in the last section with Lie algebras and chord diagrams. The curvature tensor is indicated as a circle with three legs corresponding to the indices of $F_{a}^{rs}.$
\vspace{3mm}
\noindent
{\bf Notation.} In the diagrams in this section we have dropped mention of the factor of $(1/ 4 \pi)$ that occurs in the integral. This convention saves space in the figures. In these figures $L$ denotes the Chern--Simons Lagrangian.
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{60mm}
\special{psfile=F6.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 6 --- Lie algebra and Curvature Tensor insertion into the Wilson Loop}
\end{center}
\end{figure}
\vspace{3mm}
\noindent {\bf Remark.} In thinking about the Wilson line
$<K|A> = tr(Pe^{\oint_{K} A})$, it is helpful to recall Euler's formula for the
exponential:
$$e^{x} = \lim_{n \rightarrow \infty}(1+x/n)^{n}.$$
\noindent
The Wilson line is the limit, over partitions of the loop $K$, of
products of the matrices $(1 + A(x))$ where $x$ runs over the partition. Thus we can
write symbolically,
$$<K|A> = \prod_{x \in K}(1 +A(x))$$
$$= \prod_{x \in K}(1 + A^{a}_{k}(x)T_{a}dx^{k}).$$
\noindent
It is understood that a product of matrices around a closed loop connotes the trace of the
product. The ordering is forced by the one dimensional nature of the loop. Insertion of a
given matrix into this product at a point on the loop is then a well-defined concept. If $T$
is a given matrix then it is understood that $T<K|A>$ denotes the insertion of $T$ into
some point of the loop. In the case above, it is understood from context in the formula that
the insertion is to be performed at the point $x$ indicated in the argument of the curvature.
\vspace{3mm}
\noindent {\bf Remark.} The previous remark implies the following formula for the
variation of the Wilson loop with respect to the gauge field:
$$\delta <K|A>/\delta (A^{a}_{k}(x)) = dx^{k}T_{a}<K|A>.$$
\noindent
Varying the Wilson loop with respect to the gauge field results in the insertion of an
infinitesimal Lie algebra element into the loop. Figure 7 gives a diagrammatic form for this formula. In that Figure we use a capital $D$ with up and down legs to denote the derivative $\delta /\delta (A^{a}_{k}(x)).$ Insertions in the Wilson line are indicated directly by matrix boxes placed in a representative bit of line.
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{50mm}
\special{psfile=F7.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 7 --- Differentiating the Wilson Line}
\end{center}
\end{figure}
\vspace{3mm}
\noindent {\bf Proof.}
$$\delta <K|A>/\delta (A^{a}_{k}(x))$$
$$= \delta \prod_{y \in K}(1 + A^{a}_{k}(y)T_{a}dy^{k})/\delta (A^{a}_{k}(x))$$
$$= \prod_{y<x \in K}(1 + A^{a}_{k}(y)T_{a}dy^{k}) [T_{a}dx^{k}] \prod_{y>x \in
K}(1 + A^{a}_{k}(y)T_{a}dy^{k})$$
$$= dx^{k}T_{a}<K|A>.$$
\vspace{3mm}
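The product representation $<K|A> = \prod_{x \in K}(1 + A(x))$ used in this proof can be checked numerically in the simplest possible setting: for a constant matrix-valued "gauge field" the ordered product collapses to Euler's limit $(1+A/n)^{n} \rightarrow e^{A}$. The sample matrix and step count below are illustrative choices; with an $x$-dependent field the factors no longer commute, and the path ordering genuinely matters.

```python
def mmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mscale(c, A):
    return [[c*A[i][j] for j in range(2)] for i in range(2)]

IDENT = [[1.0+0j, 0j], [0j, 1.0+0j]]

def expm(A, terms=30):
    # Taylor series for the 2x2 matrix exponential
    out, power = IDENT, IDENT
    for k in range(1, terms):
        power = mscale(1.0/k, mmul(power, A))  # power = A^k / k!
        out = madd(out, power)
    return out

A = [[0.2j, 0.3], [-0.3, -0.2j]]    # a sample anti-Hermitian 2x2 "gauge field"
n = 20000
step = madd(IDENT, mscale(1.0/n, A))
P = IDENT
for _ in range(n):                   # ordered product prod_{x in K} (1 + A/n)
    P = mmul(P, step)
E = expm(A)
err = max(abs(P[i][j] - E[i][j]) for i in range(2) for j in range(2))
print("max entry difference:", err)  # shrinks roughly like 1/n
```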
\noindent
{\bf Fact 2.} The variation of the Chern-Simons Lagrangian $S$ with respect to the
gauge potential at a given point in three-space is related to the values of the curvature
tensor at that point by the following formula:
$$F^{a}_{rs}(x) = \epsilon_{rst} \delta S/\delta (A^{a}_{t}(x)).$$
\noindent
Here $\epsilon_{rst}$ is the epsilon symbol for three indices, i.e. it is $+1$ for positive
permutations of $123$,
$-1$ for negative permutations of $123$, and zero if any two indices are repeated. A diagrammatic version of this formula is shown in Figure 8.
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{60mm}
\special{psfile=F8.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 8 --- Variational Formula for Curvature}
\end{center}
\end{figure}
\vspace{3mm}
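The epsilon symbol of Fact 2 has a compact closed form, and its total antisymmetry, which is what makes degenerate volume forms vanish later in the argument, can be checked exhaustively:

```python
import itertools

def eps(r, s, t):
    # Levi-Civita symbol on 1,2,3: +1/-1 for even/odd permutations, 0 on a repeat
    return ((r - s)*(s - t)*(t - r)) // 2

assert eps(1, 2, 3) == 1 and eps(2, 1, 3) == -1 and eps(1, 1, 2) == 0

# Total antisymmetry: swapping any two indices flips the sign
for r, s, t in itertools.product((1, 2, 3), repeat=3):
    assert eps(r, s, t) == -eps(s, r, t) == -eps(r, t, s)

# Contraction with a symmetric object vanishes -- the mechanism by which
# volume forms with a repeated direction give zero
S = [[1, 4, 5], [4, 2, 6], [5, 6, 3]]          # an arbitrary symmetric matrix
total = sum(eps(r, s, t)*S[r-1][s-1]
            for r, s, t in itertools.product((1, 2, 3), repeat=3))
print(total)  # 0
```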
With these facts at hand we are prepared to determine how the Witten integral behaves
under a small deformation of the loop $K.$
\vspace{3mm}
\noindent
{\bf Theorem.}
1. Let $Z(K) = Z(S^{3},K)$ and let $\delta Z(K)$ denote the change of
$Z(K)$ under an infinitesimal change in the loop $K$. Then
$$ \delta Z(K) = (4 \pi i/k) \int DA e^{(ik/4\pi)S}[Vol] T_{a} T_{a} <K|A>$$
\noindent
where $Vol = \epsilon_{rst} dx^{r} dx^{s} dx^{t}.$
The sum is taken over repeated indices, and the insertion is taken of the matrices
$T_{a}T_{a}$ at the chosen point $x$ on the loop $K$ that is regarded as the center
of the deformation. The volume element
$Vol = \epsilon_{rst}dx^{r}dx^{s}dx^{t}$ is taken
with regard to the infinitesimal directions of the loop deformation from this point on the
original loop.
\vspace{3mm}
\noindent
2. The same formula applies, with a different interpretation, to the case where $x$ is a
double point of transversal self intersection of a loop $K$, and the deformation consists in
shifting one of the crossing segments perpendicularly to the plane of
intersection so that the self-intersection point disappears. In this case, one $T_{a}$ is
inserted into each of the transversal crossing segments so that $T_{a}T_{a}<K|A>$
denotes a Wilson loop with a self intersection at $x$ and insertions of $T_{a}$ at $x +
\epsilon_{1}$ and $x + \epsilon_{2}$ where $\epsilon_{1}$ and $\epsilon_{2}$ denote
small displacements along the two arcs of $K$ that intersect at $x.$ In this case, the
volume form is nonzero, with two directions coming from the plane of movement of one
arc, and the perpendicular direction is the direction of the other arc.
\vspace{3mm}
\noindent {\bf Proof.}
$$\delta Z(K) = \int DA e^{(ik/4 \pi)S} \delta <K|A>$$
$$= \int DA e^{(ik/4 \pi)S} dx^{r}dy^{s} F^{a}_{rs}(x) T_{a}<K|A>$$
$$= \int DA e^{(ik/4 \pi)S} dx^{r}dy^{s} \epsilon_{rst} (\delta S/\delta (A^{a}_{t}(x))) T_{a}<K|A>$$
$$= (-4 \pi i/k) \int DA (\delta e^{(ik/4 \pi)S}/\delta (A^{a}_{t}(x))) \epsilon_{rst}
dx^{r}dy^{s}T_{a}<K|A>$$
$$= (4 \pi i/k) \int DA e^{(ik/4 \pi)S} \epsilon_{rst} dx^{r}dy^{s} (\delta
T_{a}<K|A>/\delta (A^{a}_{t}(x)))$$
(integration by parts and the boundary terms vanish)
$$= (4 \pi i/k) \int DA e^{(ik/4 \pi)S}[Vol] T_{a}T_{a}<K|A>.$$
This completes the formalism of the proof. In the case of part 2., a change of interpretation
occurs at the point in the argument when the Wilson line is differentiated. Differentiating a
self intersecting Wilson line at a point of self intersection is equivalent to differentiating the
corresponding product of
matrices with respect to a variable that occurs at two points in the product (corresponding
to the two places where the loop passes through the point). One of these derivatives gives
rise to a term with volume form equal to zero, the other term is the one that is described in
part 2. This completes the proof of the Theorem. //
\vspace{3mm}
\noindent
The formalism of this proof is illustrated in Figure 9.
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{170mm}
\special{psfile=F9.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 9 --- Varying the Functional Integral by Varying the Line}
\end{center}
\end{figure}
\vspace{3mm}
In the case of switching a crossing the key point is to write the crossing switch as a
composition of first moving a segment to obtain a transversal intersection of the diagram
with itself, and then to continue the motion to complete the switch. One then analyses
separately the case where $x$ is a double point of transversal self intersection of a loop
$K,$ and the deformation consists in shifting one of the crossing segments
perpendicularly to the plane of
intersection so that the self-intersection point disappears. In this case, one $T_{a}$ is
inserted into each of the transversal crossing segments so that $T_{a}T_{a}<K|A>$
denotes a Wilson loop with a self intersection at $x$ and insertions of $T_{a}$ at
$x + \epsilon_{1}$ and $x + \epsilon_{2}$ as in part $2.$ of the Theorem above. The
first insertion is in the moving line, due to curvature. The second insertion is the
consequence of differentiating the self-touching Wilson line. Since this line can be regarded
as a product, the differentiation occurs twice at the point of intersection, and it is the second
direction that produces the non-vanishing volume form.
\vspace{3mm}
Up to the choice of our conventions for constants, the switching formula is, as shown
below (See Figure 10).
$$Z(K_{+}) - Z(K_{-}) = (4 \pi i/k)\int DA e^{(ik/4\pi)S} T_{a}T_{a}<K_{**}|A>$$
$$= (4 \pi i/k) Z(T_{a}T_{a}K_{**}),$$
\noindent
where $K_{**}$ denotes the result of replacing the crossing by a self-touching crossing.
We distinguish this from adding a graphical node at this crossing by using the double star
notation.
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{70mm}
\special{psfile=F10.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 10 --- The Difference Formula}
\end{center}
\end{figure}
\vspace{3mm}
A key point is to notice that the Lie algebra insertion for this difference is exactly what is
done (in chord diagrams) to make the weight systems for Vassiliev invariants (without the
framing compensation). Here we take formally the perturbative expansion of the Witten
integral to obtain Vassiliev invariants as coefficients of the powers $1/k^{n}$. Thus
the formalism of the Witten functional integral takes one directly to these weight systems in
the case of the classical Lie algebras. In this way the functional integral is central to the
structure of the Vassiliev invariants.
\vspace{3mm}
\subsection{The Loop Transform}
Suppose that $\psi (A)$ is a (complex valued) function defined on gauge fields. Then we define formally the {\em loop transform}
$\widehat{\psi}(K)$, a function on embedded loops in three dimensional space, by the formula
$$\widehat{\psi}(K) = \int DA \psi(A) W_{K}(A).$$
\noindent
If $\Delta$ is a differential operator defined on $\psi(A),$ then we can use this integral transform to shift the effect of $\Delta$ to an operator on loops via integration by parts:
$$\widehat{ \Delta \psi }(K) = \int DA \Delta \psi(A) W_{K}(A)$$
$$ = - \int DA \psi(A) \Delta W_{K}(A).$$
\noindent
When $\Delta$ is applied to the Wilson loop the result can be an understandable geometric or topological operation. In Figures 11, 12 and 13 we illustrate this situation with diagrammatically defined operators $G$ and $H.$
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{160mm}
\special{psfile=F11.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 11 --- The Loop Transform and Operators G and H}
\end{center}
\end{figure}
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{160mm}
\special{psfile=F12.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 12 --- The Diffeomorphism Constraint}
\end{center}
\end{figure}
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{160mm}
\special{psfile=F13.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 13 --- The Hamiltonian Constraint}
\end{center}
\end{figure}
\vspace{3mm}
\noindent
We see from Figure 12 that
$$\widehat{ G \psi }(K) = \delta \widehat{ \psi }(K)$$
\noindent
where this variation refers to the effect of varying $K$ by a small loop. As we saw in this section, this means that if $\widehat{ \psi }(K)$ is a topological invariant of knots and links, then
$\widehat{ G \psi }(K) =0$ for all embedded loops $K.$ This condition is a transform analogue of the equation $G \psi(A) =0.$
This equation is the differential analogue of an invariant of knots and links. It may happen that $\delta \widehat{ \psi }(K)$ is not strictly zero, as in the case of our framed knot invariants.
For example, with $$\psi(A) = e^{(ik/4\pi) \int tr(A \wedge dA + (2/3)A \wedge A \wedge A)}$$
we conclude that $\widehat{ G \psi }(K)$ is zero for flat deformations (in the sense of this section) of the loop $K,$ but can be non-zero in the presence of a twist or curl. In this sense the loop transform provides a subtle variation on the strict condition $G \psi(A) =0.$
\vspace{3mm}
In \cite{ASR} and earlier publications by these authors, the loop transform is used to study a reformulation and quantization of Einstein gravity. The differential geometric gravity theory is reformulated in terms of a background gauge connection and in the quantization, the Hilbert space consists in functions $\psi(A)$ that are required to satisfy the constraints
$$G \psi =0$$
\noindent
and
$$H \psi =0$$
\noindent
where $H$ is the operator shown in Figure 13. Thus we see that
$\widehat{G \psi}(K)$ can be partially zero in the sense of producing a framed knot invariant, and (from Figure 13 and the antisymmetry of the epsilon) that $\widehat{H \psi}(K)$ is zero for non-self intersecting loops. This means that the loop transforms of $G$ and $H$ can be used to investigate a subtle variation of the original scheme for the quantization of gravity. This program is being actively pursued by a number of researchers. The Vassiliev invariants arising from a topologically invariant loop transform should be of significance to this theory. This theme will be explored in a subsequent paper.
\vspace{3mm}
\section{Wilson Lines, Axial Gauge and the Kontsevich Integrals}
In this section we follow the gauge fixing method used by Fr\"ohlich and King
\cite{Frohlich and King}. Their paper was written before the advent of Vassiliev
invariants, but contains, as we shall see, nearly the whole story about the Kontsevich
integral. A similar approach to ours can be found in \cite{LP}. In our case we have simplified the determination of the inverse operator for this formalism and we have given a few more details about the calculation of the correlation functions than is customary in physics literature. I hope that this approach makes this subject more accessible to mathematicians. A heuristic argument of this kind contains a great deal of valuable mathematics. It is clear that these matters will eventually be given a fully rigorous treatment. In fact, in the present case there is a rigorous treatment, due to Albevario and Sen-Gupta \cite{AS} of the functional integral {\em after} the light-cone gauge has been imposed.
\vspace{3mm}
\noindent
Let $(x^{0}, x^{1}, x^{2})$ denote a point in three dimensional space.
Change to light-cone coordinates
$$x^{+} = x^{1} + x^{2}$$ and
$$x^{-} = x^{1} - x^{2}.$$
\noindent
Let $t$ denote $x^{0}.$
\vspace{3mm}
\noindent
Then the gauge connection can be written in the form
$$A(x) = A_{+}(x)dx^{+} + A_{-}(x)dx^{-} + A_{0}(x)dt.$$
\vspace{3mm}
\noindent
Let $CS(A)$ denote the Chern-Simons integral (over the three dimensional sphere)
$$CS(A) = (1/4\pi)\int tr(A \wedge dA + (2/3) A \wedge A \wedge A).$$
\noindent
We define {\em axial gauge} to be the condition that $A_{-} = 0.$
We shall now work with the functional integral of the previous section under the axial
gauge restriction. In axial gauge we have that
$A \wedge A \wedge A = 0$ and so
$$CS(A) = (1/4\pi)\int tr(A \wedge dA).$$
\noindent
Letting $\partial_{\pm}$ denote partial differentiation with respect to $x^{\pm}$, we get
the following formula in axial gauge
$$A \wedge dA = (A_{+} \partial_{-} A_{0} - A_{0} \partial_{-}A_{+})dx^{+} \wedge
dx^{-} \wedge dt.$$
\noindent
Thus, after integration by parts, we obtain the following formula for the Chern-Simons
integral:
$$CS(A) = (1/2 \pi) \int tr(A_{+} \partial_{-} A_{0}) dx^{+} \wedge dx^{-} \wedge
dt.$$
\noindent
Letting $\partial_{i}$ denote the partial derivative with respect to $x_{i}$, we have that
$$\partial_{+} \partial_{-} = \partial_{1}^{2} - \partial_{2}^{2}.$$ If we replace
$x^{2}$ with $ix^{2}$ where
$i^{2} = -1$, then $\partial_{+} \partial_{-}$ is replaced by
$$\partial_{1}^{2} + \partial_{2}^{2} = \nabla^{2}.$$
We now make this replacement so that the analysis can be expressed over the complex
numbers.
\vspace{3mm}
\noindent
Letting $$z = x^{1} + ix^{2},$$ it is well known that
$$\nabla^{2} ln(z) = 2 \pi \delta(z)$$
where $\delta(z)$ denotes the Dirac delta function and $ln(z)$ is the natural logarithm of
$z.$ Thus we can write
$$(\partial_{+} \partial_{-})^{-1} = (1/2 \pi)ln(z).$$
Note that $\partial_{+} = \partial_{z} = \partial /\partial z$ after the replacement of
$x^{2}$ by $ix^{2}.$ As a result we have that
$$(\partial_{-})^{-1} = \partial_{+} (\partial_{+} \partial_{-})^{-1} =
\partial_{+} (1/2 \pi)ln(z) = 1/2 \pi z.$$
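The key analytic input here, $\partial_{z} \ln(z) = 1/z$ away from the branch cut of the logarithm, is easy to sanity-check numerically; the sample points and step size below are arbitrary choices for illustration.

```python
import cmath

# Check numerically that d/dz [ln(z)/(2*pi)] = 1/(2*pi*z) at sample points
# away from the branch cut of the logarithm (a finite-difference sketch).
h = 1e-6
for z in (1+1j, -2+0.5j, 0.3-0.7j):
    deriv = (cmath.log(z + h) - cmath.log(z - h)) / (2*h) / (2*cmath.pi)
    exact = 1 / (2*cmath.pi*z)
    assert abs(deriv - exact) < 1e-8
print("d/dz ln(z)/2pi matches 1/(2 pi z) at all sample points")
```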
\noindent
Now that we know the inverse of the operator $\partial_{-}$ we are in a position to treat
the Chern-Simons integral as a quadratic form in the pattern
$$ (-1/2)<A, LA> = - iCS(A)$$
where the operator
$$L = \partial_{-}.$$
Since we know $L^{-1}$, we can express the functional integral as a Gaussian integral:
\vspace{3mm}
\noindent
We replace
$$Z(K) = \int DAe^{ikCS(A)} tr(Pe^{\oint_{K} A})$$
\noindent
by
$$Z(K) = \int DAe^{iCS(A)} tr(Pe^{\oint_{K} A/\sqrt k})$$
\noindent
by sending $A$ to $(1/ \sqrt k)A$. We then replace this version by
$$Z(K) = \int DAe^{(-1/2)<A, LA>} tr(Pe^{\oint_{K} A/\sqrt k}).$$
\noindent
In this last formulation we can use our knowledge of $L^{-1}$ to determine the
correlation functions and express $Z(K)$ perturbatively in powers of $(1/ \sqrt k).$
\vspace{3mm}
\noindent
{\bf Proposition.}
Letting
$$<\phi(A)> = \int DA e^{(-1/2)<A, LA>}\phi(A) / \int DA e^{(-1/2)<A, LA>}$$ for
any functional $\phi(A)$,
we find that
$$<A_{+}^{a}(z,t)A_{+}^{b}(w,s)> = 0,$$
$$<A_{0}^{a}(z,t)A_{0}^{b}(w,s)> = 0,$$
$$<A_{+}^{a}(z,t)A_{0}^{b}(w,s)> = \kappa \delta^{ab} \delta(t-s)/(z-w)$$ where
$\kappa$ is a constant.
\vspace{3mm}
\noindent
{\bf Proof Sketch.}
Let's recall how these correlation functions are obtained.
The basic formalism for the Gaussian integration is in the pattern
$$<A(z)A(w)> = \int DA e^{(-1/2)<A, LA>} A(z)A(w) / \int DA e^{(-1/2)<A, LA>}$$
$$ = ((\partial / \partial J(z)) (\partial / \partial J(w)) |_{J=0})
e^{(1/2)<J, L^{-1}J>}$$
\noindent
Letting $G*J(z) = \int dw G(z-w)J(w)$, we have that when
$$LG(z) = \delta(z)$$
\noindent
($\delta(z)$ is a Dirac delta function of $z$.) then
$$LG*J(z) = \int dw LG(z-w)J(w) = \int dw \delta(z-w) J(w) = J(z)$$
\noindent
Thus $G*J(z)$ can be identified with $L^{-1}J(z)$.
\vspace{3mm}
\noindent
In our case
$$G(z) = 1/ 2 \pi z$$
\noindent
and
$$L^{-1}J(z) = G*J(z) = (1/2 \pi)\int dw J(w)/(z-w).$$
\noindent
Thus
$$<J(z),L^{-1}J(z)> = <J(z), G*J(z)> = (1/ 2\pi) \int tr(J(z) (\int dw J(w)/(z-w))) dz$$
$$ = (1/ 2\pi) \int \int dz dw tr(J(z)J(w))/(z-w).$$
\noindent
The results on the correlation functions then follow directly from differentiating this
expression. Note that the Kronecker delta on Lie algebra indices is a result of the corresponding Kronecker delta in the trace formula
$tr(T_{a}T_{b}) = \delta_{ab}/2$ for products of Lie algebra generators. The delta function $\delta(t-s)$ in the $x^{0} = t, s$ coordinates is a consequence of the evaluation at $J$ equal to zero.//
\vspace{3mm}
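The pattern behind this proof sketch, that the two-point function of a Gaussian integral is the inverse of the quadratic-form operator, can be seen in a finite-dimensional toy model: sample $x$ with weight $e^{-\frac{1}{2}x \cdot Lx}$ for a positive-definite matrix $L$ and check that the sample covariance approaches $L^{-1}$. The $2 \times 2$ matrix and sample size below are our own illustrative choices; the field-theory case replaces $L^{-1}$ by the Green's function.

```python
import math, random

random.seed(0)

# Finite-dimensional analogue of <A(z)A(w)> = L^{-1}(z,w):
# for the Gaussian weight exp(-(1/2) x.Lx), the two-point correlation
# <x_i x_j> equals (L^{-1})_{ij}.  (Toy sketch of the propagator identity.)
L = [[2.0, 0.5], [0.5, 1.0]]
det = L[0][0]*L[1][1] - L[0][1]*L[1][0]
Linv = [[ L[1][1]/det, -L[0][1]/det],
        [-L[1][0]/det,  L[0][0]/det]]

# Sample x ~ N(0, L^{-1}) via the Cholesky factor of the covariance L^{-1}
m11 = math.sqrt(Linv[0][0])
m21 = Linv[1][0]/m11
m22 = math.sqrt(Linv[1][1] - m21*m21)

N = 200_000
acc = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(N):
    g1, g2 = random.gauss(0, 1), random.gauss(0, 1)
    x = (m11*g1, m21*g1 + m22*g2)
    for i in range(2):
        for j in range(2):
            acc[i][j] += x[i]*x[j]

cov = [[acc[i][j]/N for j in range(2)] for i in range(2)]
err = max(abs(cov[i][j] - Linv[i][j]) for i in range(2) for j in range(2))
print("max |<x_i x_j> - (L^{-1})_{ij}| =", err)  # shrinks like 1/sqrt(N)
```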
We are now prepared to give an explicit form to the perturbative expansion for
$$<K>= Z(K)/\int DAe^{(-1/2)<A, LA>}$$
$$= \int DAe^{(-1/2)<A, LA>} tr(Pe^{\oint_{K} A/\sqrt k})/ \int DAe^{(-1/2)<A, LA>}$$
$$ = \int DAe^{(-1/2)<A, LA>} tr(\prod_{x \in K} (1 + (A/\sqrt k)))/\int DAe^{(-
1/2)<A, LA>}$$
$$ = \sum_{n} (1/k^{n/2}) \oint_{K_{1} < ... < K_{n}} <A(x_{1}) ... A(x_{n})>.$$
\noindent
The latter summation can be rewritten (Wick expansion) into a sum over products of pair
correlations, and we have already worked out the values of these. In the formula above we
have written $K_{1} < ... < K_{n}$ to denote the integration over variables $x_{1} , ...
x_{n}$ on $K$ so that $x_{1} < ... < x_{n}$ in the ordering induced on the loop $K$ by
choosing a basepoint on the loop. After the Wick expansion, we get
$$<K> = \sum_{m} (1/k^{m}) \oint_{K_{1} < ... < K_{n}}
\sum_{P= \{x_{i} < x'_{i}| i = 1, ... m\}}
\prod_{i}<A(x_{i})A(x'_{i})>.$$
\noindent
Now we know that
$$<A(x_{i})A(x'_{i})> =
<A^{a}_{k}(x_{i})A^{b}_{l}(x'_{i})>T_{a}T_{b}dx^{k}dx^{l}.$$
\noindent
Rewriting this in the complexified axial gauge coordinates, the only contribution is
$$<A_{+}^{a}(z,t)A_{0}^{b}(w,s)> = \kappa \delta^{ab} \delta(t-s)/(z-w).$$
\noindent
Thus
$$<A(x_{i})A(x'_{i})>$$
$$=<A^{a}_{+}(x_{i})A^{a}_{0}(x'_{i})>T_{a}T_{a}dx^{+} \wedge dt +
<A^{a}_{0}(x_{i})A^{a}_{+}(x'_{i})>T_{a}T_{a}dx^{+} \wedge dt$$
$$= (dz-dz')/(z-z') [i/i']$$
where $[i/i']$ denotes the insertion of the Lie algebra elements
$T_{a}T_{a}$ into the Wilson loop.
\vspace{3mm}
\noindent
As a result, for each partition of the loop and choice of pairings
$P= \{x_{i} < x'_{i}| i = 1, ... m\}$ we get an evaluation $D_{P}$ of the trace of these
insertions into the loop. This is the value of the corresponding chord diagram in the weight
systems for Vassiliev invariants. These chord diagram evaluations then figure in our
formula as shown below:
$$<K> = \sum_{m} (1/k^{m}) \sum_{P} D_{P} \oint_{K_{1} < ... < K_{n}}
\bigwedge_{i=1}^{m}(dz_{i} - dz'_{i})/(z_{i} - z'_{i})$$
\noindent
This is a Wilson loop ordering version of the Kontsevich integral. To see the usual form of
the integral appear, we change from the time variable (parametrization) associated with the
loop itself to time variables associated with a specific global direction of time in three
dimensional space that is perpendicular to the complex plane defined by the axial gauge
coordinates. It is easy to see that this results in one change of sign for each segment of the
knot diagram supporting a pair correlation where the segment is oriented (Wilson loop
parameter) downward with respect to the global time direction. This results in the rewrite
of our formula to
$$<K> = \sum_{m} (1/k^{m}) \sum_{P} (-1)^{|P \downarrow |} D_{P} \int_{t_{1} <
... < t_{n}}
\bigwedge_{i=1}^{m}(dz_{i} - dz'_{i})/(z_{i} - z'_{i})$$
\noindent
where $|P \downarrow |$ denotes the number of points $(z_{i},t_{i})$ or $(z'_{i},t_{i})$
in the pairings where the knot diagram is oriented downward with respect to global time.
The integration around the Wilson loop has been replaced by integration in the vertical time
direction and is so indicated by the replacement of $\{ K_{1} < ... < K_{n} \}$ with $\{
t_{1} < ... < t_{n} \}.$
\vspace{3mm}
\noindent
The coefficients of $1/k^{m}$ in this expansion are exactly the Kontsevich integrals for
the weight systems $D_{P}$. See Figure 14.
\vspace{3mm}
\begin{figure}[htbp]
\vspace*{160mm}
\special{psfile=F14.ps}
\vspace*{13pt}
\begin{center}
{\bf Figure 14 --- Applying The Kontsevich Integral}
\end{center}
\end{figure}
\vspace{3mm}
\noindent
It was Kontsevich's insight to see (by different means) that
these integrals could be used to construct Vassiliev invariants from arbitrary weight
systems satisfying the four-term relations. Here we have seen how these integrals arise
naturally in the axial gauge fixing of the Witten functional integral.
\vspace{3mm}
\noindent
{\bf Remark.} The careful reader will note that we have not made a discussion of the role of the maxima and minima of the space curve of the knot with respect to the height direction ($t$). In fact one has to take these maxima and minima very carefully into account and to divide by the corresponding evaluated loop pattern (with these maxima and minima) to make the Kontsevich integral well-defined and actually invariant under ambient isotopy (with appropriate framing correction as well). The corresponding difficulty appears here in the fact that because of the gauge choice the Wilson lines are actually only defined in the complement of the maxima and minima and one needs to analyse a limiting procedure to take care of the inclusion of these points in the Wilson line. This points to one of the places where this correspondence with the Kontsevich integrals as Feynman integrals for Witten's functional integral could stand closer mathematical scrutiny. One purpose of this paper has been to outline the correspondences that exist and to put enough light on the situation to allow a full story to eventually appear.
\vspace{3mm}
\section{Introduction}
The development of ever more efficient and sophisticated Quantum Monte
Carlo (QMC) algorithms has greatly advanced our understanding of
quantum many body systems and the various phases that exist in such
models (superfluid, Mott insulator, Bose glass, supersolid, etc.)
and the transitions between them. Examples of such QMC algorithms
include the ``path integral algorithm'' which was used to study the
details of the Helium superfluid transition in 2 and 3
dimensions\cite{roy}, the World Line algorithm\cite{worldline}, which
was used to study the various phases of the bosonic Hubbard
model \cite{batrouni1}. Improvements of the World Line algorithm for
hard core bosons (infinitely repulsive contact interaction) came in
the form of {\it cluster} algorithms\cite{cluster} where one updates
many variables at a time. Such algorithms converge much faster and
suffer much less from critical slowing down. The next improvement was
to eliminate Trotter errors\cite{trotter} which come from discretizing
the imaginary time ({\it i.e.} inverse temperature) direction. These
continuous imaginary time cluster algorithms\cite{cont-cluster} are
the state of the art in hardcore boson simulations, although their
efficiency goes down in the presence of disorder and of longer-range
interactions such as a next near neighbour repulsion.
In the case of extreme soft-core, high-density bosonic models a
different algorithm was developed based on a duality transformation. In
this case, the amplitude of the order parameter is assumed constant,
leaving only the phase, giving rise to a model of the $XY$ variety
called the Quantum Phase Model (QPM). The dual of this model is easily
obtained when the Villain approximation\cite{villain} of the action is
taken, leaving a model of interacting conserved integer current loops
that can be quite easily simulated.\cite{loop} This loop algorithm was
used productively by several groups to study the phases of the QPM
with or without disorder.\cite{loop}
One major disadvantage of the above algorithms is their inability to
measure the correlation function of the order parameter, $\langle
a_ia_j^{\dagger}\rangle$ when $|i-j|$ is greater than $1$. This
quantity is very interesting since in phase transitions the behaviour
of the order parameter is of prime importance.
We will outline below how to perform the duality transformation
exactly for the soft and hard core bosonic Hubbard models. Then,
concentrating on the hardcore case, we will construct the (exact) loop
algorithm and demonstrate how it can be used to obtain the correlation
function of the order parameter in addition to the usual
quantities of interest (energy, density-density correlation function,
superfluid density).
\section{The Bosonic Hubbard Model}
We are interested in simulating models whose Hamiltonian has the form
\smallskip
$$ H=-t\sum_{\langle
ij\rangle}(a_{i}^{\dagger}a_{j}+a_{j}^{\dagger}a_{i})-
\sum_{i}\mu_{i}\hat n_{i}+V_{0}\sum_{i}\hat
n_{i}^{2}+V_{1}\sum_{\langle ij \rangle}\hat n_{i}\hat n_{j}+
V_{2}\sum_{\langle\langle ik\rangle\rangle}\hat n_{i}\hat
n_{k}.\eqno(1)
$$
\smallskip
\noindent In this equation, $t$ is the transfer integral (the hopping
parameter) which sets the energy scale (in the simulations we set it
equal to $1$), $V_0$, $V_1$, and $V_2$ are respectively the contact,
the nearest neighbor, and the next near neighbor interactions, which
we always take to be repulsive. On the square lattice, the next near
neighbor is chosen along the diagonal, while in one dimension the
choice is the obvious one. $a_i$ and $a_i^{\dagger}$ are destruction
and creation operators on site $i$ satisfying the usual softcore boson
commutation laws, $[a_i,a_j^{\dagger}]=\delta_{i,j}$, and $\hat
n_i=a_i^{\dagger}a_i^{}$ is the number operator on site $i$. In
Eq. (1), $\langle ij\rangle$ and $\langle\langle ik\rangle\rangle$
label sums over near and next near neighbors. The first term, which
describes the hopping of the bosons among near neighbor sites, gives
the kinetic energy. $\mu_{i}=\mu+\delta_{i}$ is the site-dependent
chemical potential. For a clean system {\it i.e.} with no disorder, we
take $\delta_{i}=0$. To include the effect of disorder, we take
$\delta_{i}$ to be a uniformly distributed random number
$-\Delta\le\delta_i\le\Delta$. We have chosen to disorder the system
with a random site energy, which preferentially attracts or repels
bosons to particular sites. Other choices are possible, such as
disordering the hopping parameter, $t$, or some of the interaction
strengths, $V_{0,1,2}$. It is thought that these different ways of
introducing disorder yield similar results.
The quantum statistical mechanics of this system is given by the
partition function Z,
\smallskip
$$ Z = {\rm Tr}e^{-\beta H},\eqno(2)$$
\smallskip
\noindent where $\beta$ is the inverse temperature. The expectation
value of a quantum operator is given by
\smallskip
$$ \langle {\cal O} \rangle = {1 \over Z} {\rm Tr}({\cal O}e^{-\beta
H}).\eqno(3)$$
\smallskip
\section{Coherent State Representation}
We cannot do numerical simulations directly on this formulation of the
partition function because it is in terms of operators. We must first
find a c-number representation which can be incorporated into a
simulation algorithm. There are several such representations. For
example the wavefunction representation leads to what is known as the
``path integral'' method\cite{roy} and the occupation state
representation leads to the World Line algorithm.\cite{worldline} Here
we will use the {\it coherent states} representation, {\it i.e.}
eigenstates of the destruction operator,
\smallskip
$$a_i|\{ \Phi \} \rangle = \phi(i)|\{ \Phi \} \rangle,\eqno(4a)$$
\smallskip
$$\langle \{ \Phi \} | a_i^{\dagger} = \langle
\{ \Phi \} | \phi^{\star}(i),\eqno(4b)$$
\smallskip
\noindent where the eigenvalues $\phi(i)$ are complex numbers that
are defined on the sites, $i$, of the lattice. In terms of the vacuum
of the more familiar occupation number representation ($a|0\rangle=0,
a^{\dagger}|0\rangle=|1\rangle$), the coherent state $|\{\Phi\}\rangle$
is defined as follows
\smallskip
$$|\{\Phi \}\rangle = \exp\Biggl (\sum_i(-{{\phi^{\star}(i)\phi(i)}
\over 2} +\phi(i)a_i^{\dagger})\Biggr ) |0\rangle.\eqno(5)$$
\smallskip
\noindent With this normalization we obtain the inner product of two
coherent states
\smallskip
$$\langle \{ \Psi \} | \{ \Phi \} \rangle = \exp\Biggl (\sum_i\Bigl
(\psi^{\star}(i)\phi(i)-{1\over 2}\phi^{\star}(i)\phi(i)- {1\over
2}\psi^{\star}(i)\psi(i) \Bigr ) \Biggr ),\eqno(6)$$
\smallskip
\noindent and the resolution of unity
\smallskip
$$1 = \int \prod_i {{d^2\phi(i)} \over {2\pi}} | \{ \Phi \} \rangle \langle \{
\Phi \}|.\eqno(7)$$
Now we write the partition function, Eq. (2), as
\smallskip
$$ Z = {\rm Tr} \bigl ( e^{-\delta H} e^{-\delta H}
e^{-\delta H} \cdots e^{-\delta H}\bigr ),\eqno(8)
$$
\smallskip
\noindent where $\delta \equiv \beta/L_{\tau}$, and $L_{\tau}$ is an
integer large enough that $\delta \ll 1$. We express the partition
function this way because we can now express the exponentials in
Eq. (8) in a form suitable for easy (albeit approximate)
evaluation. Between each pair of exponentials introduce the resolution
of unity, Eq. 7. Using standard manipulations\cite{negele} we find
\smallskip
$$Z=\int \prod_{r,\tau} {{d^2 \phi(r,\tau)}\over \pi}
e^{-S(\phi^{\star},\phi)},\eqno(9)$$
\smallskip
\noindent where the action is given to first order in $\delta$
by\cite{caution}
\smallskip
$$S = \sum_{r,\tau} \phi^{\star}(r,\tau)\Delta_{-\tau}\phi(r,\tau) +
\delta \sum_{\tau} H[\phi^{\star}(r,\tau+1),\phi(r,\tau)].\eqno(10)$$
\smallskip
In Eq. 10, $\tau$ denotes imaginary time, {\it i.e.} the inverse
temperature direction $\beta$. Also,
$H[\phi^{\star}(r,\tau+1),\phi(r,\tau)]$ simply means that at the
imaginary time $\tau$, we replace in the Hamiltonian, Eq. 1, the
destruction operator $a_r$ by the complex field $\phi(r,\tau)$, and
the creation operator, $a^{\dagger}_r$, by
$\phi^{\star}(r,\tau+1)$.\cite{negele} In this article our notation for
the forward and backward finite difference operators is
\smallskip
$$\Delta_{\mu} \phi(r) \equiv \phi(r+{\hat \mu})-\phi(r),\eqno(11a)$$
\smallskip
$$\Delta_{-\mu} \phi(r) \equiv \phi(r)-\phi(r-{\hat \mu}),\eqno(11b)$$
\smallskip
\noindent It is well known\cite{negele} that in the first term of
Eq. 10 we must have $\Delta_{-\tau}$ and not $\Delta_{\tau}$ although
in the continuum limit one might argue that they both lead to the same
result. Such arguments are incorrect. What is perhaps less strongly
emphasized is that the following ``approximation'',
\smallskip
$$\delta \sum_{\tau} H[\phi^{\star}(r,\tau),\phi(r,\tau-1)]=\delta
\sum_{\tau} H[\phi^{\star}(r,\tau),\phi(r,\tau)] + {\cal O}(\delta
\beta).\eqno(12)$$
\smallskip
\noindent which is often used in the literature, is not correct. The
terms ignored are in fact of order $\beta$, not $\delta \beta$, which
in effect changes the Hamiltonian under consideration.\cite{batrouni2}
This can be easily illustrated with the quantum harmonic
oscillator. We will not use this approximation in what follows.
\section{The Duality Transformation}
In what follows we will take $V_0 = V_1 = V_2 = 0$ in order to
simplify the presentation. Nonzero values of these parameters can be
dealt with in a straightforward way.\cite{batrouni2} Interestingly,
the hardcore boson case (with no near and next near neighbor
repulsion) is obtained directly from this case of zero interaction
(see below).
To effect the duality transformation on the partition function, Eqs. 9
and 10, we follow the method of reference~\cite{batrouni3}. To this end
we first write the coherent states field in terms of the amplitudes
and phases: $\phi(r,\tau)= \alpha(r,\tau)
e^{i\theta(r,\tau)}$. Consequently,
\smallskip
$$\int \prod_{r,\tau} d^2 \phi(r,\tau) \rightarrow
\int \prod_{r,\tau} \alpha(r,\tau) d\alpha(r,\tau)
d\theta(r,\tau),\eqno(13)$$
\smallskip
\noindent where we are dropping irrelevant overall constant
factors. The action \vfil\eject
\smallskip
$$
S = \sum_{ {\vec r}, \tau } \Biggl ( \phi^{\star}({\vec r},
\tau)\Delta_{-\tau}\phi({\vec r},
\tau)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$$
$$~~~~~~~~~~~~~~~~~~~~~~~-t\delta \sum_{k=1}^{d}\Bigl (
\phi^{\star}({\vec r}, \tau+1)\phi({\vec r}+{\hat k}, \tau)+\\
\phi^{\star}({\vec r}+{\hat k}, \tau+1)\phi({\vec r}, \tau)\Bigr )
\Biggr ),\eqno(14)
$$
\smallskip
\noindent becomes
\smallskip
$$
S = \sum_{{\vec r}, \tau} \Biggl ( \alpha({\vec
r},\tau)e^{-i\theta({\vec r},\tau)} \Bigl ( \alpha({\vec
r},\tau)e^{i\theta({\vec r},\tau)}-\alpha({\vec
r},\tau-1)e^{i\theta({\vec r},\tau-1)} \Bigr
)
$$
$$~~~~~~~~~~~~~-t\delta\sum_{k=1}^{d}\Bigl ( \alpha({\vec r},
\tau+1)\alpha({\vec r}+{\hat k},\tau)e^{i(\theta({\vec r}+{\hat
k},\tau)-\theta({\vec r},\tau+1))}
$$
$$~~~~~~~~~~~~~~~~~~~~~~~~~+\alpha({\vec r}, \tau)\alpha({\vec
r}+{\hat k},\tau+1)e^{i(\theta({\vec r},\tau)-\theta({\vec
r}+{\hat k},\tau+1))}\Bigr ) \Biggr ).\eqno(15)
$$
\smallskip
\noindent Here we took the model to be in $d$ space dimensions
indicated by the index $k=1,...,d$. ${\hat k}$ is a unit vector in
the $k$th direction.
We see in this equation that the phase, $\theta({\vec r}, \tau)$,
which is a site variable, appears only as differences of near neighbor
sites\cite{footnote}. Thus we can see the beginnings of an $XY$-like
model. The fact that we always have this combination of variables
motivates us to change the variable of integration from the site
variable $\theta({\vec r}, \tau)$ to the {\it link} or {\it bond}
variables $\theta_k({\vec r}, \tau)\equiv \Delta_k \theta({\vec r},
\tau)$ and $\theta_{\tau}({\vec r}, \tau)\equiv \Delta_{\tau}
\theta({\vec r},\tau)$. In two space dimensions, the partition
function thus becomes\cite{batrouni3}
\smallskip
$$
Z = \int \prod_{r,\tau,\mu=1,2,3}\Bigl ( \alpha(r,\tau) d\alpha(r,\tau)
d\theta_{\mu}(r,\tau) \Bigr ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$$
$$~~~~~~~~~~~~~~\prod_{plaquettes} \Biggl (\delta \Big [
e^{i\epsilon_{\mu \nu \rho}\Delta_{\nu}\theta_{\rho}({\vec r},\tau)}-1\Bigr
] \Biggr )~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$$
$$~~~~~~~~~~~~~\prod_{\vec r} \Biggl ( \delta \Bigl [
e^{i\sum_{\tau}\theta_{\tau}({\vec r},\tau)}-1\Bigr ]\Biggr )
e^{-S},~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\eqno(16)
$$
\smallskip
\noindent where $\epsilon_{\mu \nu \rho}$ is the totally antisymmetric
tensor in three dimensions. Note that the product over plaquettes is
over all space-space and space-time plaquettes. The $\delta$-functions
constitute the entire Jacobian of this variable change.\cite{batrouni3} Their
geometrical interpretation is simple. Even though the model is now
expressed in terms of bonds, it is actually a site model. It is clear
from the definitions of the link variables that if we sum them along
any directed closed path the result is zero (mod $2\pi$). This is
known as the Bianchi identity which is lost when the model is
expressed in terms of bonds and needs to be enforced as a
constraint. The first set of $\delta$-functions in the above equation
enforces the ``local'' Bianchi identities, {\it i.e.} those due to
topologically trivial loops. The second set enforces the ``global''
Bianchi identities, {\it i.e.} those due to topologically nontrivial
loops in the imaginary time direction due to the periodic boundary
conditions. These constraints have several interesting relationships
to various geometrical aspects of the theory.\cite{batrouni3} Here we
will simply exploit their relationship to the duality transformation. As
was shown in Ref.~\cite{batrouni3} the dual variables are the Fourier
conjugates to these constraints. In other words, the Fourier
expansions of these $\delta$-functions
\smallskip
$$
\prod_{plaquettes} \Biggl (\delta \Big [ e^{i\epsilon_{\mu
\nu \rho}\Delta_{\nu}\theta_{\rho}({\vec r},\tau)}-1\Bigr ] \Biggr )=
\sum_{\{ l_{\mu}({\vec r},\tau)=-\infty\} }^{+\infty} e^{i\sum_{{\vec
r},\tau}l_{\mu}({\vec r},\tau)\epsilon_{\mu
\nu \rho}\Delta_{\nu}\theta_{\rho}({\vec r},\tau)}
$$
$$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~=\sum_{\{ l({\vec
r},\tau)=-\infty\} }^{+\infty} e^{-i\sum_{{\vec
r},\tau}\theta_{\rho}({\vec r},\tau)\epsilon_{\mu
\nu \rho}\Delta_{-\nu}l_{\mu}({\vec r},\tau)},\eqno(17)
$$
\smallskip
\noindent and
\smallskip
$$
\prod_{\vec r} \Biggl ( \delta \Bigl [
e^{i\sum_{\tau}\theta_{\tau}({\vec r},\tau)}-1\Bigr ]\Biggr )=
\sum_{\{ n_{\tau}({\vec r})=-\infty \} }^{+\infty} e^{i\sum_{{\vec
r},\tau}n_{\tau}({\vec r})\theta_{\tau}({\vec r},\tau)},\eqno(18)
$$
\smallskip
\noindent
immediately give the dual variables. In the case of the local Bianchi
constraints, Eq 17, the dual is the integer valued bond variable
$l_{\mu}({\vec r},\tau)$, while in the global case, Eq 18, the dual
variable is the integer valued field $n_{\tau}({\vec r})$. Note that
whereas $l_{\mu}({\vec r},\tau)$ is a {\it vector} bond variable which
depends on both coordinates ${\vec r}$ and $\tau$ and which has
components in the $x$, $y$, and $\tau$ directions, the variable
$n_{\tau}({\vec r})$ is global and only points in the time
direction. It depends only on the spatial coordinates and gives the
value of the current flowing in the time direction from $\tau=0$ to
$\tau=L_{\tau}$ where $L_{\tau}$ is the number of steps in the
imaginary time direction.\cite{batrouni3} Also note that the form in
which the dual variable $l_{\mu}({\vec r},\tau)$ appears is always
$\epsilon_{\mu \nu \rho}\Delta_{-\nu}l_{\mu}({\vec r},\tau)$. So we
define the local electric current
\smallskip
$$
j_{\rho}({\vec r},\tau) = -\epsilon_{\mu \nu
\rho}\Delta_{-\nu}l_{\mu}({\vec r},\tau).\eqno(19)
$$
\smallskip
\noindent Due to the totally antisymmetric tensor $\epsilon$, it is
clear that the local integer current $j_{\rho}({\vec r},\tau)$ is
conserved.
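Because the totally antisymmetric tensor contracts two commuting backward differences, the lattice divergence of $j_\rho$ vanishes identically configuration by configuration. As an illustration only (not part of the original paper), a small NumPy check on a periodic $4^3$ lattice, with all names our own:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4                                   # periodic 4x4x4 lattice (illustrative)

# totally antisymmetric tensor epsilon_{mu nu rho}
eps = np.zeros((3, 3, 3), dtype=int)
for m, n, r in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[m, n, r], eps[n, m, r] = 1, -1

# arbitrary integer bond field l_mu(r), one component per lattice direction
l = rng.integers(-3, 4, size=(3, L, L, L))

def backward_diff(f, axis):
    """Delta_{-nu} f(r) = f(r) - f(r - nu_hat), periodic boundaries."""
    return f - np.roll(f, 1, axis=axis)

# j_rho = -epsilon_{mu nu rho} Delta_{-nu} l_mu, Eq. 19
j = np.zeros_like(l)
for rho in range(3):
    for mu in range(3):
        for nu in range(3):
            if eps[mu, nu, rho]:
                j[rho] -= eps[mu, nu, rho] * backward_diff(l[mu], axis=nu)

# lattice divergence sum_rho Delta_{-rho} j_rho vanishes identically
div = sum(backward_diff(j[rho], axis=rho) for rho in range(3))
assert np.all(div == 0)
```

The divergence vanishes exactly (in integer arithmetic), not just on average, which is what makes the current-loop picture below consistent.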
Substituting Eqs 17, 18 and 19 in Eq 16 we can integrate over the
original variables, $\theta_{\mu}({\vec r},\tau)$ and $\alpha({\vec
r},\tau)$. The details of the integration will be given
elsewhere.\cite{batrouni2} This leaves the partition function
expressed only in terms of the dual variables $n_{\tau}, j_{\mu}$ and
$s_k$. The new variable $s_k$ is an integer-valued, positive semidefinite
field and is nothing but the dual of the amplitude field $\alpha$. The
partition function thus becomes:
\smallskip
$$
Z = \sum_{\{ n_{\tau},j_{\mu},s_{k}\} } (t\delta)^{\sum_{{\vec
r},\tau}2s_{k}({\vec r}, \tau)} \Biggl (\prod_{{\vec r},\tau}
{{[n_{\tau}({\vec r})+j_{\tau}({\vec r},\tau)]!}\over {[n_{\tau}({\vec
r})+j_{\tau}({\vec r},\tau)-M({\vec r},\tau)]!}}\Biggr)~~~~~~~~~~~
$$
$$~~~~~~~~~~~~~~~~~~~~~~~~~\Biggl (\prod_{{\vec r},\tau} [1+\delta \mu({\vec
r})]^{n_{\tau}({\vec r})+j_{\tau}({\vec r},\tau)-M({\vec r},\tau)}
\Biggr )$$
$$~~~~~~~~~~~~~~~~~~~~~~~~~\Biggl ( \prod_{{\vec r},\tau,k}{1\over {[s_k({\vec
r},\tau)]! [s_k({\vec r},\tau)+j_k({\vec r},\tau)]!}}\Biggr ),\eqno(20)
$$
\smallskip
\noindent where
\smallskip
$$ M({\vec r},\tau) \equiv \sum_{k=1,2} \bigl (s_k({\vec
r},\tau)+s_k({\vec r}-{\hat k},\tau)+j_k({\vec r}-{\hat
k},\tau)\bigr ).\eqno(21) $$
\smallskip
\noindent In this expression, which will be greatly simplified below,
we allowed for the possibility of disorder in the chemical
potential. Also, even though originally the values of $n_{\tau}$ ran
from $-\infty$ to $+\infty$ (Eq. 18), the $\theta_{\tau}$ integrals
impose the condition that $n_{\tau}$ is positive semidefinite. In fact
we can easily prove\cite{batrouni2} that the total current traversing
a bond in the time direction is nothing but the number of bosons
traversing that bond. In addition, it is understood that all the
arguments of factorials are positive semidefinite. This imposes
several severe constraints on the allowed configurations which we will
exploit to simplify the partition function.
For simplicity, although this is not necessary, we will consider the
no disorder case, {\it i.e.} $\mu(r)\rightarrow \mu$. In addition, we
will consider the very important case of hardcore bosons where
$n_{\tau}({\vec r})+j_{\tau}({\vec r},\tau)$ can take only the values
$0$ or $1$. Combining this with the previously mentioned constraints
on the arguments of the factorials allows us to solve for the allowed
electric loop configurations.\cite{batrouni2} In this case the
partition function simplifies drastically. For example, for the
one-dimensional model it becomes
\smallskip
$$
Z_{Q} = \sum_{\{ n_{\tau}=0,1\} }\sum_{\{ j_{\mu}=0,\pm 1\} } e^{\beta
\mu\sum_{x}n_{\tau}(x)} \bigl (t\delta\bigr
)^{\sum_{x,\tau}|j_x(x,\tau)|},\eqno(22)
$$
\smallskip
\noindent for the grand canonical ensemble and
\smallskip
$$ Z = \sum_{\{ j_{\mu}=0,\pm 1\} } \bigl (t\delta\bigr
)^{\sum_{x,\tau}|j_x(x,\tau)|},\eqno(23) $$
\smallskip
\noindent for the canonical case. These are the duals of the hardcore
boson Hubbard model. The interpretation is very simple: the dual is a
model of conserved integer current loops that take the values $0$ or
$1$ in the time direction and $0,\pm1$ in the space directions. The
partition function is a sum over all deformations with each spatial
hop (in this case $x$) costing $t\delta$. This is a very simple and
appealing form and is very amenable to numerical simulations.
The algorithm is now very simple. For example, for the canonical case,
we start with all local currents zero and with the desired number of
nonzero global time currents, $n_{\tau}$, corresponding to the number
of bosons in the system. Then we visit each plaquette and randomly
choose to add to it a positive or negative elementary current loop
(plaquette). The attempt is rejected if (i) it introduces negative
time currents, or (ii) it introduces currents larger than $1$. If this
test is passed, we accept the current loop in accordance with
detailed balance: if the ratio of weights, $(t\delta)^2$ when two new
spatial currents are created, is greater than or equal to a
uniformly distributed random number, we accept the change.
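The canonical update just described fits in a few lines. The sketch below is our own illustration, not code from the paper; the array layout, variable names, and the specific plaquette convention are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters: 8 sites and 4 bosons, as in the paper's test case
Lx, Ltau, Nb = 8, 32, 4
t, beta = 1.0, 0.5
delta = beta / Ltau
w = t * delta                              # weight per spatial hop, Eq. 23

n_tau = np.zeros(Lx, dtype=int)            # global time currents (boson number)
n_tau[:Nb] = 1
j_x = np.zeros((Lx, Ltau), dtype=int)      # spatial link currents, 0 or +-1
j_tau = np.zeros((Lx, Ltau), dtype=int)    # local time-link currents

def sweep():
    """One sweep of random elementary current-loop (plaquette) updates."""
    for _ in range(Lx * Ltau):
        x, tau = rng.integers(Lx), rng.integers(Ltau)
        s = 1 if rng.random() < 0.5 else -1          # loop orientation
        xp, tp = (x + 1) % Lx, (tau + 1) % Ltau
        # proposed currents on the four links bounding the plaquette
        jx0, jx1 = j_x[x, tau] + s, j_x[x, tp] - s
        jt0, jt1 = j_tau[x, tau] - s, j_tau[xp, tau] + s
        # hardcore constraints: |j_x| <= 1 and time occupation in {0, 1}
        if abs(jx0) > 1 or abs(jx1) > 1:
            continue
        if not (0 <= n_tau[x] + jt0 <= 1 and 0 <= n_tau[xp] + jt1 <= 1):
            continue
        # detailed balance: accept with probability min(1, w**dhops)
        dhops = abs(jx0) + abs(jx1) - abs(j_x[x, tau]) - abs(j_x[x, tp])
        if rng.random() <= w ** dhops:
            j_x[x, tau], j_x[x, tp] = jx0, jx1
            j_tau[x, tau], j_tau[xp, tau] = jt0, jt1

for _ in range(50):
    sweep()

# Eq. 24 estimator on the final configuration (a real run would average)
E_est = -np.abs(j_x).sum() / beta
```

By construction every accepted plaquette move conserves the current at each site, so the configuration always remains a set of closed loops with exactly the initial boson number on every time slice.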
\section{Physical Quantities}
Now that we have the partition function and the algorithm we need
expressions for the physical quantities we want to measure. The
expression for the energy can be easily obtained from $\langle E \rangle=-
{{\partial}\over {\partial \beta}}\ln Z$. This gives
\smallskip
$$
\langle E \rangle = -{1\over \beta} \langle \sum_{x,\tau}
|j_x(x,\tau)| \rangle.\eqno(24)
$$
\smallskip
\noindent The numerical values for $\langle E\rangle$ obtained with
the above algorithm and Eq 24 are shown in Fig. 1 and compared with
the results of exact diagonalization. We see that the agreement
between the two results is excellent.
The expression for the equal time density-density correlation function
is
\smallskip
$$
\langle n(x_1)n(x_2) \rangle = \langle \Bigl ( n_{\tau}(x_1) +
j_{\tau}(x_1,\tau) \Bigr )\Bigl ( n_{\tau}(x_2) + j_{\tau}(x_2,\tau)
\Bigr )\rangle,\eqno(25)
$$
\smallskip
\noindent since the number of particles on site $x$ at time $\tau$ is given
by $( n_{\tau}(x) + j_{\tau}(x,\tau))$. We compare the numerical
values of this correlation function with the exact values in
Fig. 2. Again we see that agreement is excellent.
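In the loop variables this estimator is just a product of time-link occupations. A single-configuration sketch (our own names; we assume arrays `n_tau` of shape `(Lx,)` and `j_tau` of shape `(Lx, Ltau)` holding the global and local time currents):

```python
import numpy as np

def density_density(n_tau, j_tau, x1, x2):
    """Equal-time estimator of Eq. 25 for one configuration, averaged over
    the imaginary-time slices.  n_tau has shape (Lx,), j_tau (Lx, Ltau)."""
    occ = n_tau[:, None] + j_tau    # n(x, tau) = n_tau(x) + j_tau(x, tau)
    return np.mean(occ[x1] * occ[x2])

# example: two static bosons on sites 0 and 2 of a 4-site lattice
n = np.array([1, 0, 1, 0])
j = np.zeros((4, 6), dtype=int)
```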
The superfluid density is, in general, related to the winding number,
$W$, of current configurations by\cite{batrouni1}
\smallskip
$$
\rho_s = {{\langle W^2 \rangle }\over {2t\beta}}.\eqno(26)
$$
\smallskip
\noindent We have so far only discussed local moves in our Monte Carlo
algorithm, and such moves can never change the winding number of a
configuration. Therefore, the system is stuck in the winding number
sector of the initial configuration. If the initial configuration has
$W=0$, this will give $\langle W^2\rangle =0$ and therefore zero
superfluid density. This obstacle can be overcome with a trick. Define
the {\it total} $x$-current at a given time $\tau$ by
$J_x(\tau)=\sum_x j_x(x,\tau)$ and calculate the Fourier transform of
the current-current correlation function
\smallskip
$$
{\tilde{\cal J}}(\omega) = \sum_{\tau, \tau_0} e^{i{{2\pi}\over
L_{\tau}}\omega \tau} \langle J_x(\tau_0)J_x(\tau_0 +
\tau)\rangle.\eqno(27)
$$
\smallskip
\noindent We can show\cite{batrouni1} that ${\tilde{\cal J}}(\omega
\rightarrow 0)= W^2$ which allows us to calculate the superfluid
density $\rho_s$. In Fig. 3 we show $\rho_s$ as a function of
$(\rho_c-\rho)$, where $\rho_c=1$ is the critical density at which the
hardcore boson system becomes an incompressible Mott insulator and
$\rho$ is the boson density. From scaling arguments\cite{fisher} we
expect $\rho_s \sim (1-\rho)^{\nu z}$ with $\nu z=1$.\cite{footnote2}
Our numerically obtained value on a small system is $\nu z=0.96$, in
excellent agreement with the theoretical value.
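The estimator of Eqs. 26 and 27 can be transcribed directly. The sketch below (our own naming, single-configuration version for brevity) evaluates ${\tilde{\cal J}}$ at an integer frequency index and crudely uses the smallest nonzero frequency as the $\omega\rightarrow 0$ estimate:

```python
import numpy as np

def J_tilde(j_x, omega):
    """Fourier transform of the total-current autocorrelation, Eq. 27.
    j_x has shape (Lx, Ltau); omega is an integer frequency index."""
    Ltau = j_x.shape[1]
    J = j_x.sum(axis=0)                     # J_x(tau) = sum_x j_x(x, tau)
    # sum over tau_0 with periodic boundaries in imaginary time
    corr = np.array([np.sum(J * np.roll(J, -dt)) for dt in range(Ltau)])
    phase = np.exp(1j * 2 * np.pi * omega * np.arange(Ltau) / Ltau)
    return np.sum(phase * corr).real

def superfluid_density(j_x, t, beta):
    """rho_s = <W^2>/(2 t beta), Eq. 26, with W^2 read off from the
    omega -> 0 limit of J_tilde; here crudely the smallest nonzero omega."""
    return J_tilde(j_x, omega=1) / (2 * t * beta)
```

In practice one would average ${\tilde{\cal J}}(\omega)$ over many configurations and extrapolate several small nonzero frequencies to $\omega=0$; the single-frequency shortcut here is only meant to show the bookkeeping.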
These three quantities can be easily measured by the existing QMC
algorithms mentioned earlier. What is qualitatively new here is that
we can measure the correlation function of the order parameter very
easily. Let ${\cal N}(r)$ be the number of jumps of length $r$ in a
spatial direction (in this case $x$) in a given loop configuration. We
can show\cite{batrouni2}
\smallskip
$$
\langle a(x_1) a^{\dagger}(x_2) \rangle = {1\over {\beta
\delta^{|x_1-x_2|-1}}} \langle \sum_{l=0}^{{{L_x}\over 2}-|x_1-x_2|}
(l+1){\cal N}(|x_1-x_2|+l)\rangle,\eqno(28)
$$
\smallskip
\noindent where $|x_1-x_2|\ge 1$. This quantity is shown in Fig. 4 for
$\beta=0.5$ and $4$ for $L_x=8$. We see that agreement with exact
values is excellent for $\beta=0.5$ and gets worse for $\beta=4$ as
$|x_1-x_2|$ increases. The reason is that the exact diagonalization
results sum over all winding number sectors whereas our simulation
stays in the $W=0$ sector. At high temperature ($\beta=0.5$) the
correlation length is short and the boundary conditions are less
important so we get good agreement. At lower temperatures the
correlation length is longer and consequently the boundary conditions
become very important for small systems. For the more interesting
larger system sizes the boundary conditions will become less
important.
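Equation 28 reduces the measurement to simple bookkeeping of jump lengths. A sketch, with `N_of_r` an assumed histogram (index = jump length, our own convention, not the paper's notation):

```python
def order_param_corr(N_of_r, x1, x2, Lx, beta, delta):
    """Eq. 28 estimator of <a(x1) a^dagger(x2)> for one configuration.
    N_of_r[r] counts the jumps of spatial length r in the loop
    configuration, for r = 0 .. Lx/2 (indexing is our own convention)."""
    d = abs(x1 - x2)
    if d < 1:
        raise ValueError("Eq. 28 requires |x1 - x2| >= 1")
    total = sum((l + 1) * N_of_r[d + l] for l in range(Lx // 2 - d + 1))
    return total / (beta * delta ** (d - 1))
```

In an actual run the right-hand side would be accumulated over configurations, so that the bracket in Eq. 28 becomes a Monte Carlo average.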
\section{Conclusions}
We have outlined how to perform the duality transformation exactly for
the soft and hard core bosonic Hubbard models. In the hard core case
the dual model is particularly simple. The dual partition function was
used to construct a loop Quantum Monte Carlo algorithm and we showed
that the numerical results agree very well with those of exact
diagonalization for a system with $8$ sites and $4$ bosons.
In particular we showed that with our algorithm we can calculate
easily the order parameter correlation function, $\langle a(x_1)
a^{\dagger}(x_2)\rangle$ which is very difficult to do with most other
QMC algorithms. This quantity is very important for the
characterization of the quantum phase transitions exhibited by the system.
This representation of the Hubbard model has many common features with
the World Line representation for which very efficient cluster
algorithms have been constructed.\cite{cluster,cont-cluster} It,
therefore, seems possible to improve the efficiency of this algorithm
in the same way. A cluster algorithm would improve the convergence
properties and in addition would lift the restriction to zero winding
number sector.\cite{cluster} This would greatly improve the
calculation of the order parameter correlation function. This is
currently under investigation.
{\it Note added.} After the completion of this work we became aware of
two other algorithms which allow one to calculate the correlation function
of the order parameter. These are the ``worm'' algorithm\cite{worm}
and the cluster algorithm with a new interpretation of the
clusters.\cite{brower}
\section{Acknowledgements}
We thank Richard Scalettar for many very helpful discussions and for
the exact diagonalization results used in this article.
\vfill\eject
\centerline{\bf FIGURE CAPTIONS}
\begin{itemize}
\item[FIG. 1] The average energy per site as a function of
$\beta$. $L_x, N_b$ are the number of lattice sites and bosons
respectively.
\item[FIG. 2] The density-density correlation function as a function
of distance. Open symbols are exact diagonalization values,
full symbols are simulation results. The lines are just to
guide the eyes.
\item[FIG. 3] The superfluid density, $\rho_s$, as a function of
$(1-\rho)$. The theoretical value of $\nu z$ is 1.
\item[FIG. 4] The correlation function of the order parameter, $\langle a(x_1)
	a^{\dagger}(x_2)\rangle$, as a function of distance for two values of
	$\beta$.
\end{itemize}
\vfill\eject
\section{Motivations}
In the low-energy regime, M-theory is
$D=11$, $N=1$ supergravity. In the matrix model
the fundamental degrees of freedom of
M-theory are 0-branes (that is, Dirichlet particles).
For this model to be a correct
description of M-theory, it must then reproduce supergravity in
the long-distance regime. In particular,
0-brane scattering amplitudes in $D=10$ must reproduce those of
compactified (from $D=11$ down to 10) supergravity,
for which the gravitons carry momentum in the compactified direction.
Such a correspondence between amplitudes in these two
different-looking theories plays an important role because
it can be computed explicitly.
It has now been successfully checked for the two- and three-graviton
scattering amplitudes.
\section{Two-graviton scattering}
The scattering of two gravitons carrying momentum in a
compactified direction has been studied several times in the
literature~\cite{2grav}.
The simplest way to compute it is by means of the effective
lagrangian~\cite{BBPT}
\begin{equation}
L = - p_- \dot x^- = - p_- \frac{\sqrt{1 - h_{--} v^2} -1}{h_{--}}\, ,
\end{equation}
where $h_{--} = f(r)/2 \pi R_{11}$ and
$f(r) = 2 \kappa^2 M/7 \Omega\, r^7$ for the space-time of the shock
wave generated by the graviton moving with momentum
$p_- = N_2/R_{11}$. Actually, this is a special case of shock wave
in which the 11-th dimension has been smeared. By expanding in the relative
velocity $v$, we find
\begin{equation}
L = - p_- \left\{ \frac{v^2}{2} + a_1 \: \frac{v^4}{r^7} +
a_2 \:
\frac{v^6}{r^{14}} \cdots \right\}\, ,
\end{equation}
where the exact values of the coefficients $a_1$ and $a_2$ are known.
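The $v^{2n}/r^{7(n-1)}$ structure of this expansion can be checked symbolically. The following SymPy fragment is our own illustration: it expands the velocity factor $(\sqrt{1-h v^2}-1)/h$ of Eq. (1), and since $h = f(r)/2\pi R_{11} \propto 1/r^7$, the $v^4$ term scales as $v^4/r^7$ and the $v^6$ term as $v^6/r^{14}$, matching Eq. (2) up to the overall $-p_-$ factor (we do not fix the paper's sign conventions here):

```python
import sympy as sp

v, h = sp.symbols('v h', positive=True)

# velocity factor of Eq. (1): (sqrt(1 - h v^2) - 1)/h, with h ~ 1/r^7
expr = (sp.sqrt(1 - h * v**2) - 1) / h

series = sp.series(expr, v, 0, 8).removeO().expand()

# term by term: -(v^2/2 + h v^4/8 + h^2 v^6/16), so with h ~ 1/r^7 the
# expansion produces the v^4/r^7 and v^6/r^14 structure of Eq. (2)
assert series.coeff(v, 2) == -sp.Rational(1, 2)
assert sp.simplify(series.coeff(v, 4) + h / 8) == 0
assert sp.simplify(series.coeff(v, 6) + h**2 / 16) == 0
```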
The corresponding amplitude in matrix theory can be derived from the
gauge-fixed action, the bosonic part of which reads
\begin{eqnarray}
S &=& \int \mbox{d} t \: \:\mathop{\mbox{Tr}}\,\bigg(\dot a_0^2 + \dot x_i^2 +
4\,i\,\dot R_k\,[a_0, x_k]
-[R_k, a_0]^2 - [R_k, x_j]^2\nonumber\\
&&+2\,i\,\dot x_k\,[a_0, x_k] + 2\,[R_k, a_0][a_0, x_k]
-2\,[R_k, x_j][x_k, x_j] \nonumber\\
&&-[a_0,x_k]^2 - \frac{1}{2}[x_k, x_j]^2 \bigg), \label{action}
\end{eqnarray}
where $a_0$ and $x_k$ are hermitian matrices representing the fluctuations
and $R_k$ is the background. The fermionic and ghost terms must also be
included in addition to (\ref{action}) but are omitted here for simplicity.
The units are such that
\begin{equation}
g_{\mbox{\rm \scriptsize YM}}=\left( R_{11}/
\lambda_{\mbox{\rm \scriptsize P}}^2 \right) ^{3/2}=1 \, ,
\end{equation}
the quantities $R_{11}$, $\lambda_{\mbox{\rm \scriptsize P}}$
and $g_{\mbox{\rm \scriptsize YM}}$
being the
compactification radius, the Planck length and the Yang-Mills
coupling, respectively.
The relevant gauge group depends on the process
under study. It is the rank-one (only one independent velocity)
group $SU(2)$ in two-body scattering.
The corresponding computations at one- and two-loop level
in matrix theory yield
\begin{equation}
a_1 = \frac{15}{16} \: \frac{N_1 N_2}{R^3 M^9}
\quad \mbox{(one loop)~\cite{BBPT}}
\end{equation}
and
\begin{equation}
a_2 = \frac{225}{64} \: \frac{N_1^2 N_2}{R^5 M^{18}}
\quad \mbox{(two loops)~\cite{BC}} \, ,
\end{equation}
in agreement with what is found in supergravity.
\section{Three-graviton scattering}
The simplest way to obtain supergravity amplitudes is by means of string
theory. Since it is a tree-level amplitude, it is consistent with
conformal invariance in any
dimensionality, in particular in $D=11$. We consider the {\it bona fide}
superstring theory (where there is no tachyon) and the scattering amplitude
of three ($11$-dimensional) gravitons, and look at suitable {\it pinching}
limits,
where only intermediate massless states are coupled to the external
gravitons. Those states are themselves $11$-dimensional gravitons.
We then compactify the $10^{\rm th}$ space dimension giving mass
to the external gravitons, which will thus correspond to
$10$-dimensional $D0$-branes. Keeping zero momentum transfer in
the $10^{\rm th}$ dimension, the intermediate states remain massless
and correspond to the various massless fields of $10$-dimensional
supergravity.
By considering only the part of the complete amplitude that is
proportional to
\begin{equation}
\varepsilon_1 \cdot \varepsilon_1' \:
\varepsilon_2 \cdot \varepsilon_2' \: \varepsilon_3 \cdot \varepsilon_3' \, ,
\end{equation}
$\varepsilon$ being the external graviton polarization tensor,
we obtain the amplitude $A_6$ for six graviton vertices~\cite{FIR}:
\begin{eqnarray}
A_6 & = & \varepsilon_1 \cdot \varepsilon_1' \:
\varepsilon_2 \cdot \varepsilon_2' \: \varepsilon_3 \cdot \varepsilon_3' \:
\frac{\kappa^4 (\alpha')^3}{4 \pi^3} \int \mbox{d} ^2 x\: \mbox{d} ^2 y\: \mbox{d} z^2
|1-y|^{-2 + \alpha' p_2'\cdot p_2} \nonumber \\
&&\times \: |y|^{\alpha' p_3\cdot p_2'}
|1-x|^{\alpha' p_2\cdot p_1'} |x|^{\alpha' p_3\cdot p_1'}
|1-z|^{\alpha' p_3'\cdot p_2} \nonumber \\
&&\times \: |z|^{-2 + \alpha' p_3\cdot p_3'}
|z-x|^{\alpha' p_3'\cdot p_1'} |z-y|^{\alpha' p_3'\cdot p_2'}
|x-y|^{\alpha' p_2'\cdot p_1'} \nonumber \\
&&\times \: \left\{ \frac{p_3' \cdot p_1' \: p_2' \cdot p_1'}{(y-x)(z-x)} +
\frac{p_3 \cdot p_2' \: p_3' \cdot p_1'}{y(z-x)} -
\frac{p_3' \cdot p_2' \: p_3 \cdot p_1'}{x(z-y)} \right. \nonumber \\
&& \left. + \frac{p_2' \cdot p_3' \: p_2' \cdot p_1'}{(y-x)(z-y)} +
\frac{p_3' \cdot p_2 \: p_2' \cdot p_1'}{(z-1)(y-z)} \right\}
\wedge \Biggl\{ c.c.
\Biggr\}
\end{eqnarray}
where $p_i = (E_i, {\bf p}_i-{\bf q}_i /2, M_i)$,
$p_i' = (-E_i', - {\bf p}_i-{\bf q}_i /2, -M_i)$,
$ p_i^2=0$, $E_i \simeq M_i + ({\bf p}_i-{\bf q}_i /2)^2/2M_i$
and $M_i=N_i/R_{11}$. Moreover, we have that
$\sum_i {\bf q}_i = 0$ and $\sum_i {\bf p}_i \cdot {\bf q}_i = 0$.
In the long-distance regime in which we are interested, we find that
$A_6 = A_\vee + A_Y$, where
\begin{eqnarray}
A_\vee & = & 2 \:
\kappa^4 \: \varepsilon_1 \cdot \varepsilon_1' \:
\varepsilon_2 \cdot \varepsilon_2' \: \varepsilon_3 \cdot \varepsilon_3'
\; \frac{1}{{\bf q}_1^2\: {\bf q}_2^2} \nonumber \\
&& \times \left\{
({\bf p}_3 - {\bf p}_2)^2 \: ({\bf p}_3 - {\bf p}_1)^2 \left[
({\bf p}_2 - {\bf p}_1)^2 - ({\bf p}_3 - {\bf p}_1)^2 -
({\bf p}_3 - {\bf p}_2)^2 \right]
\right. \nonumber \\
&& -\: ({\bf p}_3 - {\bf p}_2)^2 \: ({\bf p}_3 - {\bf p}_1)^2
\left[
({\bf p}_3 - {\bf p}_2)^2 \: \frac{ {\bf q}_2 \cdot ({\bf p}_3 - {\bf p}_1) }
{ {\bf q}_1 \cdot ({\bf p}_3 - {\bf p}_1)} \right. \nonumber \\
&& \left. \left . + \:
({\bf p}_3 - {\bf p}_1)^2 \: \frac{ {\bf q}_1 \cdot ({\bf p}_3 - {\bf p}_2) }
{ {\bf q}_2 \cdot ({\bf p}_3 - {\bf p}_2)} \right]
\right\} \: + \: \mbox{symmetric}
\end{eqnarray}
and
\begin{eqnarray}
A_Y & = & - 2 \:
\kappa^4 \: \varepsilon_1 \cdot \varepsilon_1' \:
\varepsilon_2 \cdot \varepsilon_2' \: \varepsilon_3 \cdot \varepsilon_3'
\; \frac{1}{{\bf q}_1^2\: {\bf q}_2^2\: {\bf q}_3^2} \nonumber \\
& & \times \: \Biggl\{
({\bf p}_2 - {\bf p}_3)^2
\Bigl[
{\bf q}_3 \cdot ({\bf p}_3 -{\bf p}_1) +
{\bf q}_2 \cdot ({\bf p}_1 -{\bf p}_2)
\Bigr] \Biggr. \nonumber \\
&& \quad + \: ({\bf p}_3 - {\bf p}_1)^2
\Bigl[
{\bf q}_3 \cdot ({\bf p}_2 - {\bf p}_3) +
{\bf q}_1 \cdot ({\bf p}_1 - {\bf p}_2)
\Bigr] \nonumber \\
&& \Biggl. \quad +\: ({\bf p}_1 - {\bf p}_2)^2
\Bigl[
{\bf q}_2 \cdot ({\bf p}_2 -{\bf p}_3) +
{\bf q}_1 \cdot ({\bf p}_3 -{\bf p}_1)
\Bigr]
\Biggr\}^2
\end{eqnarray}
Notice that $A_\vee = 0$ and $A_Y = 0$
whenever two of the three momenta are equal or
the three momenta are parallel. $A_Y$ is subleading in the relevant regime and
we can neglect it.
In order to compare $A_\vee$
with matrix theory we consider the {\it eikonal expression},
obtained by integrating over the time $t$ along the world-line
trajectories the Fourier transform
\begin{equation}
a_\vee = \int \frac{\mbox{d}^9{\bf q}_1 \mbox{d}^9{\bf q}_2}{(2\pi)^{18}} \: A_\vee
\: \exp \Bigl[ i \: {\bf q}_1 \cdot ({\bf r}_1 - {\bf r}_3)
+ i \: {\bf q}_2 \cdot ({\bf r}_2 - {\bf r}_3)\Bigr] \, ,
\end{equation}
where ${\bf r}_{i} = (v_i {\bf\hat n}_1 t +{\bf b}_{i})$,
${\bf b}_i\cdot{\bf\hat n}_1=0$ and
$B\equiv |{\bf b}_1 -{\bf b}_2| \gg b \equiv |{\bf b}_2 -{\bf b}_3|$.
We write the momenta in terms of the
velocities as ${\bf p}_i =M_i {\bf v}_i$ while
bearing in mind that $M_i\sim N_i$.
We normalize the amplitude by dividing the result
by the product of the $M_i$ and find~\cite{FFI}
\begin{equation}
\tilde{a}_\vee \sim \int \mbox{d} t\;
\frac{N_1 N_2 N_3 v_{23}^2 v_{13}^2 v_{12}^2}{(v_{23}^2t^2 + B^2)^{7/2}
(v_{12}^2t^2 + b^2)^{7/2}} \sim
\frac{N_1 N_2 N_3 |v_{23}| v_{13}^2 v_{12}^2}{B^7 b^6}
\end{equation}
that is to be compared to matrix theory.
Some controversy arose concerning the term $\tilde{a}_\vee$.
It was argued that it could not be reproduced
in matrix theory~\cite{DR}. However, the argument was flawed, as first
shown in~\cite{FFI}.
The matrix theory computation is in this case
based on the rank two group $SU(3)$.
We choose the background
\begin{equation}
R_1 =\pmatrix{v_1 t & 0 & 0 \cr
0 & v_2 t & 0 \cr
0 & 0 & v_3 t \cr} \qquad\hbox{and}\qquad
R_k =\pmatrix{b_k^1 & 0 & 0 \cr
0 & b_k^2 & 0 \cr
0 & 0 & b_k^3 \cr}\quad k>1.
\end{equation}
We can factor out the motion
of the center of mass by imposing $v_1 + v_2 + v_3 = 0$ and
$b_k^1 + b_k^2 + b_k^3 = 0$.
We use a Cartan basis for $SU(3)$, where $H^1$ and $H^2$ denote the
generators of the Cartan sub-algebra and $E_\alpha$ ($\alpha=\pm\alpha^1,
\pm\alpha^2,\pm\alpha^3$) the roots. We also define the space vectors
\begin{equation}
{\bf R}^\alpha = \sum_{a=1,2}\alpha^a\mathop{\mbox{Tr}}\, \Big(H^a {\bf R}\Big) \, .
\label{lim}
\end{equation}
With the standard choice of $H^a$ and $\alpha$, this definition singles out
the relative velocities and impact parameters, e.g.
$ R_1^{\alpha^1} = (v_2 - v_3)t\equiv v^{\alpha^1}t$ plus cyclic
and, for $k>1$, $ R_k^{\alpha^1} = b_k^2 - b_k^3\equiv b_k^{\alpha^1}$
plus cyclic.
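Spelled out, the cyclic permutations read as follows (the overall signs
depend on the orientation chosen for the roots and are immaterial in
what follows):
\begin{eqnarray*}
R_1^{\alpha^1} = (v_2 - v_3)\,t\,, &\qquad& R_k^{\alpha^1} = b_k^2 - b_k^3\,, \\
R_1^{\alpha^2} = (v_3 - v_1)\,t\,, &\qquad& R_k^{\alpha^2} = b_k^3 - b_k^1\,, \\
R_1^{\alpha^3} = (v_1 - v_2)\,t\,, &\qquad& R_k^{\alpha^3} = b_k^1 - b_k^2\,.
\end{eqnarray*}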
According to the previous section we choose the
relative distance of the first particle with the other two to be much larger
than the relative distance of particle two and three, in other words, we set
\begin{equation}
|{\bf b}^{\alpha^2}|\approx|{\bf b}^{\alpha^3}|\approx B \gg
|{\bf b}^{\alpha^1}|\approx b \quad \mbox{and} \quad
B,\, b \gg v \, . \label{regime}
\end{equation}
The propagators and vertices can be easily worked out from the gauge fixed
action (\ref{action}), with two points worth stressing:
first, the quadratic part (yielding the propagators) is diagonal in root
space; second, contrary to the $SU(2)$ case, there are now vertices with
three massive particles (corresponding to the three different roots). The
second point is particularly crucial because it is from a diagram
containing those vertices that we find the supergravity term.
We find twenty real massless bosons and
thirty massive complex bosons.
We need only consider some of the latter
to construct the diagram. Writing $x_k = x_k^a H^a + x_k^\alpha E_\alpha$, with
$x_k^{-\alpha} = x_k^{\alpha *}$, we define the propagators as
\begin{equation}
\langle x_k^{\alpha *}(t_1)x_k^{\alpha}(t_2) \rangle =
\Delta\Big( t_1, t_2 \: \Big|\: (b^{\alpha})^2,
v^{\alpha}\Big) \, .
\end{equation}
As for $x_1$ (the fluctuation
associated with the background $R_1$), it
mixes with the field $a_0$ (the fluctuation of the gauge potential). Writing
$x_1^\alpha=z^\alpha+w^\alpha$ and $a_0^\alpha=i(z^\alpha-w^\alpha)$ yields
\begin{eqnarray}
\langle z^{\alpha *}(t_1)z^{\alpha}(t_2) \rangle &=&
\Delta\Big(t_1, t_2 \: \Big| \: (b^{\alpha})^2+2v^{\alpha},
v^{\alpha}\Big) \nonumber \\
\langle w^{\alpha *}(t_1)w^{\alpha}(t_2) \rangle &=&
\Delta\Big( t_1, t_2 \: \Big| \: (b^{\alpha})^2-2v^{\alpha},
v^{\alpha}\Big) \, ,
\end{eqnarray}
where the three propagators $\Delta_i$ entering the diagram considered
below take the proper-time form
\begin{equation}
\Delta_i = \int \mbox{d} s \: e^{-\beta_i^2 s}
\sqrt{\frac{v^{\alpha^i}}{2\pi\sinh 2\, v^{\alpha^i} s}}\exp\left\{
{-h(v^{\alpha^i}, s)\:t^2
-k(v^{\alpha^i}, s)\:T^2}\right\}
\end{equation}
where $t=(t_1-t_2)/2$, $T=(t_1+t_2)/2$,
$\beta_1^2 = b^2$, $\beta_2^2 = B^2 + 2 v_{13}$, $\beta_3^2=B^2$ and
\begin{eqnarray}
h(v^{\alpha^i}, s)&=&\frac{v^{\alpha^i}}{\sinh 2\,v^{\alpha^i}s}
\Bigl( \cosh 2 \,v^{\alpha^i}s + 1 \Bigr)\nonumber\\
k(v^{\alpha^i}, s)&=&\frac{v^{\alpha^i}}{\sinh 2\,v^{\alpha^i}s}
\Bigl( \cosh2\,v^{\alpha^i} s - 1 \Bigr) \, .
\end{eqnarray}
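As a consistency check (an expansion not spelled out above), in the limit
$v^{\alpha^i} s \to 0$ these kernels reduce to
\[
h(v^{\alpha^i}, s) \to \frac{1}{s} \qquad \mbox{and} \qquad
k(v^{\alpha^i}, s) \to (v^{\alpha^i})^2\, s \, ,
\]
so that the dependence on $T$ drops out at $v^{\alpha^i}=0$ and $\Delta_i$
reduces to the familiar proper-time representation of a static propagator
of mass $\beta_i$.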
The vertex we need is contained in the term of the effective
action~(\ref{action}) of type
\begin{equation}
-2\: \mathop{\mbox{Tr}}\, \Big( [R_1, x_j][x_1, x_j] \Big) \, ,
\end{equation}
which gives a vertex with two massive bosons and one massless boson, and
another with three massive bosons. Focusing on the latter and choosing a
particular combination of the roots we obtain a term of the type
\begin{equation}
v^{\alpha^1}t\; z^{\alpha^2}x_j^{\alpha^1}x_j^{\alpha^3}
\equiv v_{23}\:t\: z^{13}x_j^{23}x_j^{12} \, , \label{verti}
\end{equation}
and a similar term with $z^{\alpha}$ replaced by $w^{\alpha}$.
The diagrams we have considered
are two-loop diagrams in the bosonic sector---there are
various similar diagrams which can give rise to the same
behavior---and we have analyzed in detail one of them, the {\it setting-sun}
diagram with all massive propagators,
which only arises in the three-body problem.
It can be written as
\begin{equation}
\tilde a_\ominus = (v^{\alpha^1})^2
\int \mbox{d} t \:\mbox{d} T \: \left( T^2 - t^2 \right) \Delta_1
\Delta_2\Delta_3 \, . \label{a}
\end{equation}
The appropriate powers of $N_i$ can be
deduced---following~\cite{BBPT}---from the double-line notation in which the
setting-sun diagram is of order $N^3$; this factor must be $N_1 N_2 N_3$
for the diagram to involve all three particles.
Expanding (\ref{a}) in the regime (\ref{regime}) yields
\begin{equation}
\tilde{a}_\ominus \sim
\frac{N_1 N_2 N_3 |v_{23}| v_{12}^2 v_{13}^2}{B^7b^6}
\end{equation}
which reproduces the behavior of the
supergravity result, that is,
$\tilde a_\ominus \sim \tilde a_\vee$.
The same result can be obtained in the framework of an effective action in
which the degrees of freedom related to the ``heavy'' modes (those
exchanged at distance $B$)
are integrated out and the action is discussed in
terms of the ``light'' modes (exchanged at distance $b$).
Claims about a vanishing
result in such an effective-action approach~\cite{WT} are discussed and
shown to be irrelevant for the three-graviton problem in~\cite{FFI2}.
The preliminary result of~\cite{FFI}
concerning a single diagram has been confirmed by
the complete computation performed in~\cite{OY}, which found
perfect numerical agreement between the three-graviton scattering
amplitudes in supergravity and in matrix theory.
\section{Introduction}
The existence problem for periodic orbits of vector fields or diffeomorphisms
occupies one of the central places in the theory of dynamical systems and
adjacent areas such as mechanics and symplectic geometry.
This question is often well motivated for a particular system,
but in many cases the answer gives very little information about
the dynamics of the system in general. However, the methods developed
to solve the existence problem may have an impact extending well beyond
the scope of the original problem.
The search for dynamical systems without periodic orbits has been inspired
by a few questions. One of them is to determine the
limits of the existence theorems. For instance,
Seifert's theorem on periodic orbits of vector fields on $S^3$
led to the famous Seifert conjecture recently disproved by K. Kuperberg,
\cite{kuk}. However, in addition to this, the systems found
as a result of this search
sometimes exhibit a new type of dynamics and extend our understanding
of the qualitative behavior that can occur for a given class
of flows.
In this review we focus on examples of Hamiltonian systems without
periodic orbits on a compact energy level in the context of the
Seifert conjecture. The paper is organized
as follows.
In Section \ref{sec:seifert} we recall Seifert's theorem
and the Seifert conjecture. We discuss the history of counterexamples
to the Seifert conjecture from Wilson's theorem (with its proof)
to the ultimate solution due to K. Kuperberg. We also briefly touch upon
related results on volume--preserving flows.
Section \ref{sec:Ham} is devoted to Hamiltonian vector
fields. We outline the constructions of counterexamples to the Hamiltonian
Seifert conjecture in dimensions greater than or equal to six,
mainly following the method introduced by the author of the
present review. We also state the Hamiltonian version of Seifert's
theorem.
Finally, a list of all constructions of Hamiltonian flows without
periodic orbits known at this moment is
presented in Section~\ref{sec:list}.
\subsection*{Acknowledgments.} The author is deeply grateful to
Ana Cannas da Silva, Carlos Gutierrez, Ely Kerman, Greg and Krystyna
Kuperberg, Debra Lewis,
Richard Montgomery, Marina Ratner, Andrey Reznikov, Claude Viterbo,
and the referee for their advice, remarks,
and useful discussions.
He would also like to thank the Universit\'{e} Paris-Sud
for its hospitality during the period when the work on this manuscript
was started.
\section{The Seifert Conjecture}
\labell{sec:seifert}
\subsection{Seifert's Theorem}
The history of examples of dynamical systems without periodic orbits
as we understand it today begins with a result of Seifert that
a $C^1$-smooth vector field on $S^3$ which is $C^0$-close to the Hopf
field has at least one periodic orbit, \cite{Seifert-1950}. Later this
theorem was generalized by Fuller, \cite{Fuller}, as follows.
\begin{Theorem}
\labell{Theorem:fuller}
Let $E\to B$ be a principal circle bundle over a compact manifold
$B$ with $\chi(B)\neq 0$. Let also $X$ be a $C^1$-smooth vector field
$C^0$-close to the field $X_0$ generating the $S^1$-action on $E$. Then
$X$ has at least one periodic orbit.
\end{Theorem}
Today we know two approaches to proving existence theorems for
periodic orbits such as Theorem \ref{Theorem:fuller}, neither of which
has been trivialized and made into a part of mathematical pop culture.
The first approach, not counting the original Seifert proof, relies on
the notion of the Fuller index, \cite{Fuller}, an analogue of the Euler
characteristic for periodic orbits. The second one, analytical, is
due to Moser \cite{Moser-1976}. Moser's method uses a version of an
infinite-dimensional inverse function theorem or, more precisely, its
proof. In fact, as has been recently noticed by Kerman, \cite{ely},
under the additional assumption that $X$ is $C^1$-close to $X_0$
(but not only $C^0$-close), Moser's argument can be significantly
simplified by applying the inverse function theorem in Banach spaces.
This stronger closeness hypothesis holds in virtually
all applications of the theorem known to the author. Note also that
in results such as Theorem~\ref{Theorem:fuller}, bounding
$X-X_0$ with respect to a higher order norm
often considerably simplifies the proof.
A representative corollary of Theorem~\ref{Theorem:fuller} (for which
the $C^1$-closeness assumption is sufficient) is as follows:
\emph{
Let $f\colon {\mathbb R}^{2n}\to {\mathbb R}$ be a smooth function
having a non-degenerate minimum at the origin and, say, $f(0)=0$.
Assume that all eigenvalues of $d^2 f(0)$ are equal.
Then for every sufficiently small $\epsilon>0$ the level $\{f=\epsilon\}$
carries a periodic orbit of the Hamiltonian flow of $f$.}
In effect, the assumption that all eigenvalues of $d^2 f(0)$ are equal is
purely technical and immaterial. Moreover, there are at least $n$
periodic orbits of the Hamiltonian flow on $\{f=\epsilon\}$.
(See \cite{weinstein-1973}, \cite{Moser-1976}, \cite{Bottkol}.)
For example, when all eigenvalues are distinct the existence of
$n$ periodic orbits readily follows from the inverse function theorem.
\begin{Remark} It is interesting to point out that Seifert's theorem
on $(4n+1)$-dimensional spheres (i.~e., Theorem~\ref{Theorem:fuller}
for the Hopf fibration $S^{4n+1}\to {\mathbb C}{\mathbb P}^{2n}$) can be proved by the
standard algebraic topological methods. Namely, let
$D_x\subset S^{4n+1}$ with $x\in S^{4n+1}$ be a small embedded
$4n$-dimensional disc transversal to
the fibers and centered at $x$. We can choose $D_x$ to depend
smoothly on $x$. Let $P(x)$ be the first intersection with $D_x$ of the
integral
curve of $X$ through $x$. Clearly $P(x)=x$ if and only if the integral
curve closes up after one revolution along the fiber. There exists
a unique vector $v_x$ tangent to $D_x$ at $x$ such that $P(x)$ lies
on the geodesic in $D_x$ beginning at $x$ in the direction $v_x$
and the distance from $x$ to $P(x)$ is $\parallel v_x \parallel$.
Hence, $v_x=0$ if and only if $P(x)=x$. Thus it suffices to prove
that the vector field $v$ vanishes at least at one point on $S^{4n+1}$.
Note also that $v$ is normal to $X_0$ with respect to a suitably
chosen metric and so $v$ and $X_0$ would be linearly independent if
$v$ did not vanish.
The sphere $S^{4n+1}$ does not admit two linearly independent vector
fields. (This is a very particular case of Adams' theorem that can be
proved by using the Steenrod squares; see \cite{steenrod}.)
Therefore, on $S^{4n+1}$ the field $v$ vanishes somewhere and $X$
has a periodic orbit.
\end{Remark}
\subsection{The Seifert Conjecture}
\labell{subsec:seif}
In the same paper, \cite{Seifert-1950}, where he proved his theorem
discussed above,
Seifert asked whether every non-singular vector field on $S^3$
has a periodic orbit. The conjectured affirmative answer to this question
has become known as the Seifert conjecture. The three--dimensional
sphere plays, of course, a purely historical role in the
conjecture and a similar question can be asked for other manifolds
and also for more restricted classes of vector fields (e.~g.,
real--analytic, divergence--free, or Hamiltonian) or for non-vanishing
vector fields in a fixed homotopy class (see Remark \ref{rmk:homotop}).
The first counterexample to the generalized Seifert conjecture is due
to Wilson, \cite{wilson}, who showed that the smooth Seifert conjecture
fails in dimensions greater than three by proving the following
\begin{Theorem}
\labell{thm:wilson}
Let $M$ be a compact connected manifold with $\chi(M)=0$ and
$m=\dim M\geq 4$. Then there exists a smooth vector field on $M$
without periodic orbits and singular points.
\end{Theorem}
\begin{proof}
Let $v_0$ be a vector field on $M$ without zeros.
The idea is to modify $v_0$ so as to eliminate periodic orbits.
First assume that $v_0$ has a finite number of periodic orbits.
(Note that this
is not a generic property.) Then each of the periodic orbits can be
eliminated by the following procedure which is
common to many constructions of vector fields without closed
integral curves.
A plug is a manifold $P=B\times I$, where $I=[-1,1]$ and
$B$ is a compact manifold of dimension $m -1$ with boundary,
and a non-vanishing vector field $w$ on $P$ with the following
properties\footnote{This definition, more restrictive than that given in
\cite{kug,kugk}, is only one of several existing definitions
of plugs. See also \cite{kuk}, \cite{gi:seifert},
and references therein. However,
the differences between these definitions seem to be of a technical
rather than conceptual nature.}:
\begin{enumerate}
\item
\emph{The boundary condition}:
$w=\partial/\partial t$ near $\partial P$, where $t$ is the coordinate on $I$.
As a consequence, an integral curve of $w$ can only leave the plug
through $B\times \{1\}$.
\item
\emph{Existence of trapped trajectories}:
There is a trajectory of $w$ beginning on $B\times \{-1\}$ that
never exits the plug. Such a trajectory is said to be trapped in $P$.
\item
\emph{Aperiodicity}:
$w$ has no periodic orbits in $P$. In other words, the ``flow''
of $w$ is aperiodic.
\item
\emph{Matched ends or the entrance--exit condition}:
If two points $(x, -1)$, the ``entrance'', and $(y,1)$,
the ``exit'', are on the same integral curve of $w$, then
$x=y$. Hence, every
trajectory of $w$ which enters and exits $P$ has its exit point right
above the entrance point (with $I$ being regarded as the vertical
direction).
\item
\emph{The embedding condition}:
There exists an embedding $i\colon P\hookrightarrow {\mathbb R}^m$ such that
$i_*(w)=\partial/\partial x_m$ near $\partial P$. In other words, near the boundary
of $P$, the embedding $i$ preserves the vertical direction.
\end{enumerate}
Assuming that the plug has been constructed, we can use it to modify $v_0$.
Namely, consider a small flow box near a point on a periodic orbit
of $v_0$. We choose coordinates on the box so that $v_0=\partial/\partial x_m$.
Using $i$, we embed $P$ into the box so that the trapped trajectory
matches the periodic orbit of $v_0$. Let $v$ be the vector field
obtained by replacing $v_0$ by $w$ inside of $i(P)$. By
the first property of the plug (the boundary condition), $v$ is
smooth. By the third condition (aperiodicity) and
the fourth (matched ends), $v$ has one periodic orbit less than $v_0$.
By applying this procedure to every periodic orbit of $v_0$, we
eliminate all of them.
Let us now turn to the construction of a plug. First observe
that when a plug $P'$ satisfying all of the above conditions but
the entrance--exit condition is constructed, it is easy to
find a genuine plug $P$. Namely, $P$ is the union of two copies
of $P'$ with the second copy put upside-down on the top of the
first copy. Hence, from now on we may forget about the entrance--exit
condition. This trick, with minor modifications, is present in
all constructions of plugs.
The plug from \cite{wilson}, the so-called \emph{Wilson's plug}, is
the cylinder over the unit ball $D^{m-1}\subset {\mathbb R}^{m-1}$.
Thus $B=D^{m-1}$ and $t=x_m$, and the plug is automatically embedded into
${\mathbb R}^m$. To define $w$, fix an embedding
${\mathbb T}^2\hookrightarrow D^{m-1}\times 0$ whose normal bundle is trivial.
(Hence the requirement that $m\geq 4$.) Then a tubular neighborhood
$U$ of ${\mathbb T}^2$ in $P$ is diffeomorphic to
${\mathbb T}^2\times D^{m-2}(\epsilon)$, where $D^{m-2}(\epsilon)$ is the
$(m-2)$-dimensional ball of a small radius $\epsilon>0$. Let $w_1$ be an
irrational vector field on ${\mathbb T}^2$ and let $f$ be a bump function on
$D^{m-2}(\epsilon)$ equal to 1 at the center of the ball and vanishing near
$\partial D^{m-2}(\epsilon)$. Since ${\mathbb T}^2$ is embedded into $D^{m-1}\times 0$,
we may
assume that each $y\times D^{m-2}(\epsilon)$, $y\in{\mathbb T}^2$, is parallel to
the vertical direction, i.~e., $\partial/\partial x_m$ is tangent to the fibers
$y\times D^{m-2}(\epsilon)$ of the tubular neighborhood $U$. Now we set
\begin{equation}
\labell{eq:plug}
w=fw_1+(1-f)\frac{\partial }{\partial x_m}
\end{equation}
on $U$ and $w=\partial/\partial x_m$ on $P\smallsetminus U$. It is easy to check that
$(P,w)$ satisfies the plug conditions (except the entrance--exit condition).
In particular, all trapped trajectories are asymptotic to ${\mathbb T}^2$.
This completes the construction of the plug. In what
follows we will refer to ${\mathbb T}^2$ with an irrational flow embedded into
the plug as the \emph{core} of the plug.
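Note, in passing, that $w$ defined by \eqref{eq:plug} is indeed
non-vanishing (a point left implicit above): $w_1$ is tangent to the
${\mathbb T}^2$ directions while $\partial/\partial x_m$ is tangent to the fibers
$y\times D^{m-2}(\epsilon)$, so in a metric on $U$ making these
directions orthogonal we have
\[
|w|^2 = f^2\, |w_1|^2 + (1-f)^2 > 0 \, ,
\]
since $f$ and $1-f$ never vanish simultaneously and $w_1$ has no zeros
on ${\mathbb T}^2$.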
\begin{Remark}
Wilson's plug defined in this section is clearly only $C^\infty$-smooth.
However, its construction can be modified to apply in the real
analytic category; see, e.~g.,
\cite{ghys,kugk} and references therein.
\end{Remark}
When the periodic orbits of the flow of $v_0$ are not isolated, the
plug $P$ needs to be slightly altered. Namely, one can
construct $P$ so that the beginnings in $B\times\{-1\}$
of trapped trajectories form a set with non-empty interior. Then a finite
number of plugs are inserted into $M$ so as to interrupt every
trajectory of $v_0$ (regardless of whether it is closed or not). As before,
no new periodic orbit is created, but the original ones are eliminated.
\end{proof}
\begin{Remark}
In the plug used in the case where there are infinitely many
periodic orbits, we clearly have $\mathop{\mathrm{div}}\, w\neq 0$.
This amounts to the fact that in the volume--preserving version of
Wilson's argument, one has to start with a vector field having only
a finite number of periodic orbits.
\end{Remark}
The next step after Wilson's result was the construction, due to
Schweitzer \cite{schweitzer}, of a $C^1$-smooth non-vanishing
vector field on $S^3$ (or on any compact three-dimensional manifold)
without periodic
orbits. The proof again is by inserting plugs. Schweitzer's plug
uses as the core the closure of a trajectory of the Denjoy flow on
${\mathbb T}^2$ instead of an irrational flow. (Hence, only $C^1$-smoothness.)
Since
the trajectory is neither dense nor closed, one can take as $B$
the torus ${\mathbb T}^2$ with
a small open disc deleted in order to have the embedding condition
satisfied. The vector field on $P$ is then given by \eqref{eq:plug}
where $w_1$ is the Schweitzer field on $B$ and $f$ is an appropriately
chosen cut-off function.
Schweitzer's construction was later improved by Harrison,
\cite{harrison}, to obtain a $C^{2+\epsilon}$-smooth vector field.
Finally, a major breakthrough came when K. Kuperberg, \cite{kuk},
constructed a $C^\infty$-smooth (and even real--analytic)
three-dimensional plug, thus producing a non-singular real--analytic
flow without periodic orbits on every compact three-manifold.
(See also \cite{ghys} and \cite{kugk}.)
Kuperberg's plug is entirely different from those described above.
One begins with a ``plug'' $P'$ satisfying all of the conditions of the
plug but aperiodicity -- there are exactly two periodic trajectories
inside of $P'$. Then one builds $P$ by inserting parts of $P'$
into $P'$ again (self-insertion) so that the vector field on the
inserted parts matches the original vector field. The self-insertion is
performed so as to guarantee that the resulting flow on $P$ is
free of periodic orbits.
\begin{Remark}
\labell{rmk:proliferation}
The common feature of the constructions described above (with
the exception of Kuperberg's plug) is that a non-singular vector field
without periodic orbits is used as the core of the plug in order to
trap a trajectory. For example, in Wilson's plug the core flow is an
irrational flow on the torus and in Schweitzer's plug the core is the
Denjoy flow. Hence, one non-singular vector field without periodic orbits
gives rise to a multitude of such vector fields with various
higher--dimensional phase spaces (proliferation of aperiodic flows).
This idea is also applied to find counterexamples to the Seifert
conjecture in other categories.
\end{Remark}
\begin{Remark}
\labell{rmk:homotop}
The vector field $v$ constructed in the proof of Theorem \ref{thm:wilson}
is homotopic to the original vector field $v_0$ in the class of
non-singular vector fields. This follows from the fact that the field $w$
on Wilson's plug is homotopic to the vertical vector field. The same
holds for many other constructions of plugs including
Kuperberg's plug. Hence, the homotopy type of a non-singular vector
field is preserved while the vector field is altered by inserting
the plugs to eliminate the periodic orbits.
\end{Remark}
\subsection{Volume--Preserving Flows.}
\labell{subsec:vol}
The Seifert conjecture for this class of flows
seems to be rather similar to the Seifert conjecture in the
smooth and real analytic categories. To be more precise, as was pointed
out by A. Katok,
\cite{katok}, \emph{a divergence--free smooth
non-vanishing vector field $v_0$ on a compact manifold of dimension
$m\geq 4$ can be changed into one without periodic orbits,
provided $v_0$ has only a finite number of periodic orbits.}
In fact, the field $w$ given by \eqref{eq:plug} on Wilson's plug
(for isolated periodic orbits) can be chosen divergence--free.
This yields a smooth volume--preserving flow on $S^{2n+1}$, $2n+1\geq 5$,
without periodic orbits.
Much less is known in dimension three. Let us state two important
results due to G. Kuperberg, \cite{kug}.
The first one is that \emph{every compact three--manifold
$M$ possesses a volume--preserving $C^\infty$-smooth flow
with a finite number of periodic orbits and no fixed points},
\cite{kug}. This follows from the
fact that $M$ can be obtained from ${\mathbb T}^3$ by a series of Dehn
surgeries (provided that $M$ is orientable). Let us equip ${\mathbb T}^3$ with
an irrational flow. A Dehn surgery can be interpreted as the insertion
of a version of a smooth volume--preserving plug $P$ into ${\mathbb T}^3$.
These plugs differ from those described in Section \ref{subsec:seif}
in some essential ways.
The plugs $P$ are not aperiodic -- each $P$ carries exactly two periodic
orbits. Moreover, the flow on $P$ is not standard on the boundary
to account for the Dehn--twist in the surgery. Note that this method
also yields a flow on $M$ with exactly two periodic orbits.
The second result is that \emph{every compact three--manifold
possesses a non-vanishing divergence--free $C^1$ vector
field without periodic orbits}, \cite{kug}. In
particular, when applied to $S^3$, this theorem gives a
volume--preserving
$C^1$ counterexample to the Seifert conjecture. The proof of the theorem
uses the previously mentioned result and a volume--preserving version
of Schweitzer's plug to eliminate periodic orbits. To construct such a
plug, G. Kuperberg applies, in a non-trivial way, stretching in the
vertical direction to compensate for the area change resulting from
the Denjoy flow. (See \cite{kug} for more details.)
\begin{Remark}
\labell{rmk:Ratner}
There is a broad class of volume--preserving flows without periodic
orbits on homogeneous spaces of Lie groups. These flows arise as actions
of unipotent subgroups.
For example, the horocycle flow (Section \ref{subsec:proofs}) and
its Hamiltonian analogues from Example \ref{exam:hor-high} are
among such flows. Deep results on the closures of orbits for
these flows and on their invariant measures are obtained by Ratner.
(See \cite{ratner} and references therein).
\end{Remark}
\section{Hamiltonian Vector Fields without Periodic Orbits}
\labell{sec:Ham}
The question of (non-)existence of periodic orbits for Hamiltonian
dynamical systems differs from that for general smooth dynamical
systems in at least one important way -- Hamiltonian systems tend
to have a lot of periodic orbits. For example, it is reasonable
to expect that a given Hamiltonian system with a proper smooth
Hamiltonian has a closed orbit on every regular energy level.
Taken literally, this statement is not correct in general. (For example,
Zehnder, \cite{zehnder}, found a Hamiltonian flow on ${\mathbb T}^{2n}$,
$2n\geq 4$, with an irrational symplectic structure such that there
are no periodic orbits for a whole
interval of energy values; see Example \ref{ex:Zehn} below.) However,
periodic orbits are known
to exist for almost all energy values for a broad class of symplectic
manifolds. For instance, as shown by
Hofer, Zehnder, and Struwe, \cite{ho-ze:per-sol,str},
almost all levels of a smooth proper
function on ${\mathbb R}^{2n}$ carry at least one periodic orbit.
A similar theorem holds for cotangent bundles, \cite{hv}. (The reader
interested in a detailed discussion and further references
should consult \cite{ho-ze:book}.)
Furthermore, according to the $C^1$-closing lemma of Pugh and Robinson,
\cite{pugh-rob}, a periodic orbit can be created from a recurrent
trajectory by a $C^1$-small smooth perturbation of the original
dynamical system. (As Carlos Gutierrez pointed out to the author,
the perturbation can in fact be made $C^\infty$-smooth,
\cite{gut:letter}.) As a consequence of the closing lemma, a
$C^1$-generic system has the union of its periodic trajectories
dense in the set of its recurrent points. Both of these results hold
for Hamiltonian systems, \cite{pugh-rob}.
The situation becomes more subtle when the $C^1$-topology is replaced
by the $C^k$-topology with $k>1$. The $C^2$-closing lemma has not yet been
proved or disproved. (See, e.g., \cite{gut:example}, \cite{Car},
and \cite{AZ} for partial results, examples, and references.)
In the Hamiltonian setting,
the systems whose periodic trajectories are dense may no longer be
generic if $k$ is roughly speaking greater than the dimension
of the manifold. For example, according to M. Herman,
\cite{herman1,herman2}, Hamiltonian vector fields
with Hamiltonians $C^k$-close to Zehnder's Hamiltonian on
${\mathbb T}^{2n}$ do not have
periodic orbits for an interval of energy values, provided that
the symplectic form satisfies a certain Diophantine condition and $k>2n$.
These examples are, however, in some sense exceptional. In fact, the
theorems on the density of energy values for periodic orbits can be
interpreted as that the Hamiltonian $C^k$-closing lemma holds in a very
strong form for many symplectic manifolds.
It is still not known if the $C^k$-closing lemma with $k\geq 2$ fails
in general for Hamiltonian flows when the symplectic form is exact near
the energy level.
The examples of Hamiltonian flows without periodic orbits on
a compact energy level are scarce. In Section \ref{sec:list} we
attempt to list all known constructions of such flows.
In this section, we focus on the Hamiltonian Seifert conjecture
and on one particular method to construct Hamiltonian vector fields
without periodic orbits.
\begin{Remark}
There are a number of results concerning existence of
periodic orbits on a fixed
energy level. Recall that according to Weinstein's conjecture, there
is a periodic orbit on an energy level of contact type, \cite{we:conj}.
This conjecture was proved for ${\mathbb R}^{2n}$ by Viterbo in \cite{vi:Theorem}
and then for many other symplectic manifolds (e.~g., for cotangent
bundles in \cite{hv}). The reader interested in details and more
up-to-date references should consult \cite{ho}, \cite{ho-ze:book}, or
\cite{vi:functors}.
Note also that the Seifert conjecture is discussed in the context of
contact topology and hydrodynamics in \cite{GE}.
\end{Remark}
\subsection{The Seifert Conjecture for Hamiltonian Vector Fields}
\labell{subsec:seif-ham}
The Seifert conjecture can be extended to Hamiltonian flows in a number
of ways. For example, one may ask if there is a proper smooth
function on a given symplectic manifold (e.g., ${\mathbb R}^{2n}$) having
a regular level without periodic orbits.
Recall that a \emph{characteristic} of a two-form $\eta$ of rank
$(2n-2)$ on a $(2n-1)$-dimensional manifold is an integral curve
of the field of directions formed by the null-spaces $\ker\eta$.
Thus, the question can be reformulated as whether or not in a given
symplectic manifold there exists
a regular compact hypersurface without closed characteristics. One can
even ask whether the manifold admits a function with a sufficiently
large set of energy levels without periodic orbits.
Let $i\colon M\hookrightarrow W$ be an embedded smooth compact
hypersurface without boundary in a $2n$-dimensional symplectic manifold
$(W,\sigma)$.
\begin{Theorem}
\labell{Theorem:main1}
Assume that $2n\geq 6$ and that $i^*\sigma$ has only a finite number of
closed characteristics. Then there exists a $C^\infty$-smooth embedding
$i'\colon M\hookrightarrow W$, which is $C^0$-close and isotopic to $i$,
such that ${i'}^*\sigma$ has no closed characteristics.
\end{Theorem}
An irrational ellipsoid $M$ in the standard symplectic vector space
${\mathbb R}^{2n}=W$ corresponds to a collection of $n$ uncoupled harmonic
oscillators whose frequencies are linearly independent over
${\mathbb Q}$, i.~e., $M$ is the unit level of a quadratic Hamiltonian
with rationally independent frequencies. Thus, $M$ carries exactly $n$
periodic orbits.
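Concretely (a standard normal form, recalled here only for illustration), one may take
\begin{equation*}
H(x,y)=\sum_{j=1}^{n}a_j\left(x_j^2+y_j^2\right),\qquad a_j>0,
\end{equation*}
with the frequencies $a_j$ linearly independent over ${\mathbb Q}$ and $M=\{H=1\}$. The Hamiltonian flow rotates each $(x_j,y_j)$-plane with its own frequency, so an orbit closes up only when all but one of the amplitudes vanish; the $n$ circles obtained by intersecting $M$ with the coordinate planes are then the only periodic orbits.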
Applying Theorem \ref{Theorem:main1}, we obtain the following
\begin{Corollary}
\labell{Corollary:sphere}
For $2n\geq 6$, there exists a $C^\infty$-embedding
$S^{2n-1}\hookrightarrow {\mathbb R}^{2n}$ such that the restriction of the
standard symplectic form to $S^{2n-1}$ has no closed characteristics.
\end{Corollary}
\begin{Corollary}
\labell{Corollary:function}
For $2n\geq 6$, there exists a $C^\infty$-function
$h\colon{\mathbb R}^{2n}\to {\mathbb R}$, $C^0$-close and isotopic
(with compact support) to a positive definite quadratic form,
such that the Hamiltonian flow of $h$ has no closed trajectories on
the level set $\{ h=1\}$.
\end{Corollary}
\begin{Remark}
These results, proved in \cite{gi:seifert} and, independently,
by M. Herman \cite{herman-fax}, originally
required the ambient space to be at least eight-dimensional, i.~e.,
$2n\geq 8$, in the $C^\infty$-case. A
$C^{2+\epsilon}$-hypersurface
in ${\mathbb R}^6$ was found by M. Herman \cite{herman-fax}. In
\cite{gi:seifert97}, Theorem \ref{Theorem:main1} and its corollaries
were extended to $2n=6$.
\end{Remark}
\begin{Remark} Almost nothing is known about how large the
set of energy values ${\mathcal E}$ of the levels without periodic orbits can be.
It is clear that as in Corollary \ref{Corollary:function}, there exists
a function on ${\mathbb R}^{2n}$ for which ${\mathcal E}$ is infinite (but discrete).
It also seems plausible that ${\mathcal E}$ can have limit points at critical
values of the Hamiltonian.
However, it is unknown whether or not ${\mathcal E}$ can have a limit point
that is a regular value.
As is clear from the argument of \cite{gi:seifert,gi:seifert97}, there
exists a $C^0$-foliation of an open neighborhood of
an ellipsoid in ${\mathbb R}^{2n}$ such that every leaf is $C^\infty$-smooth and
isotopic to the sphere and no leaf carries closed characteristics.
\end{Remark}
\begin{Remark}
Theorem \ref{Theorem:main1} can be applied to any compact
hypersurface with a finite number of closed characteristics.
The only known examples of such hypersurfaces in ${\mathbb R}^{2n}$
that are not
diffeomorphic to $S^{2n-1}$ are non-simply connected hypersurfaces
constructed by Laudenbach, \cite{laud}.
\end{Remark}
An alternative way to formulate the Hamiltonian Seifert conjecture is
to consider an odd--dimensional manifold with a maximally non-degenerate
closed two-form (rather than the restriction of
the symplectic form to a hypersurface).
A result similar to Theorem \ref{Theorem:main1}, involving only
two-forms on $M$ but no symplectic embedding, also holds.
Let $\dim M=2n-1$
and let $\eta$ be a closed maximally non-degenerate (i.~e., of rank
$(2n-2)$) two-form on $M$.
Recall that two such forms $\eta$ and $\eta'$ are said to be \emph{homotopic}
if there exists a family $\eta_\tau$, $\tau\in [0,1]$,
of closed maximally non-degenerate forms connecting $\eta=\eta_0$ with
$\eta'=\eta_1$ and such that all $\eta_\tau$ have the same cohomology
class. The following theorem
is proved in \cite{gi:seifert} and \cite{gi:seifert97}.
\begin{Theorem}
\labell{Theorem:main2}
Assume that $2n-1\geq 5$ and that $\eta$ has a finite number of
closed characteristics. Then there exists a closed maximally
non-degenerate 2-form $\eta'$ on $M$ which is homotopic to $\eta$ and has
no closed characteristics.
\end{Theorem}
\begin{Remark}
\labell{rmk:hms}
In fact, Theorem \ref{Theorem:main2} is a corollary of Theorem
\ref{Theorem:main1}. To derive Theorem \ref{Theorem:main2} from Theorem
\ref{Theorem:main1}, consider a pair $(M^{2n-1},\eta)$, where
$\eta$ is closed and maximally non-degenerate. Then, as follows from
Gotay's coisotropic embedding theorem, \cite{gotay}, there exists a
symplectic manifold $(W^{2n},\sigma)$ and a proper embedding
$M\hookrightarrow W$ such that $\sigma|_M=\eta$.
More explicitly, set $W=M\times (-1,1)$ and let $t$ be the coordinate
on $(-1,1)$.
To construct $\sigma$, fix a 1-form $\alpha$ on $M$ such that
$\ker \alpha$ is everywhere transversal to the characteristics of $\eta$.
Then $\sigma=\eta+d(\epsilon t\alpha)$ is symplectic on $W$, provided
that $\epsilon>0$ is small enough. (The author is grateful to Ana Cannas
da Silva for this remark.)
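To see that $\sigma$ is indeed non-degenerate for small $\epsilon$, note that forms pulled back from the $(2n-1)$-dimensional manifold $M$ cannot contribute a volume form on $W$, so (suppressing signs and combinatorial details in this sketch)
\begin{equation*}
\sigma^n=\left(\eta+\epsilon\, dt\wedge\alpha+\epsilon t\, d\alpha\right)^n
=n\epsilon\, dt\wedge\alpha\wedge\eta^{n-1}+O(\epsilon^2),
\end{equation*}
and $\alpha\wedge\eta^{n-1}$ is a volume form on $M$ precisely because $\ker\alpha$ is transversal to the characteristic direction $\ker\eta$.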
Technically, however, it is more convenient to prove
Theorem \ref{Theorem:main2} first and then to modify its proof
to obtain Theorem \ref{Theorem:main1}.
\end{Remark}
\begin{Remark}
Theorem \ref{Theorem:main2} extends to the real analytic case: one can
make the form $\eta'$ real analytic, provided that $M$ and $\eta$ are
real analytic. The argument is the same as that used in the construction
of a real analytic version of Wilson's, \cite{wilson}, or Kuperberg's,
\cite{kuk}, flow. (See \cite{ghys} and \cite{kugk}.)
\end{Remark}
\subsection{The Hamiltonian Seifert Theorem} Before we turn to the
outline of the proofs of Theorems \ref{Theorem:main1} and
\ref{Theorem:main2}, let us state a result which can be viewed as a
Hamiltonian version of Theorem \ref{Theorem:fuller}.
Recall that a periodic orbit of a vector field or an integral curve
of a one-dimensional foliation is called non-degenerate if the
linearization of its Poincar\'{e} return map does not have $1$ as
an eigenvalue. Let $(B, \omega)$ be a compact symplectic manifold and let
$\pi\colon M\to B$ be a principal $S^1$-bundle. For a one-form $\lambda$
such that $d\lambda$ is sufficiently $C^0$-small, the closed two-form
$\eta=\pi^*\omega+d\lambda$ on $M$ is maximally non-degenerate (and
homotopic to $\pi^*\omega$).
\begin{Theorem}[\cite{gi:MathZ}]
\labell{thm:Ham-seifert}
If $d\lambda$ is $C^0$-small enough, the number of
closed characteristics
of $\eta$ is strictly greater than the cup-length of $B$. If,
in addition, all closed characteristics are non-degenerate, then the number
of closed characteristics is greater than or equal to the sum of
Betti numbers of $B$.
\end{Theorem}
Theorem \ref{thm:Ham-seifert} generalizes some of the results (for
equal eigenvalues) of \cite{weinstein-1973,Moser-1976,Bottkol}.
A simple geometrical proof for the case where $B$ is a surface and
$\lambda$ is $C^2$-small can be found in \cite{gi:FA}.
Theorem \ref{thm:Ham-seifert} is closely related to the Arnold
conjecture and to the problem of existence of periodic orbits of
a charge in a magnetic field; \cite{gi:Cambr}.
Note also that if $d\lambda$ is not $C^0$-small, but
$\eta$ is still non-degenerate and homotopic to $\pi^*\omega$, the form
$\eta$ may have no closed characteristics at all. An example is the
horocycle flow described in the next section. (See \cite{gi:Cambr}
and \cite{gi:MathZ} for more details.)
\subsection{Proofs of Theorems \ref{Theorem:main1} and \ref{Theorem:main2}
and Symplectic Embeddings}
\labell{subsec:proofs}
There are two essentially different methods to prove these theorems.
Both approaches follow the same general line as Wilson's argument
(i.~e., the proof of Theorem \ref{thm:wilson}), modified to make it work
in the symplectic category, and vary only in the dynamics of the plug.
The plug introduced by M. Herman, \cite{herman-fax}, is the
symplectization
of Wilson's plug when $2n\geq 8$ or of the Schweitzer--Harrison plug
when $2n=6$ (hence the $C^{2+\epsilon}$-smoothness constraint) modified at
infinity. In other words, these plugs are obtained by taking the
induced flows on the cotangent bundles to the non-Hamiltonian plugs
and altering these flows away from a neighborhood of the zero section while
keeping all the properties of the plug.
The plugs from \cite{gi:seifert,gi:seifert97}
are built similarly to Wilson's plug but the entire construction
is Hamiltonian. Let us give some more details on this method focusing
specifically on the proof of Theorem \ref{Theorem:main2}. (The passage
to Theorem \ref{Theorem:main1} is simply by showing that the same
argument can be carried out for hypersurfaces. See \cite{gi:seifert}.)
In the construction of the plug $P$ below we identify a neighborhood
in $M$ containing $P$ with a small ball in ${\mathbb R}^{2n-1}$
equipped with the two-form $\sigma$ induced by the canonical inclusion
${\mathbb R}^{2n-1}\subset{\mathbb R}^{2n}$.
The key element that makes Wilson's construction work in the
Hamiltonian category is that the core of the plug (i.~e., the flow
replacing the irrational flow on the torus)
is also chosen to be Hamiltonian. In other words, to have the aperiodicity
condition satisfied, we need to find a Hamiltonian flow on a symplectic
manifold having no periodic orbits on some compact energy level.
Furthermore, to have the embedding condition satisfied, this energy
level with the induced two-form has to be embeddable into
${\mathbb R}^{2n-1}$ to be made into a part of the plug.
(In particular, the symplectic form must be exact on a neighborhood
of the energy level.) The Hamiltonian flow used in
\cite{gi:seifert,gi:seifert97} is the horocycle flow, which we
now describe.
Let $\Sigma$ be a closed surface with a hyperbolic metric, i.~e., a
metric with constant negative curvature $K=-1$. Denote by
$\lambda$ the canonical Liouville one-form ``$p\,dq$'' on $T^*\Sigma$
and by $\Omega$ the pull-back to $T^*\Sigma$ of the area form on $\Sigma$.
The form $\omega=d\lambda+\Omega$ is symplectic on $T^*\Sigma$ and
exact on the complement of the zero section. The Hamiltonian flow
of the standard metric Hamiltonian on $T^*\Sigma$ with respect to
the twisted symplectic form $\omega$ is known to have no periodic
orbits on the unit energy level $ST^*\Sigma$. Indeed,
the restriction $\varphi^t$ of this flow to $ST^*\Sigma$ is the standard
horocycle
flow. The fact that $\varphi^t$ has no periodic orbits is a consequence of
the classical result of Hedlund, \cite{hedlund}, that the horocycle flow
is minimal, i.~e., all orbits are dense. We will return to the horocycle
flow in Section \ref{sec:list}.
To construct the plug, we then need to prove that there is a
``symplectic'' embedding of a neighborhood $U$ of $ST^*\Sigma$ into
${\mathbb R}^{2n-1}$, i.~e., an embedding $j\colon U\hookrightarrow{\mathbb R}^{2n-1}$
such that $j^*\sigma=\omega$. When $2n-1\geq 7$, this readily follows
from a general symplectic embedding theorem due to Gromov,
\cite{gr:icm,gr:book}.
When $2n-1=5$ an additional argument is required. Consider the forms
$\omega_t=d\lambda+t\Omega$ on $T^*\Sigma$. These forms are
symplectic for all $t$. The form $\omega_0$ is the standard symplectic
form on $T^*\Sigma$ and $\omega_1$ is the twisted form $\omega$.
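One can check this directly: since $\Omega$ is pulled back from the two-dimensional base, $\Omega\wedge\Omega=0$ and $d\lambda\wedge\Omega=0$ on the four-manifold $T^*\Sigma$ (in local coordinates each wedge contains a repeated $dq_i$), so
\begin{equation*}
\omega_t\wedge\omega_t=d\lambda\wedge d\lambda\neq 0
\end{equation*}
independently of $t$, and each $\omega_t$ is obviously closed.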
Note that if $U$ is small enough, there exists a ``symplectic''
embedding $j_0\colon (U,\omega_0)\hookrightarrow ({\mathbb R}^5, \sigma)$. This is an
easy consequence of the fact that
there exists a Lagrangian immersion $\Sigma\to{\mathbb R}^4$,
\cite{gr:icm,gr:book}. Note also that $U$ can be replaced
by a closed neighborhood of $ST^*\Sigma$. Then the existence of a
symplectic
embedding for $2n-1=5$ is a particular case of the following
result improving the dimensional constraints from \cite{gr:icm,gr:book}
by one.
Let $U$ and $V$ be manifolds of
equal dimensions. The manifold $U$ is assumed to be compact, perhaps with
boundary, while $V$ may be open but must be a manifold without
boundary. Let $\sigma$ be a symplectic form on $V$. Abusing notation,
also denote by $\sigma$ the pull-back of $\sigma$ to $V\times{\mathbb R}$
under the natural projection $V\times{\mathbb R}\to V$.
\begin{Theorem}[\cite{gi:seifert97}]
\labell{Theorem:embed}
Let $\omega_t$, $t\in [0,1]$, be a family of symplectic forms
on $U$ in a fixed cohomology class: $[\omega_t]={\mathit const}$. Assume
that there is an embedding
$j_0\colon U\hookrightarrow V\times{\mathbb R}$ such
that $j_0^*\sigma=\omega_0$. Then there exists an embedding
$j\colon U\hookrightarrow V\times{\mathbb R}$, isotopic to $j_0$, with
$j^*\sigma=\omega_1$.
\end{Theorem}
\begin{Remark}
\labell{Remark:moser1}
Since $\sigma|_V$ is symplectic, the composition of $j$ with
the projection to $V$ is necessarily an immersion.
When $\partial U=\emptyset$, Theorem \ref{Theorem:embed} follows immediately
from Moser's theorem \cite{moser}.
\end{Remark}
The rest of the proof (i.~e., the construction of the form $\eta'$
on the complement of the core) proceeds according to the same scheme
as the proof of
Theorem \ref{thm:wilson} with more or less straightforward modifications;
see \cite{gi:seifert} for details.
This completes the outline of the proofs of Theorems \ref{Theorem:main1}
and \ref{Theorem:main2}.
\section{The List of Hamiltonian Flows Without Periodic Orbits}
\labell{sec:list}
In this section we list all examples known to the author of smooth
Hamiltonian systems on symplectic manifolds $(V,\omega)$ having
no periodic orbits on a compact regular level $M$. This list is similar to
the one given in \cite{gi:seifert97} with the exception of Example
\ref{exam:hor-high}.
The list is divided into two parts according to whether $\omega$
is exact near $M$ or not. The reason for this division is that
the qualitative behavior of the flows for which $[\omega|_M]\neq 0$ can be
expected to differ in an essential way from the behavior of the flows
with $[\omega|_M]=0$.
For instance, Zehnder's flow (Example \ref{ex:Zehn}) appears to be more
robust than seems possible for
``exact'' flows.
\bigskip
\noindent {\bf Case 1:}{\em~ The form $\omega$ is not exact.}
\begin{Example}[Zehnder's torus, \cite{zehnder}]
\labell{ex:Zehn}
Let $2n\geq 4$. Consider the torus $V={\mathbb T}^{2n}$ with an
irrational translation-invariant symplectic structure $\omega$.
Choose a Hamiltonian $H$ on $V$ so that every level $\{ H=c\}$ with
$c\in (0.5, 1.5)$ is the union
of two standard embedded tori ${\mathbb T}^{2n-1}\subset {\mathbb T}^{2n}$. Since $\omega$
is irrational, the characteristics of $\omega|_{{\mathbb T}^{2n-1}}$ form a
quasiperiodic flow on ${\mathbb T}^{2n-1}$. Thus, none of the levels
$\{H=c\}$ with $c\in (0.5, 1.5)$ carries a periodic orbit.
Note that $\omega$ is not exact on any of these energy levels.
As we have already pointed out,
the flow in question exhibits remarkable stability properties according
to a result of M. Herman, \cite{herman1,herman2}.
\end{Example}
\begin{Example}[The Hamiltonian horocycle flow in dimension $2n$]
\labell{exam:hor-high}
Let ${\mathbb C} H^n$ be the complex hyperbolic space equipped with its
standard K\"{a}hler metric (see, e.g., Section XI.10 of \cite{kob-nom}).
Pick a discrete subgroup $\Gamma$ in the group $\mbox{\rm SU}(1,n)$
of Hermitian isometries of ${\mathbb C} H^n$ such that $B=\Gamma\backslash{\mathbb C} H^n$
is smooth and compact. (To see that $\Gamma$ exists recall that
according to a theorem of Borel, \cite{borel}, there is a discrete
subgroup $\Gamma$ such that $\Gamma\backslash{\mathbb C} H^n$ is compact. Then
by Selberg's lemma, \cite{selberg}, $\Gamma$ can be chosen
so that $B$ is smooth.)
Let $H$ be the standard metric Hamiltonian and let $d\lambda$ be
the standard symplectic structure on $T^*B$. Denote by $\Omega$
the pull-back to $T^*B$ of the K\"{a}hler symplectic form on $B$.
The Hamiltonian flow of $H$ with respect to
the twisted symplectic structure $\omega=\Omega+d\lambda$ has no periodic
orbits on the level $M=\{H=1\}$. This follows from the fact that this
flow is
generated by the (right) action of a unipotent one-parameter subgroup
of $\mbox{\rm SU}(1,n)$ and that $\Gamma$ contains no unipotent elements,
\cite{KM}; see also \cite{rag}, \cite{ratner}, and references therein.
When $n=1$ this construction gives exactly the horocycle flow on the
unit (co)tangent bundle to a surface $B=\Sigma$. If $n>1$, the form
$\omega$ is not exact in any neighborhood of $M$.
Note also that the Hamiltonian horocycle flows arise in the
description of the motion of a charge in a magnetic field on the
configuration space $B$; see \cite{gi:Cambr,gi:MathZ,ely:paper}.
These are the only known ``magnetic'' flows without
periodic orbits on an energy level.
\end{Example}
\noindent {\bf Case 2:}{\em~ The form $\omega$ is exact near the energy level.}
In this case we should distinguish between the cases $\dim V=4$ and $\dim V \geq 6$.
When $\dim V=4$, the only known smooth example is the horocycle
flow described as a Hamiltonian system in Section
\ref{subsec:proofs} and in Example \ref{exam:hor-high} with $n=1$ and
$B=\Sigma$. To slightly generalize this
construction, note that a neighborhood of $ST^*\Sigma$ can be identified
with
$
V=\Gamma\backslash \mbox{\rm SU}(1,1)\times (1-\epsilon,1+\epsilon)
$,
where
$\Gamma=\pi_1(\Sigma)$, so that $H$ becomes the projection to the second
component. Then, instead of $\Gamma=\pi_1(\Sigma)$
we can take any
discrete subgroup such that the quotient
$\Gamma\backslash\mbox{\rm SU}(1,1)$
is compact and smooth. As follows from Remark \ref{rmk:hms}, the flow on
$\Gamma\backslash\mbox{\rm SU}(1,1)$
generated by the action of a unipotent subgroup of
$\mbox{\rm SU}(1,1)$ is the Hamiltonian flow of $H$ on $\{ H=1 \}$
with respect to some symplectic form.
Finally note that by Remark \ref{rmk:hms} the flows on three--manifolds
constructed by G. Kuperberg and described above in Section
\ref{subsec:vol} can also be thought of as Hamiltonian flows on some
symplectic manifolds. This gives a class of smooth Hamiltonian flows
on symplectic manifolds with a finite number of closed orbits on a given
energy level or $C^1$-flows without periodic orbits. It is not known
if such a flow on $S^3$ can be obtained by a $C^2$-embedding of
$S^3$ into ${\mathbb R}^4$.
When $\dim V\geq 6$, the only known examples are essentially those
described in Section \ref{subsec:seif-ham} or those obtained by
iterations of their constructions. In other words, beginning with a
flow with a finite number of periodic orbits one can eliminate these
orbits by using the plugs from \cite{gi:seifert,gi:seifert97} or
\cite{herman-fax}. The resulting manifold can also be used as the
core of a plug in the same way as the horocycle flow. As in
Remark \ref{rmk:proliferation}, these new
plugs can in turn be employed to construct Hamiltonian flows without
periodic orbits, etc.
Since the starting point of this method is a flow with
a finite number of periodic orbits, it makes sense to ask how many
such flows are known in addition to those on irrational ellipsoids.
For example, we have already mentioned non-simply connected
hypersurfaces in ${\mathbb R}^{2n}$ found by Laudenbach \cite{laud}.
To produce more examples, we can apply the same method as described
in the previous paragraph. Namely,
if a Hamiltonian flow with a finite number of periodic orbits can be
used as the core of the plug, the resulting ``plug'' will also
have only a finite number of periodic orbits inside of it. Thus by
inserting it into another flow with a finite number of periodic orbits,
we can create yet one more flow with the same property. For example, by
taking $S^1\subset {\mathbb R}^2$ as the core, Cieliebak, \cite{ciel}, constructed
embeddings $S^{2n-1}\subset{\mathbb R}^{2n}$, $2n\geq 4$, such that the pairs
of closed characteristics are linked and knotted in an essentially
arbitrary way. (Each plug results in two ``parallel'' closed orbits,
but there are no constraints on knotting of these pairs or their linking.)
\subsection*{Acknowledgements} CB wishes to thank M. Asorey and R. Stora for
interesting remarks.
{\it Introduction}.---Despite much experimental evidence of
pseudogap phenomena
in the underdoped cuprates, their microscopic mechanism is not
understood.\cite{randeria97}
However, a pairing precursor as the origin of the pseudogap is one
prominent possibility.\cite{exp}
An active endeavor has been to incorporate
strong pairing fluctuations to account for the pseudogap phenomena,
especially above the superconducting critical temperature
$T_c$.\cite{maly,varlamov}
In this report, we take a different tack
and study the order parameter fluctuation effects in the superconducting
state.
From a phenomenological standpoint, the pseudogap state can be considered
as a superconductor whose phase coherence is destroyed by strong phase
fluctuations whereas the gap is robust.\cite{Emery}
Therefore, it will bear similarity
to a superconducting state with strong order parameter phase and
amplitude fluctuations.
Here we do not attempt to reproduce pseudogap phenomena, since we
study fluctuations in the weak-coupling BCS theory below $T_c$ and do not
consider the vortex-pair unbinding transition, but
much of the qualitative trend is expected to pertain to the pseudogap state.
Using the effective low-energy theory approach,
we may describe the
problem with relatively few physical parameters and separate
the effects of order parameter phase and amplitude
fluctuations.\cite{ours,xo}
We address the following two issues concerning the fluctuation effects.
First, we examine the effect of the order parameter fluctuations on the
size of the spectral gap below $T_c$.
It has been shown that the fluctuations reduce the magnitude of the
order parameter and the critical temperature.\cite{varlamov,varlamov1,smith}
In this paper we also show the reduction in the
spectral gap over a wide temperature range. For simplicity we do not include
the Coulomb interaction
or disorder although they would alter the form of the order parameter
fluctuations and further modify our result if properly
included.\cite{varlamov,varlamov1}
Secondly, we examine the angular variation of the fluctuation effect.
Angle-resolved photoemission spectroscopy (ARPES) data on underdoped cuprates
show that above
or below $T_c$ the shape of the gap near the node significantly deviates
from the simple $d$-wave shape.\cite{norman,mesot}
We show that this
may be due to the fluctuation of the phase rather than the amplitude
of the order parameter, and moreover that the amplitude fluctuation effect is
the strongest near the antinode. Also we discuss the angular variation
of the quasiparticle lifetime.
{\it Low-energy effective theory}.---We consider the weak-coupling mean-field
BCS theory
in which the pairing potential gives
a $d$-wave order parameter and is effective only within a thin momentum
shell of characteristic thickness $2\Lambda \ll k_F$ around the Fermi
surface; this shell constitutes the effective fermion Hilbert space.
For convenience, we coarse-grain the momentum shell into small boxes
and label them with an angular variable $\phi$.
The effective action of the fermions and the order
parameter is
\begin{eqnarray}
S_{\rm eff} &=&\int d^2x \int_0^\beta d\tau \left[\sum_{\phi,\sigma}
c^{\dag}_\sigma(\phi ;{\bf x},\tau)
\left(\partial_\tau -{\nabla^2 \over 2m} -\mu\right)
c_\sigma(\phi ;{\bf x},\tau)\right. \nonumber \\
& & + \sum_{\phi} \Psi^*({\bf x},\tau)
w({\phi})
c_\downarrow (\phi ;{\bf x},\tau)
c_\uparrow (\phi+\pi ;{\bf x},\tau) +{\rm h.c.} \nonumber \\
& & \left. -{1\over g}\Psi^*({\bf x},\tau)\Psi({\bf x},\tau)
\right]~,
\end{eqnarray}
where $\Psi$ is the superconducting order parameter introduced via
Hubbard-Stratonovich transformation to decouple the pairing interaction.
In the above we assume a pairing potential of the form
$V(\phi ,\phi^\prime )=g~w(\phi)w(\phi^\prime)$
where $w(\phi) = \cos 2\phi $ and $g<0$ which produces a $d$-wave
order parameter. It is understood that in writing
$c_\sigma(\phi ;{\bf k},\tau)$, the momentum $\bf k$ lives only inside
the small box labeled by the angular variable $\phi $ near the Fermi
surface.
In order to explicitly separate the order parameter phase and amplitude
degrees of freedom, we re-express $\Psi({\bf x},\tau)=\Delta({\bf x},\tau)
e^{i\theta({\bf x},\tau)}$ where $\Delta({\bf x},\tau)$ takes a real value.
In the mean-field approximation, we replace $\Delta({\bf x},\tau)$
with $\Delta_0$ and obtain a $d$-wave gap $\Delta(\phi)=\Delta_0
\cos 2\phi $ using the following self-consistent gap equation:
\begin{equation}
{1\over |g|} = T\sum_{\omega}\sum_{\phi ,{\bf k}}
{w^2(\phi) \over \omega ^2 +\xi_{\bf k}^2 +\Delta^2(\phi)}~,
\label{gapeq}
\end{equation}
where $\xi_{\bf k}=k^2/2m -\mu $. The momentum summation above is
constrained by the condition $|\xi_{\bf k}| < v_F \Lambda $.
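To orient the reader (a standard weak-coupling estimate, not needed in what follows), the Matsubara sum in Eq. (\ref{gapeq}) gives $T\sum_\omega (\omega^2+E^2)^{-1}=\tanh(E/2T)/2E$ with $E=\sqrt{\xi_{\bf k}^2+\Delta^2(\phi)}$, so at $T=0$, with $N(0)$ the normal-state density of states at the Fermi level,
\begin{equation*}
\frac{1}{|g|}=N(0)\left\langle w^2(\phi)\,
\sinh^{-1}\!\frac{v_F\Lambda}{\Delta_0|w(\phi)|}\right\rangle_\phi
\approx\frac{N(0)}{2}\,\ln\frac{2v_F\Lambda}{\Delta_0}\,,
\end{equation*}
where the factor $1/2$ is the angular average $\langle\cos^2 2\phi\rangle_\phi$; hence $\Delta_0\sim v_F\Lambda\, e^{-2/N(0)|g|}$ up to a prefactor of order unity.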
Here we consider the fluctuation around the mean-field
value and re-express $\Delta({\bf x},\tau) = \Delta_0+d({\bf x},\tau)$.
Then we perform a gauge transformation
$ \psi_\sigma({\bf x},\tau) = c_\sigma({\bf x},\tau)
e^{-i\theta ({\bf x},\tau)/2}$, to couple the phase fields to the
fermions explicitly.
The resulting effective action is expressed in terms of the Nambu
spinor notation,
$\hat{\psi} = (\psi_{\uparrow}, \psi^{\dag}_{\downarrow}) $, as
$S_{\rm eff}= S_0 + S_I$,
with
\begin{eqnarray}
S_0&=& T\sum_{\omega}
\sum_{\phi ,{\bf k}} \hat{\psi}^{\dag} \hat{G}_0^{-1} \hat{\psi}
\\ \nonumber
&& +\int_0^\beta d\tau
\int d^2x \big{\{} \, {n_f \over 8m} \,[\nabla \theta ({\bf x},\tau )]^2
+{1\over g} [d({\bf x},\tau)]^2 \big{\}}
\label{Act0}
\end{eqnarray}
and
\begin{eqnarray}
S_I &=& T\sum_\omega T\sum_\nu \sum_{\phi, \bf k,q}
\hat{\psi}^{\dag}(\phi ; {\bf k},\omega)\Big{\{}
{1\over 2}[-\nu +i{\bf v}_F(\phi)\cdot {\bf q}
]\theta({\bf q},\nu) \nonumber \\
&& +w(\phi)
\hat{d}({\bf q},\nu)\Big{\}}\hat{\psi}(\phi ; {\bf k-q},\omega -\nu)
~.
\label{ActI}
\end{eqnarray}
Here
\begin{equation}
\hat{G}_0^{-1} = \left(
\begin{array}{cc}
i\omega - \xi_{\bf k} & \Delta_0 w(\phi) \\
\Delta_0 w(\phi) & i\omega + \xi_{\bf k} \\
\end{array}
\right),
\end{equation}
and $ \hat{d}(\nu , {\bf q}) =\hat{\sigma}_x~{d}(\nu , {\bf q})$,
with $\hat{G}_0$ the bare Green's function for the neutral fermions.
In the above, we approximate $\xi_{\bf k}=k^2/2m -\mu $ as
$\xi_{\bf k} \approx v_F( |{\bf k}| -k_F)$ and ${\bf v}_F(\phi)$
is the Fermi velocity in the $\phi$ direction.
In building this effective theory
we have not considered vortex pair unbinding which
leads to the Kosterlitz-Thouless transition.
This is justified well outside the fluctuation regime. We also assume that
we are in the temperature range where the BCS mean-field theory is justified,
namely, that $\delta T/T_c \gg \Delta_0/E_F $
in a two dimensional clean superconductor.\cite{AL}
One thing we observe from the form of the effective theory is that
the strength of the coupling between fermions and the amplitude
fluctuations has an
angle-dependence; the amplitude fluctuation effect
is suppressed near
the gap nodes, as is evident in Eq. (\ref{selfE0}) where the
self-energy correction is multiplied by a factor of $w^2(\phi)$.
The strength of the coupling to the phase fluctuations is not suppressed
at the node, however.
In studying the finite temperature superconducting to normal state
transition, it should be sufficient to consider only the static
fluctuations of the phase, and we may suppress the time-dependence
in $\theta $ and $d$ and retain only the spatial fluctuations.
Eq. (\ref{ActI}) is then modified as
\begin{eqnarray}
S_I &=& T\sum_\omega \sum_{\phi, \bf k,q}
\hat{\psi}^{\dag}(\phi ;{\bf k},\omega)\Big{\{}
{1\over 2}i{\bf v}_F(\phi)\cdot {\bf q}
\theta({\bf q}) \nonumber \\
&& +w(\phi)
\hat{d}({\bf q})\Big{\}}\hat{\psi}(\phi ; {\bf k-q},\omega)
~.
\label{ActIm}
\end{eqnarray}
The simplified form above, however, does not produce reliable results
near zero temperature.
{\it Fermion single-particle properties}.---In evaluating the quasiparticle
self-energy by perturbative expansions,
we take advantage
of the fact that the effective theory resides in the thin shell around
the large Fermi surface and select the diagrams which are of leading
order in
$\Lambda /k_F$,
which amounts to summing over the ring diagrams
in calculating the order parameter correlation function.
The fermion self-energy can be obtained self-consistently from the
Dyson equation.
We first evaluate the correlation functions of the amplitude
fluctuations:
\begin{eqnarray}
\langle d({\bf q})~d({-\bf q})\rangle _{\rm ring} &=&
{g~T\over 1 + {g}\Pi_{dd}({\bf q},0)} ~,
\label{ddcor}
\end{eqnarray}
where
\begin{eqnarray}
\Pi_{dd}({\bf q},\nu) &=&
{1\over 2}T\sum_{\omega }\sum_{\phi , \bf k}
w^2(\phi) {\rm Tr} \left[ \hat{G}_0(\phi ;{\bf k},\omega)
\hat{\sigma}_x \right. \nonumber \\
&& \left. \times \hat{G}_0(\phi ;{\bf k+q},\omega+\nu)\hat{\sigma}_x
\right] ~.
\label{Pdd}
\end{eqnarray}
From Eq. (\ref{ddcor}) we obtain
$\langle d({\bf q})~d({-\bf q})\rangle _{\rm ring}={T/( a+b~q^2)}$
where
the temperature-dependent coefficients $a$ and $b$ can be evaluated
by carefully expanding $\Pi_{dd}$ in $\bf q$ from Eq. (\ref{Pdd}).
If we only consider the spatial fluctuations in the order parameter,
the $\langle d~\theta\rangle $ terms are zero
in the Gaussian approximation.
Also the phase fluctuation has the following well-known correlation
function\cite{ours}
\begin{equation}
\langle \theta({\bf q})\theta({\bf -q})\rangle _{\rm ring}=
{4mT \over n_s(T)~q^2}~,
\end{equation}
where $n_s(T)$ is the superfluid density at temperature $T$.
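At the Gaussian level this can be read off from the stiffness term in Eq. (\ref{Act0}): the quadratic action $({n_f/8m})\int(\nabla\theta)^2$ gives, in Fourier space,
\begin{equation*}
\langle \theta({\bf q})\theta({\bf -q})\rangle = \frac{4mT}{n_f\, q^2}\,,
\end{equation*}
and the ring-diagram resummation replaces the bare density $n_f$ by the temperature-dependent superfluid density $n_s(T)$.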
Now we can determine the quasiparticle self-energy correction using
the self-consistent Dyson equation, neglecting the vertex corrections:
\begin{eqnarray}
\hat{\Sigma}(\phi ;{\bf k}, \omega) &\approx &
\sum_{\bf q} \left\{ {1\over 4} \left[ {\bf v}_F(\phi)\cdot {\bf q} \right] ^2
\langle \theta({\bf q}) \theta({\bf - q})\rangle _{\rm ring}
\right. \\ \nonumber
&& + w^2(\phi) \langle d({\bf q}) d({\bf - q}) \rangle _{\rm ring}
\bigg{\}} \hat{G}(\phi ;{\bf k-q}, \omega)~,
\label{selfE0}
\end{eqnarray}
where $\hat{G}$ is the full fermion Green's function, given
self-consistently by $\hat{G}^{-1} = \hat{G}^{-1}_0 - \hat{\Sigma}$.
In general the self-energy has both a momentum and frequency dependence,
but we focus on the behavior of the self-energy near the Fermi
surface, assuming that it varies smoothly near the Fermi surface. Therefore
we neglect the $\xi_{\bf k}$-dependence so that the only
momentum dependence is through the angle $\phi$ on the
Fermi surface.
Then we can approximately obtain the self-energy:
\begin{eqnarray}
\hat{\Sigma}(\phi ,\omega) &\approx & \left\{
{4mT\over n_s(T)}\,{1\over 16\pi} \ln \left[
{\Lambda ^2 + \tilde{\Delta}^2(\phi)+\tilde{\omega} ^2 \over
\tilde{\Delta}^2(\phi)+\tilde{\omega} ^2} \right] \right. \nonumber \\
&& +\left. \sum_{\bf q} {T~w^2(\phi)\over a+b~q^2}~{1\over
\tilde{\omega} ^2 +({\bf v}_F(\phi)\cdot {\bf q})^2 +\tilde{\Delta}^2(\phi)}
\right\} \nonumber \\
&& \times \left(
\begin{array}{cc}
-i\tilde{\omega} & \tilde{\Delta}(\phi) \\
\tilde{\Delta}(\phi) & -i\tilde{\omega} \\
\end{array} \right),
\label{selfE}
\end{eqnarray}
where $\tilde{\omega}$ and $\tilde{\Delta}$ can be calculated
self-consistently by
$\hat{G}^{-1}(\phi ;{\bf k},\omega) = \hat{G}_0^{-1}(\phi ;{\bf k},\omega)
-\hat{\Sigma}(\phi, \omega)$.
From the self-energy obtained in Eq. (\ref{selfE}), by analytically
continuing
the frequency $i\omega \rightarrow
\omega +i\eta $,
we can calculate various single-particle
properties such as density of states (DOS), spectral functions,
and single-particle
scattering rates. In this paper we focus on the DOS, since
it is gauge invariant and measurable via tunneling spectroscopy\cite{sts}
or momentum-integrated ARPES data\cite{photo}.
We are especially interested in the
angle-resolved DOS:
\begin{equation}
N(\phi ,\omega) = -{1\over \pi} {\rm Im}\int d\xi _{\bf k}~ {\rm Tr}~
\hat{G}(\phi ;{\bf k},\omega)~,
\end{equation}
as it gives information about the angular variation of the fluctuation
effect.
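As an orientation, carrying out the $\xi_{\bf k}$ integral for a Green's function of the matrix form above yields the familiar BCS expression $N(\phi,\omega)/N_0 = {\rm Re}\,[\tilde{\omega}/\sqrt{\tilde{\omega}^2-\tilde{\Delta}^2(\phi)}]$. The sketch below (not part of the paper's calculation) evaluates this form with the self-consistent $\tilde{\omega}$, $\tilde{\Delta}$ replaced by their bare values $\omega+i\eta$ and $\Delta_0\cos 2\phi$; this is an illustrative simplification, not the self-consistent result plotted in the figures.

```python
import cmath
import math

def dos(omega, delta, eta=1e-6):
    """Angle-resolved DOS normalized to the normal state, in the standard
    BCS form Re[w / sqrt(w^2 - Delta^2)] with w = omega + i*eta.
    The renormalized tilde-omega and tilde-Delta are replaced by bare
    values purely for illustration."""
    w = omega + 1j * eta
    return (w / cmath.sqrt(w * w - delta * delta)).real

def gap(phi, delta0=1.0):
    # bare d-wave gap on the Fermi surface
    return delta0 * abs(math.cos(2.0 * phi))

# far above the gap the DOS approaches the normal-state value
print(dos(10.0, gap(0.0)))  # approximately 1.005
```

At the antinode the DOS vanishes inside the gap and peaks just above it, while at the node ($\phi=\pi/4$, where the gap closes) it stays at the normal-state value, reproducing the qualitative angular variation discussed here.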
Throughout this paper, we set the relevant energy scales
$\Lambda \approx 10\Delta _0(T=0)/v_F $ and $E_F \approx 5 \Lambda v_F$ so that
we are well within the BCS weak-coupling regime; these relative energy scales
correspond to a pairing strength much stronger than in ordinary
superconductors but significantly weaker than in the cuprates.
With the energy scales so chosen, we may estimate the regime of the
validity of the mean-field theory. If we apply the Ginzburg criterion,
namely, $|\Delta_0(T)|^2 \gg \langle d({\bf x})~d({\bf x})\rangle $,
which may be estimated from Eq. (\ref{ddcor}),
the mean-field theory breaks down only near $T/T_c \sim 0.98 $.
Therefore, the BCS framework is reliable in most of the temperature
range that we consider.
In Fig. \ref{TDOS} we show the total DOS. As the temperature
increases, the DOS peak is strongly smeared. Very close to
$T_c$, the DOS peak has almost disappeared, and the spectral gap is
manifested only by the depletion of the DOS around $\omega =0$ relative to
the normal-state DOS.
Figure \ref{peak} shows the DOS peak position as a function of temperature;
in most of the temperature range, we can interpret the peak position roughly
as the spectral gap.
This figure shows that
the spectral gap is reduced in a wide temperature range due to the order
parameter fluctuations.
Near $T/T_c\approx 0.98$, the DOS peak structure has almost disappeared,
and the DOS maxima can no longer be interpreted as the spectral gap.
In this case we must therefore estimate the size of the spectral gap
from the width of the DOS depletion.
It is difficult to study the evolution of the spectral gap
through $T_c$ in this framework because the mean-field theory breaks down
sufficiently close to $T_c$ as discussed above.
The result near zero temperature is also unreliable
because the time dependence of the fluctuations has been neglected.
The angle-dependent DOS peak near $T_c$ is shown in Fig.
\ref{dp45}.
At $T\ll T_c$, the DOS peak contour follows the $d$-wave shape.
As the temperature approaches $T_c$, we observe that the DOS
is widely smeared to low-energy states especially near the antinode
($\phi=0$).
Figure \ref{dp45} shows that
the shape of the angle-resolved gap (DOS peak curve) is deformed
from the original $d$-wave shape near the node. We argue that the
downward bending of the DOS peak curve near the node ($\phi = \pi/4$)
is due to the phase fluctuations since the amplitude fluctuations alone
do not cause the downward bending as illustrated in the same figure.
We find that the angle-dependence of the spectral gap near the node is
\begin{eqnarray}
|\tilde{\Delta}(\phi)| &\approx & \Delta_0(T) |\cos(2\phi)| \\
&&~\times \left\{ 1-{mT\over2\pi n_s(T)}
\ln \left[{\Lambda\over \Delta_0(T)|\cos(2\phi)|}
\right] \right\}~, \nonumber
\end{eqnarray}
and the slope of the gap near the node is reduced.
This resembles the shape of the gap obtained by ARPES on
underdoped $\rm Bi_2Sr_2CaCu_2O_{8+\delta}$ (Bi2212) in the superconducting
state,\cite{mesot} although the microscopic origin of that deformation
is not well understood.
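A minimal numerical sketch of the corrected gap formula above, with the fluctuation strength $c = mT/[2\pi n_s(T)]$ and the ratio $\Lambda/\Delta_0$ treated as free illustrative parameters (the values below are not fitted to anything in the paper):

```python
import math

def gap_corrected(phi, delta0=1.0, c=0.05, lam=10.0):
    """Spectral gap near (but not at) the node, including the logarithmic
    fluctuation correction of the equation above.  c plays the role of
    m*T/(2*pi*n_s(T)) and lam of Lambda/delta0; both are illustrative."""
    bare = delta0 * abs(math.cos(2.0 * phi))
    return bare * (1.0 - c * math.log(lam / bare))
```

Because the logarithm grows as $\phi\to\pi/4$, the fractional reduction of the gap is largest near the node, which is precisely the downward bending of the DOS peak curve and the reduced nodal slope described in the text.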
Figure \ref{rate} shows the angular variation of the scattering rate due to
the order parameter fluctuations.
We observe that the maximum scattering rate occurs near the antinode
and also that the rate decreases as one approaches the node.
This variation is due to the angular dependence of the order parameter
magnitude fluctuations.
This feature may contribute to the anisotropy of the quasiparticle
scattering rate in cuprate superconductors.\cite{photo}
{\it Discussions and Conclusions}.---Here we discuss the qualitative
effects of amplitude and phase fluctuations.
In Fig. 3, we find that if we omit the phase fluctuation effect,
the apparent gap is enhanced. This is because the amplitude
fluctuations tend to increase the gap magnitude, which can be understood
in the Fermi-liquid reference frame as follows: the fermion self-energy
correction due to the amplitude fluctuations can be roughly estimated as
\begin{eqnarray}
\Sigma ({\bf p}, \omega) & \approx & -\int d \nu d^2 q
G({\bf p -q},\omega -\nu )~\langle \Delta ({\bf q},\nu)\Delta ({-\bf q},-\nu)
\rangle \nonumber \\
&\approx & {1\over i\omega + \xi_p} \langle \Delta (x) \Delta (x)\rangle ~,
\end{eqnarray}
and therefore the effective spectral gap is enhanced by the fluctuations as
$|\Delta_{\rm eff}|^2 = \langle \Delta (x) \Delta (x)\rangle
=|\Delta_0|^2+\langle \delta\Delta (x) \delta\Delta (x)\rangle$.
The effect of the phase fluctuations can be viewed as a Doppler shift of
the fermionic spectrum by ${\bf k}_F\cdot {\bf v}_s$, where ${\bf v}_s \sim
\nabla \theta /m$. Due to the thermally fluctuating superfluid velocity,
the DOS near the gap edge is shifted, since the energy levels at the gap
nodes are raised. As a result, more states are occupied near the nodes,
and hence the slope of the gap at the nodes decreases, as shown in Fig. 3.
On including the vortex pair unbinding transition of the BKT type,
which gives stronger phase fluctuation effects, we can obtain a
Fermi arc-like phenomenon.\cite{ours,franz98}
In order to obtain the correct effect on the size of the spectral gap,
both the phase and amplitude fluctuations have to be self-consistently
taken into account. The total effect is a reduction of the spectral gap,
as shown in Figs. 2 and 3.
However, the spectral gap
may not be equal to the order parameter magnitude especially if the
fluctuation is strong,\cite{kosztin} and hence more careful
study is needed to separate these two quantities.
The results presented in this report are equally well applicable to any
unconventional superconducting symmetry. In principle, any superconductor
would have a window of temperatures near $T_c$ where such fluctuations are
visible, depending on the pairing strength and the superfluid density.
In the case of the underdoped cuprates,
however, it would be essential to include the effect of vortex pair
unbinding in the pseudogap state because of the small superfluid density.
The results of this report may nevertheless pertain to the superconducting
state. Indeed, a recent
observation of the deformed gap shape
in Bi2212,\cite{mesot} which is only observed in
underdoped regime,
may be related to the order parameter fluctuation effects,
considering that the phase fluctuations are more important in underdoped
cuprates due to reduced superfluid density. Since the
microscopic origin of this deformation is not understood, further experimental
investigation of the temperature variation of the gap anisotropy would
be desirable.
Some of the above features are shared by other
theoretical results for the normal-state counterpart of this problem.
For instance,
the form of the density of states obtained above is similar to that above $T_c$
when the Gaussian pairing fluctuations are incorporated.\cite{old}
Also a similar but much more pronounced deformation of the $d$-wave spectral
gap was obtained above $T_c$ using a self-consistent conserving
approximation.\cite{jan}
The author thanks Alan Dorsey and Rob Wickham for helpful discussions and
comments.
This work was supported by the
National High Magnetic Field Laboratory and by NSF grant DMR 96-28926.
\section*{ACKNOWLEDGMENTS}
This work is supported by the National Natural Science
Foundation of China, the Fundamental Research Foundation of Tsinghua
University, and a special grant from the State Commission of Education
of China.
\null\vspace{0.5cm}
\section*{APPENDIX}
Here we give the explicit expressions for
$\Gamma^{(2b)}_{\mu\nu}(\Pi)$, $\Gamma^{(2c)}_{\mu\nu}(\Pi)$,
$\Gamma^{(2d)}_{\mu\nu}(\Pi)$, $\Sigma_{\mu}(\Pi)$,
$\Gamma^{(2b)}_{\mu\nu}(\Pi_t)$, $\Gamma^{(2c)}_{\mu\nu}(\Pi_t)$,
$\Gamma^{(2d)}_{\mu\nu}(\Pi_t)$, and $\Sigma_{\mu}(\Pi_t)$, which
can be obtained by direct calculation of the
Feynman diagrams in Figs.~2(c)-2(e). The explicit expressions are
$$
\Gamma^{(2b)}_{\mu\nu}(\Pi)=-c_f\frac{M_Wm_tm'_t}
{12\pi^2F_{\Pi}}\sqrt{2\sqrt{2}\pi
G_F\alpha}\bigl\{ 2[(p_{e^+}-p_{\bar{\nu_e}})_{\mu}
$$
$$
\hspace{0.2cm}(p_{e^+}-p_{\bar{\nu_e}})_{\nu}C_{21}+p_{\gamma\mu}p_{\gamma\nu}C_{22}
+(p_{e^+}-p_{\bar{\nu_e}})_{\mu}p_{\gamma\nu}C_{23}
$$
$$
\hspace{0.2cm}+p_{\gamma\mu}(p_{e^+}-p_{\bar{\nu_e}})_{\nu}C_{23}+g_{\mu\nu}C_{24}]
-g_{\mu\nu}B_0(p_{\gamma},m_b,m_b)
$$
$$
\hspace{0.2cm}-g_{\mu\nu}m^2_tC_0+(2p_{e^+}-2p_{\bar{\nu_e}}+p_{\gamma})_{\mu}
(p_{e^+}C_{11}-p_{\bar{\nu_e}}C_{11}
$$
$$
\hspace{0.2cm}+p_{\gamma}C_{12})^{}_{\nu}+(p_{e^+}C_{11}-p_{\bar{\nu_e}}C_{11}+p_{\gamma}
C_{12})_{\mu}(2p_{e^+}-2p_{\bar{\nu_e}}+p_{\gamma})_{\nu}
$$
$$
\hspace{0.2cm}-(p_{e^+}C_{11}-p_{\bar{\nu_e}}C_{11}+p_{\gamma}
C_{12})^{\rho}(2p_{e^+\rho}
g_{\mu\nu}-2p_{\bar{\nu_e}\rho}g_{\mu\nu}
$$
$$
\hspace{0.2cm}+p_{\gamma\rho}g_{\mu\nu}+i\varepsilon_{\mu\rho\nu\sigma}
p_{\gamma}^{\sigma})+[2(p_{e^+}-p_{\bar{\nu_e}})_{\mu}(p_{e^+}
-p_{\bar{\nu_e}})_{\nu}$$
$$
\hspace{0.2cm}-(p_{e^+}-p_{\bar{\nu_e}})^{2}g_{\mu\nu}+(p_{e^+}-
p_{\bar{\nu_e}})_{\mu}p_{\gamma\nu}-g_{\mu\nu}p_{e^+}.p_{\gamma}
$$
$$
\hspace{0.2cm}+g_{\mu\nu}p_{\bar{\nu_e}}.p_{\gamma}+p_{\gamma\mu}(p_{e^+}
-p_{\bar{\nu_e}})_{\nu}-i\varepsilon_{\mu\rho\nu\sigma}
(p_{e^+}-p_{\bar{\nu_e}})^{\rho}
$$
$$
\hspace{0.2cm}p_{\gamma}^{\sigma}]C_0\bigr\},\hspace{4.52cm}
\eqno(A1)
$$
$$
\Gamma^{(2c)}_{\mu\nu}(\Pi)=
c_f\frac{M^{}_Wm_tm_t'}{6\pi^2F^{}_{\Pi}}\sqrt{2\sqrt{2}\pi G^{}_F\alpha}
\bigl\{2[(p_{e^+}-p_{\bar{\nu_e}})_{\mu}\hspace{0.5cm}
$$
$$
\hspace{0.2cm}(p_{e^+}-p_{\bar{\nu_e}})_{\nu}C^*_{21}+p_{\gamma\mu}
p_{\gamma\nu}C^*_{22}+(p_{e^+}-p_{\bar{\nu_e}})_{\mu}p_{\gamma\nu}C^*_{23}
$$
$$
\hspace{0.2cm}+p_{\gamma\mu}(p_{e^+}-p_{\bar{\nu_e}})_{\nu}C^*_{23}
+g_{\mu\nu}C^*_{24}]+(p_{e^+}C^*_{11}-p_{\bar{\nu_e}}C^*_{11}
$$
$$
\hspace{0.2cm}+p_{\gamma}C^*_{12})_{\mu}(2p_{e^+}-2p_{\bar{\nu_e}}
+p_{\gamma})_{\nu}
-p_{\gamma\mu}(p_{e^+}C^*_{11}-p_{\bar{\nu_e}}C^*_{11}
$$
$$
\hspace{0.2cm}+p_{\gamma}C^*_{12})_{\nu}+g_{\mu\nu}(p_{e^+}.p_{\gamma}C^*_{11}
-p_{\bar{\nu_e}}.p_{\gamma}C^*_{11})
$$
$$
\hspace{0.4cm}-i\varepsilon_{\mu\rho\sigma\nu}[(p_{e^+}-p_{\bar{\nu_e}})
C^*_{11}+p_{\gamma}C^*_{12}]^{\rho}p_{\gamma}^{\sigma}\bigr\},
\eqno(A2)
$$
$$
\Gamma^{(2d)}_{\mu\nu}=c_f\frac{M_Wm_tm_t'}{4\pi^2F_\Pi}\sqrt{2\sqrt{2}\pi
G_F\alpha}\hspace{1cm}\hspace{2.4cm}
$$
$$
\hspace{0.2cm}\frac{B_1(p_t+p_{\bar{b}},m_t,m_b)+B_0(p_t+p_{\bar{b}},m_t,m_b)}
{M^2_W}\hspace{1cm}
$$
$$
\hspace{0.2cm}\bigl\{(p_{e^+}-p_{\bar{\nu_e}})_{\mu}(p_{e^+}
-p_{\bar{\nu_e}})_{\nu}-p_{\gamma\mu}p_{\gamma\nu}\hspace{2cm}
$$
$$
\hspace{0.2cm}-g_{\mu\nu}(p_{e^+}
-p_{\bar{\nu_e}})^2\bigr\}\hspace{4.6cm}
\eqno(A3)
$$
$$
-i\Sigma_{\mu}(\Pi)=
c_f\frac{M_Wm_tm_t'}{8\pi^2F_{\Pi}}\sqrt{2\sqrt{2}G_F}
(p_t+p_{\bar{b}})_{\mu}\hspace{2.4cm}
$$
$$
\hspace{0.4cm}\bigl\{B_1(p_t+p_{\bar{b}},m_t,m_b)+B_0(p_t+p_{\bar{b}},m_t,m_b)
\bigr\}
\hspace{2cm}
\eqno(A4)
$$
$$
C_{ij}=C_{ij}(p_{\bar{\nu_e}}-p_{e^+},-p_{\gamma},m_t,
m_b,m_b)~~~~
$$
$$
C^*_{ij}=C_{ij}(p_{e^+}-p_{\bar{\nu_e}},p_{\gamma},
m_b,m_t,m_t)\,,
\eqno(A5)
$$
where $C_{ij}$'s are the standard 3-point functions given in
Ref.\cite{PV}.
The expressions for $\Gamma^{(2b)}_{\mu\nu}(\Pi_t)$,
$\Gamma^{(2c)}_{\mu\nu}(\Pi_t)$, $\Gamma^{(2d)}_{\mu\nu}(\Pi_t)$,
and $\Sigma_{\mu}(\Pi_t)$ can be obtained by simply replacing $m'_t$ by
$m_t-m'_t$, $F_{\Pi}$ by $F_{\Pi_t}$, and taking $c_f=1$.
\null\vspace{0.5cm}
\section{Introduction}
\footnotetext[1]{Permanent address:
Department of Physics, Beijing Normal University,
Beijing 100875, China.}
Recent advances in cavity quantum electrodynamics have
significantly expanded our understanding of the interaction between
matter and the quantized electromagnetic field \cite{Mey92,Har92}.
A central topic in these studies is the theoretical and
experimental investigation of situations in which
a single atom interacts with a small number of modes of the radiation field
in high-$Q$ optical or microwave resonators. In such a setting, the
dynamical behavior of the atom is evidently very different from the
free-space situation and one can observe phenomena such as inhibited and
enhanced spontaneous emission \cite{Pur46,Kle81} or Rabi oscillations between
two electromagnetically coupled states \cite{Bru96}. A natural extension of
these studies concerns the modification of the interaction between two atoms
in a cavity environment. As the interatomic interaction is ultimately
mediated by the electromagnetic field, one can expect drastic effects
also in this case. The interest in this problem has recently grown,
stimulated in part by the remarkable experiments of Refs.\ \cite{Eic93}
and \cite{DeV96}. For example, several recent articles have examined
the mutual coherence of the two atomic dipoles under various
circumstances \cite{Koc95,Mey97,Rud98,Yeo98}.
In a further study the modification of the near-resonant dipole-dipole
interaction between two atoms confined to a cavity was investigated in
detail \cite{Gol97}. As a main result it was shown that the familiar concept
of the dipole-dipole potential ceases to be meaningful under certain
circumstances. The purpose of the present paper is to continue and
extend this work, the emphasis now being put on the investigation
of the actual dynamical behavior of the atoms. In particular, we examine
the atomic center-of-mass motion under the influence of their interaction
with the cavity field. In order to work out basic aspects of the problem we
concentrate here on the model of a short and closed optical resonator
in which the atoms interact exclusively with a single damped standing-wave
mode of the electromagnetic radiation field. An initially excited atom will
then spontaneously emit a photon into the cavity mode and subsequently
reabsorb it. Consequently, it experiences a random walk in momentum space,
i.e.\ heating. Due to photon exchange the atom can also interact with and
excite its partner in the cavity. These processes will cease, of course, as
soon as the photon escapes the resonator due to cavity losses.
The analysis of this problem shows that, contrary to what one might
expect intuitively, the presence of the second atom does not simply lead
to some quantitative modifications in the heating and decay process of
the first. Rather, it causes qualitative changes in the dynamical behavior
of the system. In particular, one observes a tendency of the system to
settle into so-called ``dark" or ``quasi-dark" states. These dark states
consist of superpositions of states in which the initial excitation is stored
in either atom 1 or atom 2, i.e., entangled states of the atoms-cavity
system. Due to destructive quantum interferences
these superpositions are completely --- or to a large degree --- dynamically
decoupled from the states in which the photon is present in the cavity.
Thus they are immune --- or almost immune --- to photon decay. Atoms in
these dark states can be thought of as a new kind of ``molecule'' largely
delocalized and bound by the cavity electromagnetic field.
The focus of the present article lies on an analysis of these dark
states, which can be viewed as a generalization of the antisymmetric
Dicke state of the theory of super- and subradiance \cite{Dic54}. To our
knowledge, the persistence of the entangled two-atom dark states under
the influence of the atomic center-of-mass motion has not been previously
discussed in the literature.
Section II introduces our model and establishes the notation. In order to
motivate the subsequent analysis, Sec.~III discusses some numerical examples
that illustrate the role of the dark states and demonstrate their
long lifetimes, even in the case of only approximate darkness. Section
IV gives a detailed analytical discussion of the dark states. We first
consider the dynamics of the atomic system in the Raman-Nath approximation
(RNA), where the atoms are treated as infinitely massive. This allows for a
very simple and transparent description of the effect. We then remove this
approximation and demonstrate that certain RNA dark states do remain
dark in the exact analysis. The decay rates of the other RNA dark states
are estimated, and the analytical results compared to numerical calculations.
A central result is that even though these states are only approximately dark,
they still have extremely long lifetimes. This should render the existence
of the quasi-dark states amenable to experimental observation, at least in
principle. Finally, further remarks and conclusions are given in Sec.~V.
\section{Model}
Our objective consists in studying the center-of-mass motion of two atoms
confined by a trapping potential and interacting with the electromagnetic
field inside a high-$Q$ cavity. In order to work out most clearly
some of the basic physical effects observable in this system we investigate in
the following an idealized model problem. Questions of experimental
realizability will be discussed in Sec.\ V.
We consider the one-dimensional motion of two two-level atoms of mass $M$
trapped inside an infinite square-well potential $V(x)$ with boundaries
at $x=0$ and $x=L$. The upper and lower internal atomic states $|e\rangle$ and
$|g\rangle$ are separated in energy by an amount of $\hbar\omega_0$. The atoms
which are treated as distinguishable are also placed inside a short and
closed electromagnetic cavity that is aligned with the atomic trap along the
$x$ axis. We assume the cavity characteristics to be such that the atomic
interaction with the cavity field can be described as a coupling to a single
mode. In particular, spontaneous photon emission into directions other than the
$x$ axis is disregarded. On the other hand, the damping of the relevant
cavity mode due to its coupling to the electromagnetic vacuum outside
the resonator is taken into account. Based on this description, the
Hamiltonian of the system is
\begin{equation}
H=H_a+H_c+H_r+H_{ca}+H_{cr}
\label{h}
\end{equation}
where $H_a$, $H_c$ and $H_r$ are the free Hamiltonians of the atoms,
the cavity mode and the vacuum modes, respectively. They are given by
\begin{equation}
H_a=\sum_{j=1}^2\left (\frac{\hat{p}_j^2}{2M} + V({\hat x}_j) +
\hbar\omega_0\sigma_j^ {\dagger}\sigma_j \right ),
\label{h1}
\end{equation}
\begin{equation}
H_c=\hbar\omega_c a_c^{\dagger} a_c,\quad H_r=\sum_{\mu}\hbar\omega_{\mu}
a_{\mu}^{\dagger} a_{\mu}.
\label{h2}
\end{equation}
Here, $\hat{p}_j$ is the center-of-mass momentum and ${\hat x}_j$ the
position of the $j$th atom along the
$x$-axis. The atomic pseudo-spin operators $\sigma_j$ are defined by
$\sigma_j=|g,j\rangle\langle e,j|$.
The annihilation operators for the cavity mode and the vacuum modes are
denoted $a_c$ and $a_\mu$, respectively, the mode frequencies are
$\omega_c$ and $\omega_{\mu}$. The interaction of the cavity mode with the
atoms and with the vacuum modes are described by the terms $H_{ca}$ and
$H_{cr}$. In the dipole and the rotating-wave approximation, they read
\begin{equation}\label{hca}
H_{ca}=\sum_{j=1}^2\hbar g\cos(kx_j+\phi)(\sigma_j^\dagger a_c+
\sigma_ja_c^\dagger),
\label{h3}
\end{equation}
\begin{equation}
H_{cr}=\sum_\mu\hbar(g_\mu^\ast a_c^\dagger a_\mu+g_\mu a_ca_\mu^\dagger),
\label{h4}
\end{equation}
where $g=(\hbar\omega_c/2\varepsilon_0 L_c)^{1/2}$ denotes the
atom-cavity coupling constant with $L_c$ the cavity length. For a planar
cavity the mode profile is cosine-shaped with wavevector $k$. The phase
angle $\phi$ characterizes the relative positioning between
cavity mode and atomic trap. The coupling constant between the cavity
mode and the $\mu$th vacuum mode is denoted $g_{\mu}$.
In discussing the atomic time evolution we will mostly be concerned with
situations in which the center-of-mass wave function is spread out over
a region of extension $\Delta x$ large in comparison to the cavity mode
wavelength $2\pi/k$ but small in comparison to the trap length $L$. For
small enough times the existence of the trap walls may thus be neglected.
Furthermore, it is assumed that the initial wave function can be
ascribed a well-defined momentum $(p_{01},p_{02})$ and that the
effects of the (small) momentum spread around this initial value may be
disregarded. From the form (\ref{h3}) of the atom-field coupling it follows
that a single-atom state with momentum $p$ is only coupled to states
with momenta $p\pm \hbar k$. In view of our initial condition
we thus introduce the notation $|(i_1,m_1),(i_2,m_2),n_c,\{n_{\mu}\}\rangle$
that denotes a state where atom $j$ has internal state $i_j$ and
momentum $q_{0j}+ m_j\hbar k$ with integer $m_j$. Here,
$q_{0j}=\mbox{mod}(p_{0j},\hbar k)$, i.e., $0\leq q_{0j} < \hbar
k$. The number of photons in the cavity
and the vacuum mode ``$\mu$'' are denoted $n_c$ and $n_{\mu}$, respectively.
In case only one excitation is present in the system and within the realm of
validity of the above approximations, the general expression for the system
state vector is thus given by
\begin{eqnarray}
|\Psi(t)\rangle&=&\sum_{m,n}\Big\{C_{1,m,n}(t)|(e,m),(g,n),0,\{0_\mu\}
\rangle\nonumber\\ && +C_{2,m,n}(t)|(g,m),(e,n),0,\{0_\mu\}\rangle\nonumber\\
&&+ C_{3,m,n}(t)|(g,m),(g,n),1,\{0_\mu\}\rangle\nonumber\\
&&+\sum_\mu C_{4,m,n,\mu}(t)|(g,m),(g,n),0,\{1_\mu\}\rangle\Big\}.
\label{p1}
\end{eqnarray}
We now proceed to eliminate the reservoir degrees of freedom in the
system equations of motion with the help of the Born-Markov
approximation. This introduces an exponential decay rate $\kappa/2=
\pi|g_{\mu}|^2$ and a frequency shift $\Delta_c$ in the dynamics of the
amplitudes $C_{3,m,n}$. For the following, we incorporate this shift into
the detuning $\Delta$ between the atomic resonance and the cavity frequency
and work in the interaction picture with respect to
$\omega_0$. The effective Hamiltonian time evolution of the system
before the photon escapes the cavity is then determined by
\begin{eqnarray} \label{C11}
i\dot{C}_{1,m,n}&=&\omega_{m,n}C_{1,m,n}+\frac{g}{2}(C_{3,m+1,n}
+ C_{3,m-1,n}),\\
i\dot{C}_{2,m,n}&=&\omega_{m,n}C_{2,m,n}+\frac{g}{2}(C_{3,m,n+1}
+ C_{3,m,n-1}), \label{C0} \\
\label{C22}
i\dot{C}_{3,m,n}&=&(\omega_{m,n}+\Delta-i\kappa/2) C_{3,m,n}
+\frac{g}{2}(C_{1,m+1,n}\\ &&+ C_{1,m-1,n}+
C_{2,m,n+1}+ C_{2,m,n-1})\nonumber
\end{eqnarray}
with
\begin{equation}
\omega_{m,n}=[(q_{01}+m\hbar k)^2+(q_{02}+n\hbar k)^2]/(2\hbar M)
\end{equation}
describing the influence of kinetic energy. From Eqs.\ (\ref{C11})-(\ref{C22})
one notices a further selection rule. For example, the set of
coefficients $C_{1,m,n}$ with $m,n$ both even, are only coupled among
each other and to $C_{2,m',n'}$, $m',n'$ odd, and $C_{3,m'',n''}$, $m''$ odd,
$n''$ even. Note also that Eqs.\ (\ref{C11})-(\ref{C22}) can be written
independently of the phase angle $\phi$. In the following we set $\Delta=0$
for convenience.
Another interesting situation arises if one takes the existence of the
atomic trap boundaries fully into account. In this case it is convenient
to expand the center-of-mass wave functions in terms of the eigenfunctions of
the atomic Hamiltonian (\ref{h1}), i.e., $2\sin(\pi q x_1/L) \sin(\pi r x_2/L)/L$,
$q,r\geq 1$, which can be thought of as specific superpositions of momentum
states with opposite wave vectors. In general, the coupling term $H_{ac}$
introduces transitions from a single-particle eigenstate $\psi_q^{g(e)}=
\sqrt{2/L}\sin(\pi q x_1/L)|g(e) \rangle$ to an infinite number of other
states $\psi_{q'}^{e(g)}$. Simple selection rules follow if one has
$k=N\pi/L$ with $N$ a positive integer and $\phi=0$. Under these conditions
one obtains couplings only between the single-atom wave functions
\begin{equation}\label{coup}
\dots \leftrightarrow \psi_{2N-q}^{g/e} \leftrightarrow \psi_{N-q}^{e/g}
\leftrightarrow \psi_q^{g/e} \leftrightarrow \psi_{q+N}^{e/g} \leftrightarrow
\psi_{q+2N}^{g/e} \leftrightarrow \dots\,
\end{equation}
with $1\leq q<N$. The coupling coefficients are all equal except for the one
between $\psi_{N-q}$ and $\psi_q$, which has the same magnitude but opposite
sign. After suitable identifications the equations of motion for the
probability amplitudes of the two-atom system can thus be cast into a form
identical to Eqs.\ (\ref{C11})-(\ref{C22}) apart from this sign peculiarity.
An important special case in the coupling scheme of expression
(\ref{coup}) arises if $q=N$. Under these circumstances the sequence
terminates at $\psi_q$, the part of the chain to its left being absent. This special
case is of particular importance in the discussion of exact dark states
beyond the RNA.
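The coupling scheme of expression (\ref{coup}) follows from the product formula $\cos(N\pi x/L)\sin(\pi qx/L)=\frac{1}{2}[\sin(\pi(q+N)x/L)-\sin(\pi(N-q)x/L)]$, with the index-$0$ mode vanishing identically. A small enumeration sketch (the helper names are ours, purely for illustration) makes the selection rules and the $q=N$ termination explicit:

```python
def neighbors(q, N):
    """Mode indices coupled to sin(pi*q*x/L) by cos(N*pi*x/L):
    cos(a)sin(b) = [sin(b+a) + sin(b-a)]/2, and the index-0 mode vanishes."""
    out = {q + N}
    if abs(q - N) >= 1:
        out.add(abs(q - N))
    return out

def chain(q, N, nmax):
    """All mode indices reachable from q, truncated at a cutoff nmax."""
    seen, todo = set(), [q]
    while todo:
        p = todo.pop()
        if p in seen or p > nmax:
            continue
        seen.add(p)
        todo.extend(neighbors(p, N))
    return sorted(seen)
```

For $N=3$, $q=1$ this reproduces the two-sided chain $\{1,2,4,5,7,8,10,\dots\}$, while for $q=N=3$ the chain is one-sided, $\{3,6,9,\dots\}$, since $\psi_0$ does not exist.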
\section{Numerical results}
In order to set the stage for the two-atom problem, let us first take a
brief look at its one-atom counterpart. With the help of the procedure used
to derive Eqs.\ (\ref{C11})-(\ref{C22}) we can obtain a similar set of
equations for the one-atom system,
\begin{equation} \label{CC1}
i\dot{C}_{1,m}=\omega_m C_{1,m}+\frac{g}{2}(C_{2,m+1}+C_{2,m-1}),
\end{equation}
\begin{equation}
i\dot{C}_{2,m}=(\omega_m+\Delta-i\kappa/2)C_{2,m}
+\frac{g}{2}(C_{1,m+1}+C_{1,m-1})
\label{C1}
\end{equation}
where the notations used here are defined in parallel to those for the two-atom
case. In particular, we now have $\omega_m=(q_0+m\hbar k)^2/(2\hbar M)$. The
excited and ground state amplitudes are denoted $C_1$ and $C_2$, respectively.
Equations (\ref{CC1}) and (\ref{C1}) are very similar in structure to those
used in the discussion of near-resonant scattering of two-level atoms from a
standing-wave laser field \cite{Kaz}. Physically, they describe the
atomic momentum spread during the interaction with the cavity mode. If we
imagine the standing wave mode as being composed of two counterpropagating
running waves we see that during an emission-absorption cycle the atomic
momentum can change by an amount of 0 or $2\hbar k$. The change depends
on whether the photon is emitted into and absorbed from the same running
wave mode or not. Successive cycles thus lead to an atomic momentum
spread, i.e.\ heating.
This is illustrated in Fig.\ \ref{fig1}, which shows momentum distributions
$P_m(\tau)=|C_{1,m}(\tau)|^2+ |C_{2,m}(\tau)|^2$ derived from
Eqs.\ (\ref{CC1}) and (\ref{C1}) as a function of the discrete
momentum index $m$ and the dimensionless time $\tau=\omega_{rec}t$, with
$\omega_{rec}=\hbar k^2/(2M)$ being the recoil frequency. These distributions
illustrate the effective Hamiltonian time evolution of the atom
before the photon escapes the cavity, governed by the nonhermitian
Hamiltonian
\begin{equation}
H_{eff} = H_a + H_c + H_{ca} - i\hbar \frac{\kappa}{2} a_c^\dagger a_c,
\end{equation}
$H_a$ and $H_{ca}$ referring now to a single two-level atom. The initial
conditions
for the wave function were chosen as $C_{1,m}=\delta_{m,0}$, $C_{2,m}=0$,
and $q_0=0$. Figures \ref{fig1}(a),(b) display the case of a lossless
cavity ($\kappa=0$) and a dimensionless atom-cavity coupling constant
$\Omega=g/2\omega_{rec}=50$. In Fig.\ \ref{fig1}(a), the
influence of the kinetic energy term $\hat{p}^2/2M$ is neglected
(the Raman-Nath approximation) and the momentum spread grows linearly in time
at a rate proportional to $\Omega\tau$. This should be compared to Fig.
\ref{fig1}(b), which is for the full model including the kinetic energy
terms. This illustrates the well-known fact that the RNA is
only valid for short enough times. Due to the increasing mismatch between
the photon energy and the atomic energy increment accompanying a photon
absorption, the width of the momentum distribution eventually stops growing
and begins to oscillate. The effects of cavity damping are illustrated in
Fig.\ \ref{fig1}(c) and (d), which again compare the momentum distributions
in the RNA and the full model, but for a moderate cavity damping rate
$\kappa'=\kappa/\omega_{rec}=20$, i.e., $\kappa'
/\Omega=0.4$. In this case the excited state
population is damped on a time scale approximately given by
$4/\kappa'$.\footnote{It should be noted that for large cavity damping
$\kappa'\gg\Omega/2$ the decay rate of the excited state population goes to
zero. This stabilization effect, however, is different in nature from
the two-atom dark states discussed below.}
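The one-atom heating dynamics described above can be reproduced with a minimal integration of Eqs.\ (\ref{CC1})-(\ref{C1}). The sketch below (ours, not the paper's numerics) works in the RNA ($\omega_m$ dropped), in units of $\omega_{rec}$, on a truncated momentum grid; the parameter values are illustrative rather than those of the figures.

```python
# RK4 integration of Eqs. (CC1)-(C1) in the Raman-Nath approximation,
# on a truncated grid m = -mmax..mmax.  In units of omega_rec,
# Omega = g/(2*omega_rec) and kappap = kappa/omega_rec.

def deriv(C1, C2, Omega, kappap):
    n = len(C1)
    get = lambda C, j: C[j] if 0 <= j < n else 0j        # truncated grid
    d1 = [-1j * Omega * (get(C2, m + 1) + get(C2, m - 1)) for m in range(n)]
    d2 = [-1j * Omega * (get(C1, m + 1) + get(C1, m - 1))
          - 0.5 * kappap * C2[m] for m in range(n)]
    return d1, d2

def rk4_step(C1, C2, dt, Omega, kappap):
    axpy = lambda A, B, s: [a + s * b for a, b in zip(A, B)]
    k1 = deriv(C1, C2, Omega, kappap)
    k2 = deriv(axpy(C1, k1[0], dt / 2), axpy(C2, k1[1], dt / 2), Omega, kappap)
    k3 = deriv(axpy(C1, k2[0], dt / 2), axpy(C2, k2[1], dt / 2), Omega, kappap)
    k4 = deriv(axpy(C1, k3[0], dt), axpy(C2, k3[1], dt), Omega, kappap)
    comb = lambda C, i: [c + dt / 6 * (k1[i][m] + 2 * k2[i][m]
                                       + 2 * k3[i][m] + k4[i][m])
                         for m, c in enumerate(C)]
    return comb(C1, 0), comb(C2, 1)

mmax = 12
n = 2 * mmax + 1
C1 = [0j] * n; C1[mmax] = 1.0 + 0j       # excited atom at rest (m = 0)
C2 = [0j] * n                            # no photon in the cavity
Omega, kappap, dt = 5.0, 0.0, 1e-3       # lossless cavity, illustrative
for _ in range(400):                     # integrate to tau = 0.4
    C1, C2 = rk4_step(C1, C2, dt, Omega, kappap)

norm = sum(abs(c) ** 2 for c in C1 + C2)   # conserved for kappap = 0
spread = 1.0 - abs(C1[mmax]) ** 2          # population scattered out of m = 0
```

In the lossless RNA case the central excited-state population follows the known Bessel-function solution, $|C_{1,0}(\tau)|^2=J_0^2(2\Omega\tau)$, so the population quickly spreads out of $m=0$, illustrating the heating.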
We now turn to the two-atom situation, with the goal of determining how
the previous results are modified when we insert a second atom into the cavity.
The dramatic changes brought about under these circumstances are illustrated in
Figs.\ \ref{fig2}(a)-(d), which show results of the numerical integration of
Eqs.\ (\ref{C11})-(\ref{C22}). They depict the momentum distribution of the
first atom before the photon escape,
$P^{(1)}_m(\tau)= \sum_{i=1,2,3;n} |C_{i,m,n}(\tau)|^2$, as a function
of $m$ and $\tau$ in both the RNA and the full model,
and in the absence or presence of cavity losses. The initial conditions
were chosen such that both atoms are at rest but
atom 1 is in the excited state, atom 2 is in the ground state and no
photon is present in either the cavity or the vacuum modes, i.e.,
\begin{equation}
C_{i,m,n}(t=0)=\delta_{i,1}\delta_{m,0}\delta_{n,0}
\end{equation}
and $p_{01}=p_{02}=0$. The atom-cavity coupling is again set to $\Omega=50$.
As a consequence of the selection rules mentioned in Sec.\ II one has for
these initial conditions
$$P^{(1)}_m=\sum_n |C_{1,m,n}|^2$$
for $m$ even and
$$P^{(1)}_m=\sum_n |C_{2,m,n}|^2+|C_{3,m,n}|^2$$
for $m$ odd. Figures \ref{fig2}(a) and (b) display the case of the lossless
cavity. One can recognize two main qualitative differences from the
corresponding Figs.\ \ref{fig1}(a) and (b). First, the momentum distribution
no longer spreads significantly: rather, it remains
concentrated in the central mode (i.e.\ $m=0$) and a small number of side
modes. The other modes remain almost unpopulated. Second, the
comparison between the RNA and the full model results shows that the influence
of the kinetic energy terms now is much smaller than in the one-atom case.
Contrary to Figs.\ \ref{fig1}(a) and (b), for the time considered they
only lead to some quantitative modifications but not to a qualitative
change. This property is of course due to the concentration of the
momentum distribution around $m=0$. It also indicates that the RNA is a
valuable tool in the interpretation of the two-atom behavior.
The study of the momentum distribution in the presence of cavity losses
[Figs.\ \ref{fig2}(c) and (d), again with $\kappa'=20$] also yields a
surprising result. One finds again that only a small number of modes are
significantly populated. But in addition, and in contrast to the one-atom
case, after an initial transient evolution the total atomic population decays
only very slowly, i.e., {\em the photon escape from the cavity is strongly
inhibited by the presence of a second atom.} In fact, the time evolution of
the distribution still bears a strong similarity to the lossless case.
Furthermore, the RNA yields a good approximation to the full model also
in the presence of losses. A further increase of the cavity damping rate
only leads to minor changes in the behavior of the momentum distribution.
A closer look at the long-time behavior is provided in Figs.\ \ref{fig3}.
There, the total probability $P=\sum_m P_m^{(1)}$ that the excitation
is still in the atoms-cavity system (curve 1) is shown for the RNA (a) and the full
model (b). The parameter values are chosen as in Figs.\ \ref{fig2}(c),(d).
After a rapid initial transient the probability $P$ reaches a constant value
in the RNA, whereas it still decays slowly in the full model. The curves 2
and 3 show the time evolution of $|C_{1,0,0}|^2+
|C_{1,0,\pm 2}|^2+|C_{1,\pm 2,0}|^2+|C_{2,\pm 1,\pm 1}|^2$ (i.e., the
central and the most highly populated side modes) and of $|C_{1,0,0}|^2$
alone, respectively. These curves
again demonstrate that the spread in momentum is strongly suppressed.
\section{Two-atom dark states}
The results of Figs.\ \ref{fig2} and \ref{fig3} indicate that the
atomic time evolution is characterized by the appearance of dark states
which have the initial excitation stored in the atoms and which are
almost immune to cavity damping. In this section a
detailed analysis of these dark states is given. Before turning to the
full problem we first work in the RNA, which was shown to provide a
useful approximate description.
\subsection{Two-atom dark states in the Raman-Nath approximation}
In order to investigate the dark states it is convenient to work also
in the position-space representation. The equations of motion for the
position-dependent probability amplitudes $C_i(x_1,x_2,t)$ read
\begin{eqnarray}
i\dot{C}_1&=&-\frac{\hbar}{2M}\left(\frac{\partial^2}{\partial x_1^2}+
\frac{\partial^2}{\partial x_2^2}\right)C_1+g\cos(kx_1)C_3 \label{x1},\\
i\dot{C}_2&=&-\frac{\hbar}{2M}\left(\frac{\partial^2}{\partial x_1^2}+
\frac{\partial^2}{\partial x_2^2}\right)C_2+g\cos(kx_2)C_3 \label{x2},\\
i\dot{C}_3&=&\left[-\frac{\hbar}{2M}\left(\frac{\partial^2}{\partial x_1^2}+
\frac{\partial^2}{\partial x_2^2}\right)+\Delta-i\kappa/2\right]C_3
\nonumber \\
&& +g\left[\cos(kx_1)C_1+\cos(kx_2)C_2\right]. \label{x3}
\end{eqnarray}
In the first special case discussed in Sec.\ II (i.e., the atomic wave
packet well localized inside the trap) these equations have to be solved
in the domain $0\leq x_1,x_2 \leq 2\pi/k$ and the solution must be of the
form
\begin{equation}
C_i=\exp(ip_{01}x_1+ip_{02}x_2)\tilde{C}_i
\end{equation}
with $\tilde{C}_i$ fulfilling periodic boundary conditions. In the second
case (trap conditions taken fully into account) one has to consider solutions
with vanishing Dirichlet boundary conditions in the domain
$0\leq x_1,x_2 \leq L$.
In the RNA, i.e., after discarding the spatial derivatives, Eqs.\
(\ref{x1})-(\ref{x3}) decouple spatially and can be solved immediately. At a
given point $(x_1,x_2)$ they form a homogeneous linear $3\times 3$ system of
ordinary differential equations whose eigenvalues are given by
\begin{eqnarray}
\lambda_1&=&0,\\
\lambda_{2,3}&=&-\kappa/4-i\Delta/2 \label{lam23}\\ && \pm
\sqrt{\left(\frac{\kappa}{4}+i\frac{\Delta}{2}\right)^2-g^2
[\cos^2(kx_1)+\cos^2(kx_2)]}.\nonumber
\end{eqnarray}
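These closed-form eigenvalues are straightforward to check numerically. The following sketch (with illustrative parameter values only, not taken from the text) builds the coefficient matrix of the RNA equations of motion at a fixed point $(x_1,x_2)$ and compares its spectrum with the expressions above; in particular, $\lambda_1=0$ holds independently of the chosen point.

```python
import numpy as np

# Illustrative parameters in units of g (not values from the paper)
g, Delta, kappa = 1.0, 0.7, 0.4
kx1, kx2 = 0.3, 1.1
c1, c2 = np.cos(kx1), np.cos(kx2)

# RNA equations of motion at a fixed point (x1, x2): dC/dt = A C
A = -1j * np.array([[0.0,  0.0,  g*c1],
                    [0.0,  0.0,  g*c2],
                    [g*c1, g*c2, Delta - 1j*kappa/2]])

# Closed-form eigenvalues: lambda_1 = 0 and lambda_{2,3} from Eq. (lam23)
s = c1**2 + c2**2
root = np.sqrt((kappa/4 + 1j*Delta/2)**2 - g**2*s)
lam_closed = np.array([0.0,
                       -kappa/4 - 1j*Delta/2 + root,
                       -kappa/4 - 1j*Delta/2 - root])
lam_numeric = np.linalg.eigvals(A)
```

The vanishing eigenvalue is reproduced to machine precision, confirming that the trapped (dark) component survives for arbitrary $x_1$, $x_2$, and $\kappa$.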
The existence of the eigenvalue $\lambda_1$ whose real part vanishes
independently of the values of $x_1$, $x_2$, and $\kappa$ ensures that an
excitation initially present in the system has a finite probability of
remaining in it in the limit $t\to\infty$. In particular, if the atomic
wave function is given at time $t=0$ by
\begin{eqnarray}
|\psi(x_1, x_2,0) \rangle &=& A_1(x_1,x_2) |e,g,0,\{0_\mu \} \rangle
\nonumber \\ &&+
A_2(x_1,x_2)|g,e,0,\{0_\mu\}\rangle \nonumber \\
&&+ A_3(x_1,x_2)|g,g,1,\{0_\mu\}\rangle
\end{eqnarray}
then the asymptotic state reached by the ``atoms + cavity mode'' system
is characterized by the probability amplitudes (arranged as a column vector
in a self-evident way)
\begin{eqnarray}\label{assta}
&&[\cos^2(kx_1)+\cos^2(kx_2)]^{-1}\\ && \times \left(\begin{array}{c}
A_1\cos^2(kx_2)-A_2\cos(kx_1)\cos(kx_2)\\
-A_1\cos(kx_1)\cos(kx_2)+A_2\cos^2(kx_1)\\
0 \end{array} \right)\nonumber.
\end{eqnarray}
Note that this state is {\em not} normalized, a result of the
fact that some of the initial excitation has irreversibly escaped from the
cavity into the reservoir.
Expression (\ref{assta}) shows that the asymptotic state
does not have a contribution from the initial amplitude $A_3$;
furthermore, the final amplitude in the third channel, in which the photon
is present in the cavity, vanishes. On the other hand, if a state has
nonvanishing contributions $A_1$ or $A_2$ it will always evolve into a
dark state unless $A_1\cos(kx_2)=A_2\cos(kx_1)$. The time scale to reach
the dark state is determined by the eigenvalues $\lambda_2$ and $\lambda_3$.
From Eqs.\ (\ref{x1})-(\ref{x3}) or Eq.\ (\ref{assta}) it follows that a
given state is a dark state if and only if it is of the form
\begin{equation}\label{dsta}
A(x_1,x_2)\left(\begin{array}{c}
\cos(kx_2) \\ -\cos(kx_1) \\ 0 \end{array} \right)
\end{equation}
and, in addition, it fulfills the appropriate boundary conditions. The state
(\ref{dsta}) can be viewed as a generalization of the dark state in the
Dicke theory of sub- and superradiance \cite{Dic54}.
In the following discussion we concentrate on the case of localized atoms
in the sense of Sec.\ II. If one substitutes for the function $A$ of
expression (\ref{dsta}) the set of plane waves
$\exp(iq_{01}x_1+iq_{02}x_2) \exp[imkx_1+i(n+1)kx_2]$, one obtains a family
of dark states $\{ |d_{mn} \rangle\}$ which have a simple structure in
momentum space, i.e.,
\begin{eqnarray}
|d_{mn}\rangle&=&\textstyle{\frac{1}{2}}\bigl(|(e,m),(g,n)\rangle + |(e,m),
(g,n+2)\rangle \\
&&-|(g,m+1),(e,n+1)\rangle -|(g,m-1),(e,n+1)\rangle\bigr),\nonumber
\end{eqnarray}
where we have omitted the occupation numbers of the photon modes in the
notation of the ket vectors for simplicity. The dark states
$|d_{mn}\rangle$ are truly entangled states. Since all permissible
functions $A$ can be expanded in terms of the indicated set of plane waves,
the family $\{ |d_{mn}\rangle\}$ forms a basis of the ``dark'' subspace of
the total Hilbert space. However, this is not an orthogonal basis as a
given $|d_{mn}\rangle$ has a nonvanishing scalar product with four
other $|d_{m'n'}\rangle$.
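The normalization and overlap structure of the family $\{|d_{mn}\rangle\}$ can be verified with a small bookkeeping script; the dictionary keys $(i,m,n)$ encoding the internal channel ($1$ for $|e,g\rangle$, $2$ for $|g,e\rangle$) and the momentum indices are our own illustrative convention, not notation from the text.

```python
from fractions import Fraction

def d(m, n):
    # |d_mn> in the momentum basis; photon occupation numbers omitted
    # as in the text. Keys: (channel, m-index, n-index).
    h = Fraction(1, 2)
    return {(1, m, n): h, (1, m, n + 2): h,
            (2, m + 1, n + 1): -h, (2, m - 1, n + 1): -h}

def dot(a, b):
    # Scalar product of two states with real rational coefficients
    return sum(a[k] * b[k] for k in a if k in b)

norm00 = dot(d(0, 0), d(0, 0))                        # expected: 1
neighbors = [(2, 0), (-2, 0), (0, 2), (0, -2)]
overlaps = [dot(d(0, 0), d(dm, dn)) for dm, dn in neighbors]
far = dot(d(0, 0), d(4, 4))                           # expected: 0
```

Each $|d_{mn}\rangle$ is normalized and has scalar product $1/4$ with exactly the four neighbors $|d_{m\pm 2,n}\rangle$, $|d_{m,n\pm 2}\rangle$, consistent with the non-orthogonality noted above.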
Of particular interest in our context is the question of how to characterize
the asymptotic state $|D_{mn}^{e/g,g/e}\rangle$ associated with a given initial
state $|(e/g,m),(g/e,n)\rangle$. Its coordinate representation can be
inferred immediately from Eq.\ (\ref{assta}), but further
insight into the nature of the state can be obtained from its
momentum distribution. Equations (\ref{C11})-(\ref{C22}) show that it is
sufficient to study this question for the state $|D_{00}^{eg}
\rangle$, since the distributions for the other states can be obtained
by a suitable shift of indices. In coordinate space the state $|D_{00}^{eg}
\rangle$ is represented by
\begin{eqnarray}
&&[\cos^2(kx_1)+\cos^2(kx_2)]^{-1}\\
&&\times(\cos^2(kx_2),-\cos(kx_1)\cos(kx_2),0)^{T}. \nonumber
\end{eqnarray}
Its momentum-space amplitudes
$$c_{1/2,m,n}= \langle (e/g,m),(g/e,n)| D_{00}^{eg}\rangle$$
are determined by
\begin{eqnarray}\label{cint}
c_{i,m,n}=\left(\frac{k}{2\pi}\right)^{\!2}\int_0^{2\pi /k}\!\!\!\int_0^{2\pi /k} dx_1\,dx_2\, e^{-i(mkx_1+nkx_2)}
\nonumber \\
\times\frac{f_i(x_1,x_2)}{\cos^2(kx_1)+\cos^2(kx_2)}
\end{eqnarray}
with $f_1=\cos^2(kx_2)$ and $f_2=-\cos(kx_1)\cos(kx_2)$. As discussed in
Sec.\ II, $c_{1(2),m,n}\neq 0$ only for $m,n$ both even (odd).
Evaluating the integrals (\ref{cint}) one finds that the amplitudes
$c_{1,2m,0}$, $m\geq 0$, are given by
\begin{equation}\label{rec}
c_{1,2m,0}=\delta_{m,0}+\frac{i}{2\pi}(I_m +I_{m-1})
\end{equation}
where the numbers $I_m$ satisfy the recurrence relation
\begin{equation}
I_m=\frac 1 m[(-1)^{m-1}4i -(6m-3)I_{m-1} -(m-1)I_{m-2}]
\end{equation}
and $I_{0}=I_{-1}=i\pi/2$. Further relations between the amplitudes
$c_{i,m,n}$ are given by
\begin{eqnarray}
&&c_{1,m,n}+c_{1,m+2,n}+c_{2,m+1,n+1}+c_{2,m+1,n-1}=0, \label{rel1}\\
&&c_{1,m,n}+c_{1,m,n+2}-c_{2,m+1,n+1}-c_{2,m-1,n+1} \nonumber\\
&&\quad =\delta_{m,0}(\delta_{n,0} +\delta_{n,-2}), \label{rel2}\\
&&c_{i,m,n}=c_{i,\pm m,\pm n} \label{rel3}
\end{eqnarray}
with $m,n$ both even in Eqs.\ (\ref{rel1}) and (\ref{rel2}). Equation
(\ref{rel1}) is a direct consequence of Eq.\ (\ref{C22}) whereas Eq.
(\ref{rel2}) follows from the relation $$|d_{00}\rangle=(|D_{00}^{eg}
\rangle+|D_{02}^{eg}\rangle-|D_{11}^{ge}\rangle-|D_{-1,1}^{ge} \rangle)/2.
$$ With the help of Eqs.\ (\ref{rec})-(\ref{rel3}) all amplitudes $c_{i,m,n}$
can be calculated iteratively. In this way, one obtains for example
\begin{eqnarray*}
c_{1,0,0}&=&1/2,\\
c_{2,\pm 1,\pm 1}&=&1/\pi-1/2\approx -0.1817, \\
c_{1,\pm 2,0}&=&-c_{1,0,\pm 2}=1/2-2/\pi\approx -0.1366.
\end{eqnarray*}
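A minimal sketch of this iteration, reproducing the three values quoted above from the recurrence together with Eqs.\ (\ref{rec}), (\ref{rel1}), and (\ref{rel3}):

```python
import math

# I_{-1} = I_0 = i*pi/2; I_m = (1/m)[(-1)^(m-1) 4i - (6m-3) I_{m-1} - (m-1) I_{m-2}]
I = {-1: 1j * math.pi / 2, 0: 1j * math.pi / 2}
for m in range(1, 6):
    I[m] = ((-1) ** (m - 1) * 4j - (6 * m - 3) * I[m - 1]
            - (m - 1) * I[m - 2]) / m

def c1_2m0(m):
    # Eq. (rec): c_{1,2m,0} = delta_{m,0} + (i/2pi)(I_m + I_{m-1})
    return (m == 0) + (1j / (2 * math.pi)) * (I[m] + I[m - 1])

c100 = c1_2m0(0)            # expected: 1/2
c120 = c1_2m0(1)            # expected: 1/2 - 2/pi
# c_{2,1,1} from Eq. (rel1) with m = n = 0 and the symmetry (rel3)
c211 = -(c100 + c120) / 2   # expected: 1/pi - 1/2
```

The imaginary parts cancel identically, and the values agree with the closed forms listed above.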
An interesting way to determine the scalar products $\langle D_{m,n}^{\sigma}
|D_{00}^{eg}\rangle $ with $\sigma=eg$ or $ge$ proceeds as follows
[the method can also be used to derive
Eq.\ (\ref{rel2})]. The asymptotic state
$|D_{00}^{eg}\rangle$ into which $|(e,0),(g,0)\rangle$ evolves is
uniquely determined. Any state in the ``dark subspace'' orthogonal to
$|D_{00}^{eg}\rangle$ must have vanishing overlap with
$|(e,0),(g,0)\rangle$. If we denote by $|\bar{D}_{00}^{eg}\rangle$ the
state $|D_{00}^{eg}\rangle$ after normalization --- remember that the
dark state into which a given initial state evolves is not normalized ---
we must have that
$$
|D_{00}^{eg}\rangle=|\bar{D}_{00}^{eg}\rangle
\langle \bar{D}_{00}^{eg}|(e,0),(g,0)\rangle.
$$
Comparing coefficients one obtains that
\begin{equation}\label{sc1}
\langle D_{00}^{eg} |D_{00}^{eg}\rangle = 0.5,
\end{equation}
i.e., the system has a 50$\%$ probability of being trapped in that dark state.
Using the Gram-Schmidt orthogonalization scheme to construct from
$|D_{m,n}^{\sigma}\rangle$ a state orthogonal to $|\bar{D}_{00}^{eg}
\rangle$ leads to the conclusion that
\begin{equation} \label{sc2}
\langle D_{m,n}^{\sigma}|D_{00}^{eg}\rangle =c_{i,m,n}
\end{equation}
with $i=1(2)$ if $\sigma=eg(ge)$,
i.e., the asymptotic dark states are non-orthogonal, in general.
Equations (\ref{sc1}) and (\ref{sc2}) can be verified by evaluating
the scalar product in position space.
From Eqs.\ (\ref{rec})-(\ref{sc1}) it can be inferred that 50$\%$ of the
population of the dark state is trapped in the state $|(e,0),(g,0)\rangle$,
while the states $|(i,m),(j,n)\rangle$ with $|m|+|n|\leq 2$ $(4)$ hold
91.3$\%$ ($96.3\%$) of the population. This observation explains the
localization of the momentum distributions in Figs.\ \ref{fig2} and
\ref{fig3}.
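The quoted percentages follow directly from the amplitudes listed above together with the symmetry relation (\ref{rel3}) and the norm (\ref{sc1}); a short check:

```python
import math

c100 = 0.5                     # c_{1,0,0}
c211 = 1 / math.pi - 0.5       # c_{2,±1,±1}: four states by Eq. (rel3)
c120 = 0.5 - 2 / math.pi       # c_{1,±2,0}, c_{1,0,±2}: four states total

norm = 0.5                     # <D|D> from Eq. (sc1)
frac_center = c100 ** 2 / norm                            # |(e,0),(g,0)>
frac_le2 = (c100 ** 2 + 4 * c211 ** 2 + 4 * c120 ** 2) / norm
```

This reproduces the 50$\%$ central-state fraction and the $\approx$ 91.3$\%$ fraction held by the states with $|m|+|n|\leq 2$.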
\subsection{Exact and approximate dark states in the full model}
Turning to the full model described by Eqs.\ (\ref{C11})-(\ref{C22}) or
(\ref{x1})-(\ref{x3}), i.e., taking the kinetic energy terms into
account, it becomes apparent that, in general, the states $|d_{mn}\rangle$ and
$|D_{mn}\rangle$ are no longer exactly dark. By `exactly dark' we mean being
an eigenstate of the full Hamiltonian with a purely real eigenvalue. It is
therefore natural to ask whether the full model sustains exact dark
states at all. Interestingly, a complete answer to this question can be
given for both cases discussed in Sec.\ II, i.e., for atoms localized
well inside the trap and for atoms experiencing the trap boundaries.
In the first situation there are precisely two exact dark states, which
are given by
\begin{equation}
|D_1\rangle=|d_{0,-1}\rangle=(\cos(kx_2),-\cos(kx_1),0)^T
\end{equation}
and
\begin{eqnarray}\label{da2}
|D_2\rangle&=&\sin(kx_1)\sin(kx_2)(\cos(kx_2),-\cos(kx_1),0)^T\nonumber \\
&=&|d_{-1,0}\rangle-|d_{1,0}\rangle+|d_{1,-2}\rangle-|d_{-1,-2}\rangle.
\end{eqnarray}
Dark states thus appear only if the atomic momenta involved are integer
multiples of $\hbar k$, i.e., if $q_{01}=q_{02}=0$. For the second case, in
which the atomic wave functions extend over the whole length of the trap,
it can be shown that exact dark states can only exist if in the cavity mode
function of Eq.\ (\ref{hca}) $k=\pi N/L$ with integer $N\geq 1$ and $\phi=0$. Under
these conditions there is precisely one such state which, in the
coordinate representation, is given by the first line of Eq.\ (\ref{da2}).
For a proof of uniqueness of these dark states one can start from the
observation that also in the full model exact dark states have to be of
the form (\ref{dsta}). Additionally, they now also must be
eigenfunctions of $(\hat{p}^2_1 +\hat{p}^2_2)/2M$ under the
appropriate boundary conditions. One then expands both $A(x_1,x_2)$ and
$A(x_1,x_2)\cos(kx_{1/2})$ onto a suitable set of eigenfunctions. The
fact that in the expansion of $A(x_1,x_2)\cos(kx_{1/2})$ there should
only appear terms of the same energy imposes severe restrictions on the
possible forms for the expansion of $A(x_1,x_2)$. These requirements can
only be met in the cases indicated. For the situation in which the atoms
extend over the whole trap, the truncation of the coupling scheme
(\ref{coup}) at $q=N$ (as outlined at the end of Sec.\ II) turns out to
be crucial for the existence of the dark state.
These considerations imply that most dark states found in the RNA
become unstable in the full model since they are orthogonal to the exact
dark states, in general. The numerical results of Sec.\ III suggest,
however, that the corresponding lifetimes are still very long so that
these states may be regarded as ``quasi-dark.'' The examples shown referred to
cases in which $\Omega,\kappa' \gg 1$ which is the relevant situation in
practice as discussed in Sec.\ V. Under these conditions one may treat
the kinetic energy term $(\hat{p}_1^2+\hat{p}_2^2)/2M$ as a small
perturbation to the RNA Hamiltonian. Applying standard perturbation
theory one obtains an imaginary correction to the RNA dark state
eigenenergies only in second order, which already indicates that these
states will be long-lived. A crude estimate of the second-order
imaginary part shows that the state
$|D(d)_{mn}\rangle$ acquires a finite decay rate that is of the order of
\begin{equation}
\Gamma_{mn} \simeq \omega_{rec}(\tilde m^2+\tilde n^2)^2\kappa'/\Omega^2 .
\label{estimate}
\end{equation}
Here, $\tilde m$ and $\tilde n$ are to be understood as typical
values of $m$ and $n$ appearing in the expansion in center-of-mass
momentum states. The estimate (\ref{estimate}) assumes that
$\kappa'$ is not too large in comparison to $\Omega$ so that the square
root in expression (\ref{lam23}) is essentially imaginary.
Hence, consistently with the numerical calculations, we find that the
lifetime of the
`quasi-dark states' is long compared to $\omega_{rec}^{-1}$ under the
condition $\Omega,\kappa' \gg 1$. Furthermore, our estimate implies that
the decay rate increases rapidly with increasing $m$ and $n$. This is to be
expected, since under these circumstances the dephasing
between the different momentum eigenstates becomes faster.
The dependence on $\kappa'$ and $\Omega$ suggests that the coupling to
the decay channel becomes more efficient when $\kappa'$ is increased and
$\Omega$ decreased. Figure \ref{fig4} shows the decay of the dark states
$|d_{mn}\rangle$ for various values of $(m,n)$, $\kappa'$, and $\Omega$.
Their evolution qualitatively confirms the dependence (\ref{estimate}) of
$\Gamma_{mn}$ on these parameters. Curve (a) should be compared
to curves (b), (c), and (d), since in each of these one relevant parameter
is changed relative to (a).
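As a rough numerical illustration of the scaling in the estimate above (with purely hypothetical parameter values; the text only requires $\Omega, \kappa' \gg 1$):

```python
# Hypothetical parameter values in units of omega_rec (illustration only)
omega_rec = 1.0
Omega, kappa_p = 100.0, 20.0

def gamma_est(m_t, n_t):
    # Crude second-order estimate: Gamma ~ omega_rec (m^2+n^2)^2 kappa'/Omega^2
    return omega_rec * (m_t ** 2 + n_t ** 2) ** 2 * kappa_p / Omega ** 2

rates = [gamma_est(m, n) for m, n in [(0, 1), (1, 1), (2, 2)]]
# rates grow rapidly with m, n but stay well below omega_rec for Omega >> 1
```

For these sample values the quasi-dark lifetimes exceed $\omega_{rec}^{-1}$ by one to three orders of magnitude, consistent with the discussion above.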
\section{Summary and conclusions}
In this paper we have investigated the dynamics of two two-level atoms coupled
to a single damped mode of an electromagnetic resonator, including the
effects of photon recoil. We concentrated on the situation where one quantum
of excitation is initially present in the system. A generic feature of the
atomic evolution is the appearance of dark states. These states, in which
the excitation is stored in the internal atomic degrees of freedom, are
almost immune to photon decay from the cavity. When in a dark state, the
two atoms become quantum mechanically entangled and form a new kind
of ``molecule'' bound by the quantum of excitation that they share. The state
of the compound system can conveniently be described in terms of a
superposition of different states of well-defined center-of-mass momentum.
A remarkable characteristic feature of the dark states is their small
momentum spread, as compared e.g. to the one-atom situation. This property
makes their description in the Raman-Nath approximation quite accurate.
While most dark states become only ``quasi-dark'' when this approximation
is removed, their decay rates remain so small that their lifetimes are quite long indeed.
When considering the possible practical realization of these states, an
interesting question concerns the influence of a non-constant atomic
trapping potential on the time evolution of the dark states. If the trapping
potentials can be arranged to be equal for ground and excited states, then
one can still obtain dark states in the RNA (for the full model it can be
anticipated that exact dark states will not exist any longer, in
general). If, as is normally the case, these potentials differ from each
other, even the RNA will not support dark states. However, as Eqs.\
(\ref{x1})-(\ref{x3}) show, in the vicinity of the line $x_1=x_2$ the decay
will be significantly decelerated so that a remnant of the dark-state
effect might still be visible under such circumstances.
Let us conclude with a brief discussion of the experimental feasibility
of observing such two-atom dark states. Recent cavity QED experiments
in the microwave and optical domain are described e.g., in Refs.\
\cite{Bru96,MieFosOro98,HooChaLyn98}. They typically involve a low density
atomic beam passed through the electromagnetic resonator, a situation that
can be modeled in terms of the localized wave packet description of Sec.\ II.
In these experiments the residual spontaneous atomic decay rate $\gamma$
in the cavity (due to coupling to vacuum modes) is approximately one order
of magnitude smaller than the cavity Rabi frequency $g$ and damping rate
$\kappa$, which are both comparable in magnitude. A single-mode description is
thus adequate and our system (once prepared in the initial state) would have
enough time to coherently evolve into a dark state. Furthermore, the recoil
frequency $\omega_{rec}$ is also very small in comparison to $g$ and
$\kappa$ (typically smaller by more than a factor of $10^{3}$), so the RNA should
provide a very accurate description. In an experimental realization a
main difficulty would certainly consist in efficiently preparing the
initial system state. From this point of view, the optical regime does not
appear as promising as the microwave regime: First, due to the short
free-space spontaneous lifetime of optical transitions the atoms probably
could not be prepared in the excited state before they enter the cavity. Second,
if they are both simultaneously excited inside the cavity the probability of
coupling to the dark state is relatively low.
An experiment involving a
microwave cavity might proceed as follows. Diatomic molecules in a
low-intensity beam are dissociated such that the two fragments are of
nonvanishing opposite spin. The atoms can thus be separated in an inhomogeneous
magnetic field. One atomic beam is subsequently prepared in the Rydberg
ground state, the other one in the excited state. Using atom optical elements
the two beams are guided such that they intersect each other in the
microwave cavity (at a small angle). As the molecular dissociation creates
atom pairs it should be possible to arrange the setup so that both partners
pass the cavity simultaneously with high probability. The experimental
parameters should be chosen such that a single atom always leaves the cavity in
the ground state. The signature of the formation of a dark state would
consist in detecting an appreciable fraction of atoms leaving the cavity in
the excited state. In order to obtain more information about the nature
of the dark state one could for example additionally observe the spatial
atomic density distribution.
\acknowledgments
We have benefited from numerous discussions with Dr.\ E.\ V.\ Goldstein
and M.\ G.\ Moore. G.\ J.\ Y.\ gratefully acknowledges support from the
Chinese Scholarship Committee. This work was also supported by the U.S.\
Office of Naval Research under Contract No.\ 14-91-J1205, by the
National Science Foundation under Grant No.\ PHY95-07639, by the
U.S.\ Army Research Office, and by the Joint Services Optics Program.
\section{Introduction}
$\iota$~Pegasi (HR 8430, HD 210027) is a nearby, short-period (10.2 d)
binary system with a F5V primary and a $\sim$ G8V secondary in a
circular orbit. $\iota$~Peg was first discovered as a single-lined
spectroscopic binary by Campbell (1899), and the first spectroscopic
orbital elements were estimated by Curtis (1904). Several other
single-line studies were made, notably Petrie and Phibbs (1949) and
Abt and Levy (1976). In the context of a lithium abundance study,
Herbig (1965) noted that lines from the $\iota$~Peg secondary were
visible at red wavelengths. Lithium abundances for both the primary
(\cite{Herbig65,Conti66,Duncan81,Lyubimkov91}) and the secondary
(\cite{Fekel83,Lyubimkov91}) indicate the system is very young ($\sim$
8 $\times$ 10$^7$ yr, \cite{Fekel83}, 1.7 $\pm$ 0.8 $\times$ 10$^8$
yr, \cite{Lyubimkov91}) and both components are close to the zero-age
main sequence. Both components of $\iota$~Peg are also believed to
have solar-type abundances (\cite{Lyubimkov91}).
Following Herbig's implicit suggestion, Fekel and Tomkin (1983,
hereafter FT) made radial velocity measurements of both $\iota$~Peg
components at 643 nm, and computed a definitive spectroscopic orbit
and inferred a probable G8V spectral classification for the secondary.
FT's orbit was noteworthy as it indicated that the minimum masses for
the two components were very near the model values for the spectral
types, suggesting a ``reasonable prospect'' for eclipses in the system
(FT). Subsequent photometric monitoring by automated photometry
projects in Arizona, at Palomar Observatory, and in Pasadena failed to
show any evidence for eclipses (see \S \ref{sec:eclipses}). FT also
questioned synchronous rotation of the secondary. However, Gray
(1984), from somewhat higher resolution spectroscopic data, argued
that both components are in synchronous rotation.
Herein we report a determination of the $\iota$~Peg visual orbit from
near-infrared, long-baseline interferometric visibility measurements
taken with the Palomar Testbed Interferometer. PTI is a 110-m K-band
(2 - 2.4 $\mu$m) interferometer located at Palomar Observatory, and
described in detail elsewhere (\cite{Colavita94,Colavita98a}). The
minimum PTI fringe spacing is roughly 4 mas at the sky position of
$\iota$~Peg, allowing us to resolve this binary system. The
procedures we have used to determine $\iota$~Peg's visual orbit are
similar to other visual orbits determined for spectroscopic binaries
using the Mark III Interferometer at Mt.~Wilson
(\cite{Pan90,Armstrong92a,Armstrong92b,Pan92,Hummel93,Pan93,Hummel94,Hummel95}),
and the NPOI Interferometer at Anderson Mesa, AZ (\cite{Hummel98}).
The analogy between $\iota$~Peg and the short-period, small angular
scale binaries studied in Hummel et al.~(1995) and Hummel et
al.~(1998) is especially apt.
\section{Observations}
Pan attempted to determine a visual orbit for $\iota$~Peg using the
Mark III interferometer at Mt.~Wilson, but the significant brightness
difference in the two components at 800 nm made the observations
difficult (\cite{Pan97}). The apparent contrast ratio in the
$\iota$~Peg system decreases in the K-band, allowing a reliable orbit
determination with PTI observations.
The observable used for these observations is the fringe contrast or
{\em visibility} (squared) of an observed brightness distribution on
the sky. Normalized in the interval [0,1], a single star exhibits
visibility modulus given in a uniform disk model by:
\begin{equation}
V =
\frac{2 \; J_{1}(\pi B \theta / \lambda)}{\pi B \theta / \lambda}
\label{eq:V_single}
\end{equation}
where $J_{1}$ is the first-order Bessel function, $B$ is the projected
baseline vector magnitude at the star position, $\theta$ is the
apparent angular diameter of the star, and $\lambda$ is the
center-band wavelength of the interferometric observation. (We
consider corrections to the uniform disk model from limb darkening in
\S \ref{sec:physics}.) The expected squared visibility in a narrow
pass-band for a binary star such as $\iota$~Peg is given by:
\begin{equation}
V^{2}_{nb}(\lambda)
= \frac{V_{1}^2 + V_{2}^2 \; r^2 + 2 \; V_{1} \; V_{2} \; r \;
\cos(\frac{2 \pi}{\lambda} \; {\bf {B}} \cdot {\bf {s}})}
{(1 + r)^2}
\label{eq:V2_double}
\end{equation}
where $V_{1}$ and $V_{2}$ are the visibility moduli for the two stars
alone as given by Eq.~\ref{eq:V_single}, $r$ is the apparent
brightness ratio between the primary and companion, ${\bf {B}}$ is the
projected baseline vector at the system sky position, and ${\bf {s}}$
is the primary-secondary angular separation vector on the plane of the
sky (\cite{Pan90,Hummel95}). The $V^2$ observables used in our
$\iota$ Peg study are both narrow-band $V^2$ from seven individual
spectral channels (\cite{Colavita98a}), and a synthetic wide-band
$V^2$, given by an incoherent SNR-weighted average $V^2$ of the
narrow-band channels in the PTI spectrometer (\cite{Colavita98b}). In
this model the expected wide-band $V^2$ observable is approximately
given by an average of the narrow-band formula over the finite
pass-band of the spectrometer:
\begin{equation}
V^{2}_{wb} = \frac{1}{n}\sum_{i=1}^{n} V^{2}_{nb-i}(\lambda_i)
\label{eq:V2_doubleWB}
\end{equation}
where the sum runs over the $n = 7$ channels with wavelengths
$\lambda_i$ covering the K-band (2--2.4 $\mu$m) of the PTI
spectrometer in its 1997 configuration. Separate calibrations and
hypothesis fits to the narrow-band and synthetic wide-band $V^2$
datasets yield statistically consistent results, with the synthetic
wide-band data exhibiting superior fit performance. Consequently we
will present only the results from the synthetic wide-band data.
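The visibility model of Eqs.~(\ref{eq:V_single})-(\ref{eq:V2_doubleWB}) is easy to evaluate numerically. The sketch below uses illustrative geometry only (the baseline, diameters, separation, and brightness ratio are not the fitted PTI values) and approximates $J_1$ by its integral representation so that the example is self-contained:

```python
import numpy as np

def j1(x):
    # Bessel J1 via its integral representation (trapezoid rule; no deps)
    t = np.linspace(0.0, np.pi, 2001)
    f = np.cos(t - x * np.sin(t))
    return (f.sum() - 0.5 * (f[0] + f[-1])) * (t[1] - t[0]) / np.pi

def v_disk(B, theta, lam):
    # Eq. (1): uniform-disk visibility modulus (B in m, theta in rad)
    x = np.pi * B * theta / lam
    return 1.0 if x == 0.0 else 2.0 * j1(x) / x

def v2_binary(B_vec, s_vec, theta1, theta2, r, lam):
    # Eq. (2): narrow-band squared visibility of a binary
    B = np.hypot(B_vec[0], B_vec[1])
    V1, V2 = v_disk(B, theta1, lam), v_disk(B, theta2, lam)
    phase = 2.0 * np.pi / lam * (B_vec[0] * s_vec[0] + B_vec[1] * s_vec[1])
    return (V1**2 + (r * V2)**2 + 2.0 * r * V1 * V2 * np.cos(phase)) / (1.0 + r)**2

# Illustrative geometry only (not PTI data): 100-m E-W baseline, 1.0 and
# 0.7 mas diameters, 10 mas separation, r = 0.6, seven K-band channels
mas = np.pi / 180.0 / 3600.0 / 1000.0
lams = np.linspace(2.0e-6, 2.4e-6, 7)
v2_nb = [v2_binary((100.0, 0.0), (10.0 * mas, 0.0),
                   1.0 * mas, 0.7 * mas, 0.6, lam) for lam in lams]
v2_wb = float(np.mean(v2_nb))   # Eq. (3) with uniform weights
```

Sweeping the separation vector or the baseline orientation in such a model is what the orbit fit of \S 3 does, with the Keplerian elements generating ${\bf s}$ at each epoch.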
$\iota$~Peg was observed by PTI on 24 nights between 2 July and 8 Sept
1997. On each night $\iota$~Peg was observed in conjunction with
calibration objects multiple times. Each observation
(``scan'') was 120--130 seconds in duration. For each scan we
computed a mean $V^2$ value through methods described in Colavita
(1999b). We took the measured rms internal scatter to be
the error in $V^2$. For the purposes of this analysis we have
restricted our attention to four calibration objects, two primary
calibrators within 5$^{\circ}$ of $\iota$~Peg (HD 211006 and HD
211432), and two ancillary calibrators within 15$^{\circ}$ of
$\iota$~Peg (HD 215510 and HD 217014 -- 51~Pegasi). The suitability
of 51~Peg (a known radial velocity variable) as a calibrator at PTI is
addressed in Boden et al.~(1998b). Table \ref{tab:calibrators}
summarizes the relevant parameters on the calibration objects used in
this study. In particular we have estimated our calibrator diameters
based on a model diameter on 51~Peg of 0.72 $\pm$ 0.06 mas implied by
a linear diameter of 1.2 $\pm$ 0.1 R$_{\sun}$ (adopted by
\cite{Marcy97}) and a parallax of 65.1 $\pm$ 0.76 mas from Hipparcos
(\cite{HIP97,Perryman97}).
The calibration of $\iota$~Peg $V^2$ data is performed by estimating
the interferometer system visibility ($V^{2}_{sys}$) using calibration
sources with model angular diameters, and then normalizing the raw
$\iota$~Peg visibility by $V^{2}_{sys}$ to estimate the $V^2$ measured
by an ideal interferometer at that epoch
(\cite{Mozurkewich91,Boden98a}). We calibrated the $\iota$~Peg $V^2$
data in two different ways: (1) with respect to the two primary
calibration objects, resulting in our primary dataset containing 112
calibrated observations over 17 nights, and (2) an unbiased average of
the primary and ancillary calibrators, resulting in our secondary
dataset containing 151 observations over 24 nights. The motivation
for constructing these two datasets, which are clearly not
independent, is that the determination of the orbital solution and
component diameters is sensitive to calibration uncertainties.
Comparison of the solutions derived from the two datasets allow us to
quantitatively assess this uncertainty.
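The calibration step described here reduces to a simple normalization: the system visibility is estimated from a calibrator whose model diameter predicts its intrinsic $V^2$, and the raw target $V^2$ is divided by it. The numbers in this sketch are purely illustrative, not measured PTI values:

```python
def v2_sys(v2_meas_cal, v2_model_cal):
    # System visibility inferred from a calibrator of known model diameter
    return v2_meas_cal / v2_model_cal

def calibrate(v2_meas_target, v2_meas_cal, v2_model_cal):
    # Normalize the raw target V^2 by the estimated system visibility
    return v2_meas_target / v2_sys(v2_meas_cal, v2_model_cal)

# Illustration: a calibrator measured at 0.72 whose model predicts 0.95
# implies V^2_sys ~ 0.758; a raw target V^2 of 0.30 calibrates to ~0.396
v2_cal = calibrate(0.30, 0.72, 0.95)
```

In practice $V^{2}_{sys}$ is interpolated between calibrator scans bracketing each target scan, which is why calibrator choice feeds directly into the orbital-parameter uncertainties discussed above.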
\begin{table}[t]
\begin{center}
\begin{small}
\begin{tabular}{|c|c|c|c|c|}
\hline
Object & Spectral & Star & Sky Separation & Diam.~(mas) WRT \\
Name & Type & Magnitude & From $\iota$~Peg & Model 51~Peg \\
\hline
HD 211006 & K2III & 5.9 V/3.4 K & 3.6$^{\circ}$ & 1.06 $\pm$ 0.05 \\
HD 211432 & G9III & 6.4 V/3.7 K & 3.2$^{\circ}$ & 0.70 $\pm$ 0.05 \\
\hline \hline
HD 215510 & G6III & 6.3 V/3.9 K & 11$^{\circ}$ & 0.85 $\pm$ 0.06 \\
HD 217014 & G2.5V & 5.9 V/4.0 K & 12$^{\circ}$ & (0.72 $\pm$ 0.06) \\
\hline
\end{tabular}
\caption{1997 PTI $\iota$~Peg Calibration Objects Considered in our
Analysis. The relevant parameters for our four calibration objects
are summarized. The apparent diameter values are determined by a fit
to our $V^2$ data calibrated with respect to a model diameter for HD
217014 (51 Peg) of 0.72 $\pm$ 0.06 mas
(\cite{Marcy97,HIP97}).
\label{tab:calibrators}}
\end{small}
\end{center}
\end{table}
\section{Orbit Determination}
The estimation of the $\iota$~Peg visual orbit is made by fitting a
Keplerian orbit model with visibilities predicted by
Eqs.~\ref{eq:V2_double} and \ref{eq:V2_doubleWB} directly to the
calibrated (narrow-band and synthetic wide-band) $V^2$ data on
$\iota$~Peg (see \cite{Armstrong92b,Hummel93,Hummel95}). The fit is
non-linear in the Keplerian orbital elements, and is therefore
performed by non-linear least-squares methods (i.e.~the
Marquardt-Levenberg method, \cite{Press92}). As such, this fitting
procedure takes an initial estimate of the orbital elements and other
parameters (e.g. component angular diameters, brightness ratio), and
refines the model into a new parameter set which best fits the data.
However, the chi-squared surface has many local minima in addition to
the global minimum corresponding to the true orbit. Because
Marquardt-Levenberg strictly follows a downhill path in the $\chi^2$
manifold, it is necessary to thoroughly survey the space of possible
binary parameters to distinguish between local minima and the true
global minimum. In the case of $\iota$~Peg the parameter space is
significantly narrowed by the high-quality spectroscopic orbit and
inclination constraint near 90$^\circ$ (FT). Furthermore, the
Hipparcos distance determination sets the rough scale of the
semi-major axis (\cite{HIP97}).
In addition, as the $V^2$ observable for the binary
(Eqs.~\ref{eq:V2_double} and \ref{eq:V2_doubleWB}) is invariant under
a rotation of 180$^{\circ}$, we cannot differentiate between an
apparent primary/secondary relative orientation and its mirror image
on the sky. In order to follow the FT convention for T$_0$ at primary
radial velocity maximum, in our analysis of $\iota$~Peg we have
defined T$_0$ to be at a component separation extremum, yielding an
extremum in component radial velocities for the circular orbit. We
have additionally required our fit T$_0$ to be within half a period of
the projected FT determination to differentiate between primary radial
velocity maximum and minimum. Even with our determination of T$_0$ so
defined there remains a 180$^{\circ}$ ambiguity in our determination
of the longitude of the ascending node, $\Omega$.
We used a preliminary orbital solution computed by Pan (1996) by
separation vector techniques (see \cite{Pan90} for a discussion of the
method), and refined it into the best-fit orbit shown here. We
further conducted an exhaustive search of the binary parameter space
that resulted in the same best-fit orbit, which is in fact the global
minimum in the $\chi^2$ manifold.
Figure \ref{fig:iPg_orbit} depicts the apparent relative orbit of the
$\iota$~Peg system. Most striking is the observation that the
circular orbit of the system (see below) is very nearly eclipsing.
From our primary dataset we find a best fit orbital inclination of
95.67 $\pm$ 0.21 degrees. With model angular diameters of 1.0 and 0.7
mas for the primary and secondary components respectively (\S
\ref{sec:physics}), and an apparent semi-major axis of 10.33 $\pm$
0.10 mas, this inclination is about 0.87$^{\circ}$ from apparent
limb-to-limb contact. This is consistent with the lack of photometric
evidence for eclipses despite several photometry campaigns on the
$\iota$~Peg system (\S \ref{sec:eclipses}).
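The quoted near-miss geometry can be reproduced with a short back-of-the-envelope calculation (this is an illustrative sketch using the best-fit values quoted above, not the authors' fitting code; the exact margin depends on which diameter estimates are adopted):

```python
import math

# Best-fit values quoted in the text
a_mas = 10.33            # apparent semi-major axis [mas]
i_deg = 95.67            # orbital inclination [deg]
d1, d2 = 1.0, 0.7        # model angular diameters [mas]

# For a circular orbit the minimum projected separation (at conjunction)
# is a*|cos i|; grazing limb-to-limb contact occurs when this equals the
# sum of the component radii.
sum_radii = 0.5 * (d1 + d2)
i_graze = 90.0 + math.degrees(math.asin(sum_radii / a_mas))
margin = i_deg - i_graze
print(f"grazing inclination ~{i_graze:.2f} deg, margin ~{margin:.2f} deg")
```

With the model diameters this gives a margin near one degree, consistent with the $\sim$0.87$^{\circ}$ quoted in the text.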
\begin{figure}
\epsscale{0.7}
\plotone{iPg.trace.eps}
\caption{Visual Orbit of $\iota$~Pegasi. The relative visual orbit of
$\iota$~Peg is depicted, with the primary and secondary rendered at
T$_0$ (maximum primary radial velocity) and apparent conjunction. The
inset shows a closeup of the system at apparent conjunction. By our
model the $\iota$ Peg orbit is nearly, but not quite eclipsing, being
approximately 0.87$^\circ$ in inclination from apparent grazing
eclipses.
\label{fig:iPg_orbit}}
\end{figure}
\begin{table}
\dummytable\label{tab:dataTable}
\end{table}
Table \ref{tab:dataTable} lists the complete set of $V^2$ measurements
in the primary dataset and the prediction based on the best-fit orbit
model for $\iota$~Peg. Figure \ref{fig:iPg_fit} shows two graphical
comparisons between our $V^2$ data on $\iota$~Peg and the best-fit
model predictions. Figure \ref{fig:iPg_fit}a gives four consecutive
nights of PTI $V^2$ data from our primary dataset on $\iota$~Peg (18
-- 21 July 1997), and $V^2$ predictions based on the best-fit model
for the system. Figure \ref{fig:iPg_fit}b gives an additional seven
consecutive nights (12 -- 18 August 1997) with the same quantities
plotted. These are the two longest consecutive-night sequences in our
data set. The model predictions are seen to be in excellent absolute
and statistical agreement with the observed data, with a primary
dataset average absolute $V^2$ deviation of 0.014, and a $\chi^2$ per
Degree of Freedom (DOF) of 0.75.
\begin{figure}
\epsscale{0.8}
\plotone{iPg.V2.trace.eps}\\
\plotone{iPg.V2.trace2.eps}
\caption{$V^2$ Fit of $\iota$~Pegasi. a) Four consecutive nights (18
-- 21 July 1997) of calibrated $V^2$ data on $\iota$~Peg, and $V^2$
predictions from the best-fit model for the system. In the lower
frame we give $V^2$ residuals between the calibrated data and best-fit
model. b) An additional seven consecutive nights (12 -- 18 August 1997) of
data on $\iota$~Peg, with model predictions and fit residuals. The model
is in good agreement with the calibrated data, with a $\chi^2$/DOF of
0.75 and an average absolute $V^2$ residual of 0.014.
\label{fig:iPg_fit}}
\end{figure}
Figure \ref{fig:iPg_surf} gives two examples of the $\chi^2$ fit
projected into orbital parameter subspaces. Figure
\ref{fig:iPg_surf}a shows a surface of $\chi^2$/DOF projected into the
subspace of orbit semi-major axis and relative component brightness,
with all other parameters held to their best-fit values. Inset is a
closeup of a contour plot of the $\chi^2$/DOF surface indicating
location of the best-fit parameter values, and contours at +1, +2, and
+3 of $\chi^2$/DOF significance. Figure \ref{fig:iPg_surf}b gives the
$\chi^2$/DOF surface in the subspace of orbital inclination and
longitude of the ascending node. Again, the inset gives best-fit
parameter values, and contours at +1, +2, and +3 of $\chi^2$/DOF
significance. All indications are that the best-fit model for the
$\iota$~Peg system is in excellent agreement with our $V^2$ data, and
that the data uniquely constrain the parameters of the visual orbit.
\begin{figure}
\epsscale{0.8}
\plotone{cont1.eps}\\
\plotone{cont2.eps}
\caption{$\chi^2$/DOF Fit Surfaces for $\iota$~Pegasi Primary Dataset.
a) $\chi^2$/DOF surface in the subspace of orbit semi-major axis and
relative component brightness. Inset is a closeup of a contour plot
surface indicating location of the best-fit parameter values, and
contours at +1, +2, and +3 of $\chi^2$/DOF significance. b)
$\chi^2$/DOF surface in the subspace of orbital inclination and
longitude of the ascending node, with inset giving surface contour
closeup.
\label{fig:iPg_surf}}
\end{figure}
Spectroscopic (from FT) and visual orbital parameters of the $\iota$
Peg system are summarized in Table \ref{tab:orbit}. We present the
results for our primary and secondary datasets separately. For the
parameters we have estimated from our interferometric data we quote a
total one-sigma error in the parameter estimates, and the one-sigma
errors in the parameter estimates from statistical (measurement
uncertainty) and systematic error sources. In our analysis the
dominant forms of systematic error are: (1) uncertainties in the
calibrator angular diameters (Table \ref{tab:calibrators}); (2) the
uncertainty in our center-band operating wavelength ($\lambda_0
\approx$ 2.2 $\mu$m), which we have taken to be 20 nm ($\sim$1\%); (3)
the geometrical uncertainty in our interferometric baseline ($<$
0.01\%); and (4) uncertainties in orbital parameters we have
constrained in our fitting procedure (e.g. period, eccentricity).
Different parameters are affected differently by these error sources;
our estimated uncertainty in the $\iota$~Peg orbital inclination is
dominated by measurement uncertainty, while the uncertainty in the
angular semi-major axis is dominated by uncertainty in the wavelength
scale. Conversely, we have assumed that all the uncertainty quoted by
FT in the $\iota$~Peg spectroscopic parameters is statistical.
Finally, we have listed the level of statistical agreement in the
visual orbit parameters in our two solutions (the absolute residual
between the two estimates divided by the RSS of their statistical
errors). The two solutions are in good statistical agreement, giving
us confidence we have properly characterized our calibration
uncertainties.
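The "statistical agreement" figure of merit described above is simple to compute; the following sketch (using the quoted primary- and secondary-dataset values, statistical errors only) reproduces the entries of the orbit table for the inclination and semi-major axis:

```python
import math

def stat_agreement(x1, s1, x2, s2):
    """Absolute residual between two estimates divided by the RSS (root
    sum of squares) of their statistical errors."""
    return abs(x1 - x2) / math.hypot(s1, s2)

# Inclination [deg] and semi-major axis [mas] from the two datasets
agr_i = stat_agreement(95.67, 0.22, 96.03, 0.20)
agr_a = stat_agreement(10.33, 0.02, 10.32, 0.02)
print(f"i: {agr_i:.2f} sigma, a: {agr_a:.2f} sigma")
```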
Particularly remarkable is the agreement between T$_{0}$ (quoted as
the epoch of maximum primary radial velocity for the $\iota$~Peg
circular orbit) and period as determined by FT, and T$_{0}$ as
determined in our primary dataset, separated from the FT determination
by 523 cycles. FT quote an $\iota$~Peg period accurate to roughly 1
part in 10$^{6}$, resulting in a propagated uncertainty in T$_{0}$ at
the epoch of our observations of 7 $\times$ 10$^{-3}$ days. This
FT-extrapolated T$_{0}$ differs from our 1997 T$_{0}$ determination by
8 $\times$ 10$^{-4}$ days, an agreement of roughly 0.1 sigma. A
similar comparison with the secondary dataset solution is less
spectacular, an agreement at 0.7 sigma. Clearly the extraordinary
quoted accuracy of the $\iota$~Peg period determination by FT (made by
combining their 1977 -- 1982 data with spectroscopy from the mid-30s
-- \cite{Petrie49}) seems well justified compared to our visual orbit.
Consequently we have assumed the FT value for the $\iota$~Peg period.
Following FT we have assumed a circular orbit for the system. Fitting
our primary dataset for an eccentricity in the system yields an
estimate of 1.5 $\times$ 10$^{-3}$ $\pm$ 1.3 $\times$ 10$^{-3}$. The
assumption of a circular orbit seems well justified.
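The ephemeris-propagation arithmetic behind the quoted 0.1-sigma agreement is straightforward; a sketch using the quoted numbers:

```python
# FT ephemeris propagated forward 523 cycles (values quoted in the text)
P, sigma_P = 10.213033, 1.3e-5      # period and its uncertainty [days]
n_cycles = 523
sigma_T0 = n_cycles * sigma_P       # propagated T0 uncertainty [days]
residual = 8e-4                     # |T0(FT-extrapolated) - T0(PTI)| [days]
print(f"sigma_T0 ~ {sigma_T0:.1e} d, agreement ~ {residual / sigma_T0:.2f} sigma")
```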
\begin{table}
\begin{center}
\begin{small}
\begin{tabular}{|c|c||c|c|c|}
\hline
Orbital & FT & \multicolumn{3}{c|}{PTI 1997} \\
\cline{3-5}
Parameter & 1983 & Primary Dataset & Secondary Dataset & Stat Agr \\
\hline \hline
Period (d) & 10.213033 & 10.213033 & 10.213033 & \\
& $\pm$ 1.3 $\times$ 10$^{-5}$ & (assumed) & (assumed) & \\
T$_{0}$ (HJD) & 2445320.1423 & 2450661.5578 & 2450661.5634 & 1.26 \\
& & $\pm$ 3.6 (3.3/1.5) $\times$ 10$^{-3}$ & $\pm$ 3.3 (3.0/1.5) $\times$ 10$^{-3}$ & \\
$e$ & 0 (assumed) & 0 (assumed) & 0 (assumed) & \\
K$_A$ (km s$^{-1}$) & 48.1 $\pm$ 0.2 & & & \\
K$_B$ (km s$^{-1}$) & 77.9 $\pm$ 0.3 & & & \\
\hline
$i$ (deg) & & 95.67 $\pm$ 0.22 (0.22/0.03) & 96.03 $\pm$ 0.20 (0.20/0.03) & 1.21 \\
$\Omega$ (deg) & & 94.09 $\pm$ 0.23 (0.22/0.05) & 94.03 $\pm$ 0.25 (0.24/0.05) & 0.03 \\
$a$ (mas) & & 10.33 $\pm$ 0.10 (0.02/0.10) & 10.32 $\pm$ 0.11 (0.02/0.11) & 0.35 \\
$\Delta$ K (mag) & & 1.610 $\pm$ 0.021 & 1.610 $\pm$ 0.021 & 0.23 \\
& & (0.007/0.020) & (0.007/0.020) & \\
\hline
$\chi^2$/DOF & & 0.75 & 1.0 & \\
$\overline{|R_{V^2}|}$ & & 0.014 & 0.016 & \\
N$_{scans}$ & & 112 & 151 & \\
\hline
\end{tabular}
\end{small}
\caption{Orbital Parameters for $\iota$~Peg. Summarized here are the
apparent orbital parameters for the $\iota$~Peg system as determined
by FT, and our PTI primary and secondary datasets. For parameters
estimated from our PTI observations we separately quote one sigma
errors from both statistical and systematic sources (listed as
$\sigma_{stat}$/$\sigma_{sys}$), and the total error as the sum of the
two in quadrature. We have also included the level of statistical
agreement between visual orbit parameters from our two solutions; the
parameters estimated separately from the primary and secondary
datasets are in good agreement in relation to the statistical
component of their error estimates. We have quoted the longitude of
the ascending node parameter ($\Omega$) as the angle between local
East and the orbital line of nodes (and the relative position of the
secondary at T$_0$), measured positive in the direction of local
North. Due to the degeneracy in our $V^2$ observable there is a
180$^\circ$ ambiguity in $\Omega$. Finally, the fit $\chi^2$/DOF and
mean absolute $V^2$ residual ($\overline{|R_{V^2}|}$) is listed for
both solutions.
\label{tab:orbit}}
\end{center}
\end{table}
\section{Physical Parameters}
\label{sec:physics}
Physical parameters derived from the $\iota$~Peg primary dataset
visual orbit and the FT spectroscopic orbit are summarized in Table
\ref{tab:physics}. We use the primary dataset solution because it is
the most free from possible sky position-dependent systematic effects
(as the secondary dataset includes the ancillary calibrators), but we
note the two orbital solutions yield statistically consistent results.
Notable among the physical parameters for the system is the
high-precision determination of the component masses for the system, a
virtue of the precision of the FT radial velocities on both components
and the high inclination of the orbit. We estimate the masses of the
F5V primary and putative G8V secondary components as 1.326 $\pm$ 0.016
M$_{\sun}$ and 0.819 $\pm$ 0.009 M$_{\sun}$ respectively. Our mass
values agree well with mass estimates of 1.33 $\pm$ 0.08 M$_{\sun}$ and
0.9 $\pm$ 0.2 M$_{\sun}$ respectively made by Lyubimkov et al.~(1991)
based on evolutionary models and spectroscopic measurements of component
effective temperatures and surface gravities.
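The mass determination follows from the standard double-lined spectroscopic-binary relation combined with the interferometric inclination. A hedged sketch (circular orbit, quoted FT semi-amplitudes; not the authors' code) recovers the tabulated values to within rounding:

```python
import math

K1, K2 = 48.1, 77.9        # FT radial-velocity semi-amplitudes [km/s]
P = 10.213033              # orbital period [days]
i = math.radians(95.67)    # inclination from the visual orbit

# Standard relation for e = 0:
#   m sin^3 i = 1.036149e-7 (K1+K2)^2 K_other P  [solar masses]
m1 = 1.036149e-7 * (K1 + K2)**2 * K2 * P / math.sin(i)**3
m2 = 1.036149e-7 * (K1 + K2)**2 * K1 * P / math.sin(i)**3
print(f"M1 ~ {m1:.3f} Msun, M2 ~ {m2:.3f} Msun")
```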
The Hipparcos catalog lists the parallax of $\iota$~Peg as 85.06 $\pm$
0.71 mas (\cite{HIP97}). The distance determination to
$\iota$ Peg based on the FT radial velocities and our apparent
semi-major axis and inclination is 11.51 $\pm$ 0.13 pc, corresponding
to an orbital parallax of 86.91 $\pm$ 1.0 mas, consistent with the
Hipparcos result at roughly 2\% and 1.5 sigma.
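The orbital-parallax distance quoted here follows from comparing the physical semi-major axis (from the radial velocities and inclination) with the apparent one. A sketch with the quoted inputs:

```python
import math

K1, K2 = 48.1, 77.9            # FT semi-amplitudes [km/s]
P_days = 10.213033
i = math.radians(95.67)
a_mas = 10.33                  # apparent semi-major axis [mas]
AU_KM = 1.495979e8

# Physical semi-major axis of the relative orbit (circular case):
#   a sin i = (K1 + K2) P / (2 pi)
a_km = (K1 + K2) * P_days * 86400.0 / (2 * math.pi * math.sin(i))
a_au = a_km / AU_KM
d_pc = a_au / (a_mas * 1e-3)   # d [pc] = a [AU] / a [arcsec]
print(f"a ~ {a_au:.4f} AU, d ~ {d_pc:.2f} pc, pi_orb ~ {1e3 / d_pc:.2f} mas")
```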
FT list main-sequence model linear diameters for the two $\iota$~Peg
components as 1.3 and 0.9 R$_{\sun}$ respectively (FT). At a distance
of approximately 11.5 pc this corresponds to apparent angular
diameters of 1.0 and 0.7 mas for the primary and secondary components
respectively. We have fit for the uniform-disk angular diameter for
both components as a part of the orbit estimation, and find best fit
apparent diameters of 0.98 $\pm$ 0.05 and 0.70 $\pm$ 0.10 mas.
Because we have limited spatial frequency coverage in our data,
following Mozurkewich et al.~(1991) and Quirrenbach et al.~(1996) we
have estimated the limb-darkened diameters of the components from a
correction to the uniform-disk diameter based on the solar
limb-darkening at 2 $\mu$m given by Allen (1982). The limb-darkened
diameters for the primary and secondary components are 1.0 $\pm$ 0.05
and 0.71 $\pm$ 0.10 mas respectively. For both the primary and
secondary components our fits for apparent diameter are in good
agreement with main-sequence model diameters.
The observed K-magnitude of the $\iota$~Peg system (2.623 $\pm$ 0.016
-- \cite{Carrasco91}, 2.656 $\pm$ 0.002 -- \cite{Bouchet91}) and our
estimates of the distance and relative K-photometry (Table
\ref{tab:orbit}) of the system allow the determination of the
absolute magnitude of both components separately. Using the Bouchet
et al.~(1991) K-photometry we obtain M$_{K}$ values of 2.574 $\pm$
0.025 and 4.182 $\pm$ 0.030 for the primary and secondary components
respectively. Both of these M$_{K}$ values are consistent (within
quoted scatter) with the empirical mass-luminosity relation for nearby
low-mass, main-sequence stars given by Henry \& McCarthy (1992, 1993).
In particular, our M$_{K}$ value for the primary is 0.010 mag brighter
than the mass-luminosity prediction (\cite{Henry92}), while the 4.18
M$_{K}$ value for the secondary is roughly 0.28 magnitudes dimmer than
the prediction (\cite{Henry93}). Both values are well within the
quoted scatter of the mass-luminosity models. A second check on the
absolute K-magnitude estimates can be extracted from the model
calculations of Bertelli et al.~(1994), who predict absolute
K-magnitudes of 2.616 $\pm$ 0.048 and 4.254 $\pm$ 0.039 for our
estimated primary and secondary masses respectively for main-sequence
stars with solar-type abundances at an age of 1.7 $\pm$ 0.8 $\times$
10$^8$ yr (\cite{Lyubimkov91}).
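The decomposition of the system light into component absolute magnitudes is a short calculation; this sketch uses the Bouchet et al.~K photometry and the fitted $\Delta$K and distance quoted above:

```python
import math

K_sys = 2.656          # Bouchet et al. system K magnitude
dK = 1.610             # fitted K-band magnitude difference (Table 4)
d_pc = 11.51           # orbital-parallax distance [pc]

dist_mod = 5 * math.log10(d_pc / 10.0)
MK_sys = K_sys - dist_mod
# Split the combined light using the flux ratio implied by dK
f_ratio = 10 ** (-0.4 * dK)                   # F_secondary / F_primary
MK1 = MK_sys + 2.5 * math.log10(1 + f_ratio)  # primary
MK2 = MK1 + dK                                # secondary
print(f"M_K(primary) ~ {MK1:.3f}, M_K(secondary) ~ {MK2:.3f}")
```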
\begin{table}
\begin{center}
\begin{small}
\begin{tabular}{|c|c|c|}
\hline
Physical & Primary & Secondary \\
Parameter & Component & Component \\
\hline \hline
a (10$^{-2}$ AU) & 4.54 $\pm$ 0.03 (0.03/0.0002) & 7.35 $\pm$ 0.03 (0.03/0.0003) \\
Mass (M$_{\sun}$)& 1.326 $\pm$ 0.016 (0.016/0.0001) & 0.819 $\pm$ 0.009 (0.009/0.0001) \\
Sp Type (FT) & F5V & G8V \\
Model Diameter (mas) & 1.0 & 0.7 \\
UD Fit Diameter (mas)& 0.98 $\pm$ 0.05 (0.01/0.05) & 0.70 $\pm$ 0.10 (0.03/0.10) \\
LD Fit Diameter (mas)& 1.0 $\pm$ 0.05 (0.01/0.05) & 0.71 $\pm$ 0.10 (0.03/0.10) \\
\cline{2-3}
System Distance (pc) & \multicolumn{2}{c|}{11.51 $\pm$ 0.13 (0.05/0.12)} \\
$\pi_{orb}$ (mas) & \multicolumn{2}{c|}{86.91 $\pm$ 1.0 (0.34/0.94)} \\
\cline{2-3}
M$_K$ (mag) & 2.574 $\pm$ 0.025 (0.010/0.024) & 4.182 $\pm$ 0.030 (0.019/0.028) \\
\hline
\end{tabular}
\end{small}
\caption{Physical Parameters for $\iota$~Peg. Summarized here are the
physical parameters for the $\iota$~Peg system as derived from the
orbital parameters in Table \ref{tab:orbit}. As for our PTI-derived
orbital parameters we have quoted both total error and separate
contributions from statistical and systematic sources (given as
$\sigma_{stat}$/$\sigma_{sys}$).
\label{tab:physics}}
\end{center}
\end{table}
\section{Eclipse Search}
\label{sec:eclipses}
A critical test of our visual orbit model is a high-precision
photometric search for eclipses in $\iota$~Peg. Combined with our
visual orbit (Table \ref{tab:orbit}), our measured diameters (Table
\ref{tab:physics}) imply an apparent limb-to-limb separation at
conjunction of 0.151 $\pm$ 0.069 mas (using our limb-darkened diameter
estimates). Our visual orbit and fit diameters do not favor the FT
conjecture of possible eclipses in the $\iota$~Peg system.
Conversely, were the inclination of the orbit near 90$^\circ$, there
would be significant primary eclipses with a duration of a few hours
(6.8 hr for $i$ = 90$^\circ$ -- FT), and as large as 0.6 mag in
V-band.
Several individuals have searched for signs of eclipses in the
$\iota$~Peg system. In 1997 Van Buren, with the 60'' telescope at
Palomar, and one of us (C.D.K.), at the Robinson Rooftop
Observatory at Caltech in Pasadena (\cite{Koresko97}), searched for
eclipses during primary and secondary eclipse opportunities
respectively. Both searches resulted in non-detections at about the
0.1 mag level.
More comprehensive and sensitive than the Southern California searches
has been the program conducted by the Automated Astronomy Group at
Tennessee State University. $\iota$~Peg was observed photometrically
in 1984 with the Phoenix-10 automatic photoelectric telescope (APT) in
Phoenix, AZ, and again in 1997-98 with the Vanderbilt/Tennessee State
16-inch APT at Fairborn Observatory near Washington Camp, AZ, in order
to search for possible eclipses suggested by FT. Both telescopes
observed $\iota$~Peg once per night through a Johnson V filter with
respect to the comparison star HR 8441 (HD 210210, F1 IV) in the
sequence C,V,C,V,C,V,C, where C is the comparison star and V is
$\iota$~Peg. Three differential magnitudes (in the sense V-C) were
computed from each nightly sequence, corrected for differential
extinction, and transformed to the Johnson system. The three
differential magnitudes from each sequence were then averaged together
and treated as single observations thereafter. Because of the lack of
accurate standardization in the Phoenix-10 data set, a -0.027 mag
correction was added to each observation to bring those data in line
with the 16-inch observations. The observations are summarized in
Table \ref{tab:photometry}. Column 4 gives the standard deviation of
a single nightly observation from the mean of the entire data set and
represents a measure of the precision of the observations. Further
details on the telescopes, data acquisition, reductions, and quality
control can be found in Young et al.~(1991) and Henry (1995a,b).
\begin{table}
\begin{center}
\begin{small}
\begin{tabular}{|c|c|c|c|}
\hline
APT & JD Range & \# Obs. & Std.~Dev. \\
& (+2400000) & & (mag) \\
\hline
10-inch & 45703 -- 46065 & 78 & 0.0109 \\
16-inch & 50718 -- 50829 & 66 & 0.0032 \\
\hline
\end{tabular}
\end{small}
\caption{Summary of APT Photometry on $\iota$~Peg.
\label{tab:photometry}}
\end{center}
\end{table}
The photometric observations summarized in Table \ref{tab:photometry}
are plotted in Figure \ref{fig:iPg_phot} against orbital phase of the
binary computed from the FT-defined T$_0$ and period. For
inclinations allowing eclipses of the two components, the phases of
conjunction coinciding with primary and secondary eclipse
opportunities are 0.25 and 0.75 respectively. FT estimated the total
duration of a central eclipse ($i$ = 90$^\circ$) to be roughly 6.8
hours or 0.027 phase units. Our photometric observations exclude this
possibility and show no evidence for any partial eclipse to a
precision of around 0.003 mag. The time of conjunction is uncertain
by no more than a few minutes, and gaps in the data around the time of
conjunction are no larger than about 0.005 phase units (1.2 hours).
Thus, the possibility of all but the briefest of grazing eclipses is
excluded by the APT photometry. In particular, the two points
nearest the primary conjunction opportunity (at -1.29 and +1.22 hours
relative to the predicted conjunction) constrain $|90-i|$
to be greater than 4.07$^\circ$ and 4.10$^\circ$ respectively at
greater than 99\% confidence, based on the model diameters and M$_v$
estimates of 3.4 and 5.8 for the primary and secondary components
respectively.
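The $\sim$7-hour central-eclipse duration used in this argument can be checked with a simple chord calculation (an illustrative estimate with the model diameters; FT's slightly larger 6.8 hr presumably reflects their adopted radii):

```python
import math

P_hr = 10.213033 * 24.0          # orbital period [hours]
a_mas = 10.33                    # apparent semi-major axis [mas]
sum_radii = 0.5 * (1.0 + 0.7)    # sum of component radii [mas]

# For i = 90 deg the projected separation is a*|sin(phase angle)|;
# eclipse lasts while this is below the sum of the radii.
half_angle = math.asin(sum_radii / a_mas)          # [rad]
duration_hr = P_hr * (2 * half_angle) / (2 * math.pi)
print(f"central-eclipse duration ~ {duration_hr:.1f} hr")
```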
\begin{figure}
\epsscale{0.8}
\plotone{iPg.phot.eps}
\caption{Photometric Observations of $\iota$~Peg. Differential
photometric observations of $\iota$~Peg from the Phoenix-10 APT (open
triangles) and the Vanderbilt/Tennessee State University 16-inch APT
(filled triangles) plotted against orbital phase of the binary
computed following FT. Phase 0.25 represents a time of conjunction
with the secondary in front (primary eclipse opportunity). Inset we
show a closeup of the data around the primary eclipse opportunity.
(We have added a second horizontal scale relative to the eclipse
opportunity in units of hours; a full eclipse in the
$\iota$~Peg system would be roughly 7 hours in duration.) The
photometric observations exclude the possibility of all but the
briefest of grazing eclipses in the $\iota$~Peg system.
\label{fig:iPg_phot}}
\end{figure}
The components of most close binaries with orbital periods less than
about one month rotate synchronously with the orbital period due to
tidal action between the components (e.g.~\cite{Fekel89}). Such
synchronous rotation is expected in $\iota$~Peg and is confirmed by
the rotational broadening measurements of FT and Gray (1984)
(c.f.~\cite{Wolff97}). If the G8V secondary, which is much more
convective than the F5V primary, is rotating synchronously, it would
be expected to be photometrically variable on the orbital period at
the level of a few percent due to starspot activity (\cite{Henry99}).
In fact, $\iota$~Peg is listed as a suspected variable star by Petit
(1990), who reports variability at the 0.02 mag level in V. FT
estimate that the secondary is roughly 2.7 mag fainter in the V band
than the primary, so any apparent photometric variability of the
secondary component will be diluted by a factor of about 12 by the
primary component.
In order to search for this possible photometric variability in
$\iota$~Peg, we performed a periodogram analysis of the 16-inch APT
data. The analysis reveals a photometric period that is identical,
within its uncertainty, to the spectroscopic period, a result that is
consistent with the assumption of synchronous rotation. Likewise, the
amplitude of 0.0037 mag, scaled by a factor of 12, results in a 4.4\%
variation, similar to the variability expected from rotational
modulation of the spotted surface of the secondary diluted by the
emission of the primary. Based on these results, we conclude that
$\iota$~Peg is a low-amplitude variable star.
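The dilution argument above can be sketched numerically (following the text's approximation that small magnitude amplitudes are roughly fractional flux variations):

```python
dV = 2.7                       # FT V-band magnitude difference
amp_obs = 0.0037               # observed photometric amplitude [mag]

dilution = 10 ** (0.4 * dV)    # primary/secondary V flux ratio (~12)
# Intrinsic variability of the secondary alone, undiluted by the primary
amp_intrinsic = amp_obs * dilution
print(f"dilution ~ {dilution:.1f}, intrinsic variation ~ {100 * amp_intrinsic:.1f}%")
```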
\section{Summary}
We have presented the visual orbit for the double-lined binary system
$\iota$~Pegasi, and derived the physical parameters of the system by
combining it with the earlier spectroscopic orbit of Fekel and Tomkin.
The derived physical parameters of the two young stars in $\iota$~Peg
are in reasonable agreement with the results of other studies of the
system, and theoretical expectations for stars of these types. As noted
by FT, the $\iota$~Peg system is nearly eclipsing; because our model
visual orbit is so close to producing observable eclipses we have
further presented high-precision photometric data that are consistent
with our visual orbit model.
$\iota$~Peg represents a prototype of the binary system that PTI is
well-suited to measure; the large magnitude difference between
components in the visible is significantly mitigated in the
near-infrared, making the accurate determination of the system
parameters feasible.
\acknowledgements Part of the work described in this paper was
performed at the Jet Propulsion Laboratory, California Institute of
Technology under contract with the National Aeronautics and Space
Administration. Interferometer data was obtained at the Palomar
Observatory using the NASA Palomar Testbed Interferometer, supported
by NASA contracts to the Jet Propulsion Laboratory.
Automated astronomy at TSU has been supported for several years by the
National Aeronautics and Space Administration and by the National Science
Foundation, most recently through NASA grants NCC2-977 and NCC5-228 (which
supports TSU's Center for Automated Space Science) and NSF grants HRD-9550561
and HRD-9706268 (which supports TSU's Center for Systems Science Research).
We wish to thank the anonymous referee for his many positive
contributions to the accuracy and quality of this manuscript, and his
forbearance in the review process.
This research has made use of the Simbad database, operated at CDS,
Strasbourg, France.
\section{Introduction}
\label{sec:intro}
In Minkowski space the only Killing vector that is timelike
everywhere is the time translation Killing vector
$\partial /\partial t $. For instance, in four dimensional Minkowski
space, the Killing vector
$\partial /\partial t + \Omega \partial /\partial \phi $ that
describes a frame rotating with angular velocity $\Omega $ becomes
spacelike outside the velocity of light cylinder
$r\sin \theta =1/\Omega $.
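This is a one-line computation: with the flat metric $ds^2 = -dt^2 + dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\, d\phi^2$, the norm of $\xi = \partial/\partial t + \Omega\, \partial/\partial\phi$ is

```latex
\xi^{\mu}\xi_{\mu}
  = g_{tt} + 2\,\Omega\, g_{t\phi} + \Omega^{2} g_{\phi\phi}
  = -1 + \Omega^{2} r^{2} \sin^{2}\theta ,
```

which vanishes on the cylinder $r\sin\theta = 1/\Omega$ and is positive (spacelike) outside it.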
This raises problems with the
thermodynamic interpretation of the Kerr solution: a Kerr solution
with non zero rotation parameter $a$ cannot be in equilibrium with
thermal radiation in infinite space because the radiation would have
to co-rotate with the black hole and so would have to move faster
than light outside the velocity of light cylinder. The best one can
do is consider the rather artificial case of equilibrium with
rotating radiation in a box smaller than the velocity of light
radius. This problem is inextricably linked with
the fact that the Hartle-Hawking state for a Kerr solution does not
exist, as proved in \cite{Ka_Wa}. The absence of the Hartle-Hawking
state has a number of important ramifications, details of which are
discussed in \cite{Ka_Wa}.
On the other hand, even a non rotating Schwarzschild black
hole has to be placed in a finite sized box because otherwise the
thermal radiation would have infinite energy and would collapse
on itself. There is also the problem that the equilibrium is
unstable because the specific heat is negative.
\bigskip
It is now well known \cite{HP}, \cite{Wit} that the specific heat of large
Schwarzschild anti de Sitter black holes is positive and that the red
shift in anti-de Sitter spaces acts like an effective box to remove the
infinite energy problem. What was less well known except in the rather
special three dimensional case was that anti-de Sitter boundary
conditions could also remove the faster than light problem
for rotating black holes. That is, in anti-de Sitter space there are
Killing vectors that are rotating with respect to the standard time
translation Killing vector and yet are timelike everywhere. This
means that one can have rotating black holes that are in equilibrium
with rotating thermal radiation all the way out to infinity.
One would expect \cite{M;G;W}, \cite{Wit} the partition function of this
black hole to be related to the partition function of a conformal
field theory in a rotating Einstein universe on the boundary of
the anti-de Sitter space. It is the aim of this paper to examine
this relationship and draw some surprising conclusions.
\bigskip
Of particular interest is the behaviour in the limiting case in
which the rotational velocity in the Einstein universe at infinity
approaches the speed of light. We find that the actions of the
Kerr-AdS solutions in four and five dimensions
have similar divergences at the critical angular velocity to the partition
functions of conformal field theories in rotating Einstein universes of one
dimension lower. This is like the behaviour of the three dimensional
rotating anti-de Sitter black holes and the corresponding
conformal field theory on the two dimensional Einstein universe or cylinder.
There is however an important
difference: in three dimensions one calculates the actions of the BTZ black
holes relative to a reference background that is the $M=0$ BTZ black
hole. Had one used three dimensional anti-de Sitter space as the
reference background, one would have had an extra term in the action
which would have diverged as the critical angular velocity was
reached.
On the conformal theory side, this choice of reference
background is reflected in a freedom to choose the vacuum energy.
However, in higher dimensions there is no analogue of the $M=0$ BTZ
black hole to use as a reference background. One therefore has to use
anti-de Sitter space itself as the reference background. Similarly,
there isn't a freedom to choose the vacuum energy in the conformal
field theory. Any mismatch between the reference background for
anti-de Sitter black holes and the vacuum energy of the conformal
field theory will become unimportant in the high temperature limit
for non rotating black holes or the finite temperature but critical
angular velocity case. Thus it might be that the black hole/thermal
conformal field theory correspondence is valid only in those
limits. In that case, maybe we shouldn't believe that the large
$N$ Yang Mills theory in the Einstein universe has a phase transition.
\bigskip
In the $1+1$ dimensional boundary of three dimensional anti-de Sitter space,
massless particles move to the left or right at the speed of light. The
critical angular velocity corresponds to all the particles moving in the same
direction. If the temperature is scaled to zero as the angular velocity
approaches its critical value, the energy remains finite and the system
approaches a BPS state.
In higher dimensional Einstein universes however particles can move in
transverse directions as well as in the rotation direction or its opposite. At
zero angular velocity, the velocity distribution of thermal particles is
isotropic but as the angular velocity is increased the velocity distribution
becomes peaked in the rotation direction. When the rotational velocity reaches
the speed of light, the particles would have to be moving exclusively in the
rotation direction. This is impossible for particles of finite energy. Thus
rotating Einstein universes of dimension greater than two cannot
approach a finite energy BPS state as the
angular velocity approaches the critical value for rotation at the
speed of light.
Corresponding to this, we shall show that four and
five dimensional Kerr-AdS solutions do not approach a BPS
state as the angular velocity approaches the critical value,
unlike the three dimensional BTZ black hole. Nevertheless critical
angular velocity may be of interest because one might expect that
in this limit super Yang-Mills would behave like a free theory. We
postpone to a further paper the question of whether this removes
the apparent discrepancy between the gravitational and Yang Mills
entropies.
We should mention that
critical limits on rotation have recently been discussed in the
context of black three branes in type IIB supergravity \cite{gubs}:
rotating branes are found to be stable only up to a critical value of
the angular momentum density, beyond which the specific heat becomes
negative. However, our critical limit is different. It
corresponds not to a thermodynamic
instability, but rather to a Bose condensation effect in the boundary
conformal field theory.
\bigskip
In section two we calculate the partition function for conformal
invariant free fields in rotating Einstein universes of dimension two,
three and four in the critical angular velocity limit. In sections
three, four and five we calculate the entropy and actions for
rotating anti-de Sitter black holes in the
corresponding dimensions and find agreement with the conformal
field in the behaviour near the critical angular velocity.
The metric for
rotating anti-de Sitter black holes
in dimensions higher than four was not previously known. Our solutions
have other interesting applications, particularly when regarded as
solutions of gauge supergravity in five dimensions, which we will
discuss elsewhere \cite{mmt}.
\section{Conformally invariant fields in rotating Einstein universes}
The Maldacena conjecture \cite{M;G;W}, \cite{Wit}
implies that the thermodynamics of quantum gravity
with a negative cosmological constant can be modelled by the large $N$
thermodynamics of quantum field theory.
We are interested here in probing the
correspondence in the limit that the boundary is rotating at the speed
of light; that is, we want to study the large $N$ thermodynamics of
conformal field theories in an Einstein universe rotating at the speed
of light.
The details of the boundary conformal field theory ultimately
depend on the details of the bulk supergravity (or string) theory, but
generic features such as the divergence of the entropy in this
critical limit should be independent of the precise features of the
theory. Thus we are led to making the following simplification:
instead of considering, for example, the large $N$ limit of ${\cal
N}=4$ SYM in four dimensions we can just look at the behaviour of
conformal scalar fields in a rotating Einstein universe. We find that
this does indeed give us generic thermodynamic features at high
temperature which agree with those found from the bulk theory.
To go further than this, we would have to embed the rotating black hole
solutions within a theory for which we know the corresponding
conformal field theory. For instance, we could embed the five
dimensional anti-de Sitter Kerr black holes into IIB supergravity in
ten dimensions; we then know that the corresponding conformal field
theory is the large $N$ limit of ${\cal N}=4$ SYM.
However, since we can't calculate
quantities in the large $N$ limit of the latter, to obtain the
subleading behaviour of the partition function would require some
approximations or models such as those used in the discussion of
rotating three branes in \cite{gubs}. It would be interesting to
show that the perturbative SYM calculation gives a discrepancy of
$4/3$ in the entropy as one expects from the results of
\cite{Gu_Kl_Pe}.
Of course in two dimensions we can do better than this:
the two-dimensional conformal field theory is well understood in the
context of an old framework \cite{Br_He}, where the correspondence between
bulk and boundary is effectively provided by the modular invariance of
the boundary conformal field theory \cite{Cardy}, \cite{Strominger97}.
In recent months, the CFT
has been discussed in some detail, for example in \cite{Ma_St}, and one
should be able to obtain the subleading dependences of the partition
function on the angular velocity $\Omega$. We leave this
issue to future work.
It is interesting to note here that there is no equivalent of the zero mass
BTZ black hole in higher dimensions. Since the correspondence between
the bulk theory and the boundary conformal field theory is clearest
when one takes the background to be the BTZ black hole, the
correspondence between the conformal field theory and supergravity in
the anti-de Sitter background may only be approximate in higher
dimensions, valid for high temperature. This is one reason why it is
useful to investigate what happens in the critical angular
velocity limit.
\bigskip
Let us start with an analysis of conformal fields in a two-dimensional
rotating Einstein universe; the metric on a cylinder is
\begin{equation}
ds^2 = - dT^2 + d\Phi^2,
\end{equation}
where we need to identify $\Phi \sim \Phi + \beta \Omega$, and both
the inverse temperature $\beta$ and the angular velocity $\Omega$ are
dimensionless. Now consider modes of a conformally coupled scalar
field, propagating in this background; for harmonic modes, the
frequency $\omega$ is equal in magnitude to the angular momentum
quantum number $L$. So we can write the partition function for
conformally invariant scalar fields as
\begin{equation}
\ln {\cal Z} = - \sum \ln \left ( 1 - e^{-\beta (\omega - L
\Omega)} \right ) - \sum \ln \left (1 - e^{-\beta (\omega + L
\Omega)} \right ),
\end{equation}
where the first term counts left moving modes and the second term
counts right moving modes. The partition
function is manifestly singular as one takes the limit $\Omega
\rightarrow \pm 1$; in this limit, all the particles rotate in one
direction. Provided that $\beta$ is small we
can approximate the summation by an integral so that
\begin{equation}
\ln {\cal{Z}} \approx \frac{\pi^2}{6 \beta (1-\Omega^2)},
\end{equation}
which agrees with the high temperature result found in the next
section (\ref{dim_act})
up to a factor and a scale $l$. Note that the form of this result
could also be derived by requiring conformal invariance in the high
temperature limit.
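This approximation can be checked numerically. The following sketch is our own check rather than part of the original derivation; the mode conventions (a single conformal scalar with left- and right-moving modes $L = 1, 2, \ldots$) are an assumption, and with them the integral approximation gives $\pi^2/3$ for the scaled sum, consistent with the quoted result up to the overall factor mentioned above.

```python
import math

def ln_Z_2d(beta, omega):
    """Direct mode sum for a conformal scalar on the rotating cylinder.

    Left movers contribute -ln(1 - e^{-beta L (1 - Omega)}) and right
    movers -ln(1 - e^{-beta L (1 + Omega)}), for L = 1, 2, ...
    """
    total, L = 0.0, 1
    while True:
        x = beta * L * (1.0 - omega)
        if x > 40.0:          # remaining terms are exponentially negligible
            break
        total -= math.log1p(-math.exp(-x))
        total -= math.log1p(-math.exp(-beta * L * (1.0 + omega)))
        L += 1
    return total

# beta * (1 - Omega^2) * ln Z approaches a constant (pi^2/3 in these
# conventions) as beta -> 0, exhibiting the 1/(beta (1 - Omega^2)) divergence.
for beta, omega in [(0.01, 0.5), (0.02, 0.3)]:
    print(beta * (1.0 - omega**2) * ln_Z_2d(beta, omega))
```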
\bigskip
Let us now consider the conformal field theory in three dimensions;
a hypersurface of constant large radius in the four-dimensional
anti-de Sitter Kerr metric has an induced metric proportional to
that of a three-dimensional Einstein universe
\begin{equation}
ds^2 = -dT^2 + d\Theta^2 + \sin^2\Theta d\Phi^2,
\end{equation}
where $\Phi$ must be identified modulo $\beta
\Omega$ with $\beta$ and $\Omega$ dimensionless.
Now consider a conformally
coupled scalar field propagating in this background: the field
equation for a harmonic scalar is
\begin{equation}
\left (\nabla - \frac{R_g}{8} \right ) \varphi =
\left ( \nabla - \frac{1}{4} \right ) \varphi = 0,
\end{equation}
where $\nabla$ is the d'Alembertian and $R_{g}$ is the Ricci scalar.
Modes of frequency $\omega$ satisfy the constraint
\begin{equation}
\omega^2 = L(L+1) + \frac{1}{4} = (L + \frac{1}{2})^2,
\end{equation}
where $L$ is the angular momentum quantum number. Then the partition
function can be written as
\begin{equation}
\ln {\cal Z} = - \sum_{L=0}^{\infty} \sum_{m=-L}^{L}
\ln \left ( 1 - e^{-\beta(\omega - m\Omega)}
\right). \label{par_fun}
\end{equation}
For small $\beta$ we can approximate this summation as the integral
\begin{equation}
\ln {\cal Z} \approx - \int_{0}^{\infty} dx_{L} \int_{-x_{L}}^{x_{L}}
dx_{M} \ln \left ( 1 - e^{-\beta(x_{L} -\Omega x_{M})}
\right )= \frac{1}{\beta^2} \int_{0}^{\infty} dy \int_{-y}^{y} dx
\ln \left ( 1 - e^{-(y -\Omega x)} \right ) \label{sum_fun}
\end{equation}
We are interested in the divergence of the partition function when
$\Omega \rightarrow \pm 1 $; this divergence arises from the modes for
which the frequency is almost equal to $\left | m \right | $.
Of course the frequency can never be quite equal to
$\left | m \right | $,
but for large $m$ the argument of the logarithm in (\ref{par_fun})
becomes very small. So picking out the modes for which $y= \left | x
\right | $ in
(\ref{sum_fun}) we find that the leading order divergence in the
partition function at small $\beta$ is
\begin{equation}
\ln {\cal Z} \approx \frac{\pi^2}{6 \beta^2 (1 - \Omega^2)},
\end{equation}
which agrees in functional form with the limit that we will find for the
bulk action in section four. In the critical limit, all the particles
are rotating at the speed of light in the equatorial plane.
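In fact the double integral in (\ref{sum_fun}) can be evaluated exactly by expanding the logarithm: term by term one finds $\beta^2 \ln {\cal Z} = 2\zeta(3)/(1-\Omega^2)$, so the boundary-mode estimate captures the functional form of the divergence, while the numerical coefficient depends on how the near-boundary modes are counted. A numerical sketch of this check (our own, not part of the original text), with the inner $x$-integral done in closed form in terms of the dilogarithm:

```python
import math

def li2(z):
    """Dilogarithm Li_2(z) for 0 <= z <= 1 (series plus reflection formula)."""
    if z == 1.0:
        return math.pi**2 / 6
    if z > 0.5:
        return math.pi**2 / 6 - math.log(z) * math.log1p(-z) - li2(1.0 - z)
    return sum(z**n / n**2 for n in range(1, 60))

def scaled_lnZ_3d(omega, n=10000):
    """beta^2 ln Z of (sum_fun).  The inner x-integral is done exactly
    (the antiderivative of -ln(1 - e^{-s}) is Li_2(e^{-s})), leaving
    (1/Omega) * Int_0^inf [Li2(e^{-y(1-Om)}) - Li2(e^{-y(1+Om)})] dy,
    evaluated here by the midpoint rule."""
    y_max = 60.0 / (1.0 - omega)
    dy = y_max / n
    acc = 0.0
    for i in range(n):
        y = (i + 0.5) * dy
        acc += li2(math.exp(-y * (1.0 - omega))) - li2(math.exp(-y * (1.0 + omega)))
    return acc * dy / omega

zeta3 = sum(1.0 / k**3 for k in range(1, 4000))
print(scaled_lnZ_3d(0.5) * (1.0 - 0.5**2), 2 * zeta3)  # both ≈ 2.404
```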
\bigskip
The metric of the four-dimensional rotating Einstein universe can be
written as
\begin{equation}
ds^2 = - dT^2 + d\Theta^2 + \sin^2\Theta d\Phi^2 + \cos^2\Theta
d\Psi^2,
\end{equation}
where $\Phi$ and $\Psi$ must be identified modulo $\beta \Omega_1$ and
$\beta \Omega_2$. So far we have only approximated the partition function
for conformally coupled scalar fields in lower-dimensional
rotating Einstein universes. However in \cite{CassidyHawking97} the
thermodynamics of conformally coupled scalars were discussed in detail
for a four-dimensional rotating Einstein universe in the limit in
which one of the angular velocities vanishes.
The general form for the
partition function found in \cite{CassidyHawking97} is quite
complex, but it takes a
simple form when $\beta$ is small: one finds that
\begin{equation}
\ln {\cal Z} \approx \frac{\pi^3}{90 \beta^3 (1 - \Omega^2)}, \label{mjc}
\end{equation}
where $\Omega$ is the angular velocity, which agrees in form with the
bulk result to leading order. In principle we could use the partition
function density given in \cite{CassidyHawking97} to probe the
correspondence between subleading terms.
Let us now try to approximate the partition function for general
angular velocities using the same techniques as before.
Consider a conformally invariant scalar field propagating in this
background; the field equation is
\begin{equation}
\left ( \nabla - \frac{R_g}{6} \right ) \varphi =
\left ( \nabla - 1 \right )\varphi = 0,
\end{equation}
and so modes of the field have frequencies $\omega$ which satisfy
\begin{equation}
\omega^2 = L(L+2) + 1 = (L+1)^2,
\end{equation}
where $L$ is the orbital angular momentum number. Then the partition
function may be written as
\begin{equation}
\ln {\cal Z} = - \sum_{L,m_1,m_2} \ln \left ( 1 - e^{-\beta (\omega - m_1
\Omega_1 - m_2 \Omega_2)} \right),
\end{equation}
where
$m_1$ and $m_2$ are orbital quantum numbers. Suppose that $\Omega_2 = 0$;
then we expect the dominant contribution to the partition function in
the critical angular velocity limit to be from the $m_1 = \pm L$
modes. However there is a constraint on the angular momentum quantum
numbers
\begin{equation}
\left | m_1 \right | + \left | m_2 \right | \le L, \label{rot_con}
\end{equation}
and so we need to set $m_2 = 0$. The dominant contribution to the
partition function at high temperature can be expressed as
\begin{equation}
\ln {\cal Z} \approx - \frac{1}{\beta^3} \int_{0}^{\infty} dx \left [ \ln
\left ( 1 - e^{-(1 + x (1- \Omega))} \right) + \ln
\left ( 1 - e^{-(1 + x (1+ \Omega))} \right) \right ]
= \frac{\pi^2}{6 \beta^3 (1-\Omega^2)}, \nonumber
\end{equation}
which agrees with the result (\ref{mjc}) in functional dependence
although not coefficient.
\bigskip
For general angular velocities we find that the factor
\begin{equation}
\left (L - m_1 \Omega_1 - m_2 \Omega_2 \right )
\end{equation}
only approaches zero in the limit $\Omega_1, \Omega_2 \rightarrow
1$. Thus we expect that there is a divergent contribution to the
partition function only when either or both of $\Omega_1$ and
$\Omega_2$ tend to
one, as we will find when we look at the black hole metric.
Setting $\Omega_1 = \Omega_2 \equiv \Omega$, the dominant contribution to the
partition function will come
from modes for which the bound (\ref{rot_con}) is saturated. Then
we find that
\begin{equation}
\ln {\cal Z} \approx - \frac{1}{\beta^3}
\int_{0}^{\infty} dx x \left [\ln \left ( 1 - e^{-x(1-\Omega)} \right)
+ \ln \left ( 1 - e^{-x(1+\Omega)} \right) \right ]
= \frac{\zeta(3)}{\beta^3 (1 - \Omega^2)^2},
\end{equation}
which has the correct dependence on $\beta$ and $\Omega$ to agree
with the bulk result found in section five.
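The $\zeta(3)$ coefficient originates from the standard integral $\int_0^\infty x \ln(1 - e^{-ax})\, dx = -\zeta(3)/a^2$, obtained by expanding the logarithm and integrating term by term. A quick numerical confirmation (our own sketch, not part of the original text):

```python
import math

def saturated_mode_integral(a, n=100000):
    """Midpoint evaluation of Int_0^inf x ln(1 - e^{-a x}) dx."""
    x_max = 60.0 / a
    dx = x_max / n
    acc = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        acc += x * math.log1p(-math.exp(-a * x))
    return acc * dx

zeta3 = sum(1.0 / k**3 for k in range(1, 4000))
print(saturated_mode_integral(1.0), -zeta3)        # both ≈ -1.202
print(saturated_mode_integral(2.0), -zeta3 / 4.0)  # the 1/a^2 scaling
```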
\section{Rotating black holes in three dimensions}
\subsection{The BTZ black hole}
The Euclidean Einstein action in three dimensions can be written as
\begin{equation}
I_{3} = - \frac{1}{16 \pi } \int d^3x \sqrt{g} \left [ R_g + 2 l^2
\right],
\end{equation}
with the three-dimensional gravitational constant set to one. The
Lorentzian section of the BTZ black
hole solution first discussed in \cite{BanadosTeitelboimZanelli92} is
\begin{equation}
ds^2 = - N^2 dT^2 + \rho^2 (N^{\Phi} dT + d\Phi)^2 +
(\frac{y}{\rho})^2 N^{-2} dy^2,
\end{equation}
where the squared lapse $N^2$, the angular shift $N^{\Phi}$ and the
angular metric $\rho^2$ are given by
\begin{eqnarray}
N^2 &=& (\frac{y l}{\rho})^2 ({y^2 - y_{+}^2}); \nonumber \\
N^{\Phi} = - \frac{j}{2 \rho^2}; && \hspace{5mm} \rho^2 = y^2 +
\frac{1}{2} (m l^{-2} - y_{+}^2),
\end{eqnarray}
with the position of the outer horizon defined by
\begin{equation}
y_{+}^2 = m l^{-2} \sqrt{ 1 - (\frac{jl}{m})^2}.
\end{equation}
Note that in these conventions anti-de Sitter spacetime is the $m=-1$,
$j=0$ solution. Cosmic censorship requires the existence of an event
horizon, which in turn requires either $m= -1$, $j=0$ or $m \ge \left
|j \right |l$. This bound in fact coincides with the supersymmetry
bound: regarded as a solution of the equations of motion of gauged
supergravity with zero gravitini, extreme black holes with $m = \left
| j \right | l$ have one exact supersymmetry. Both the $m=0$ and the
$m=-1$ black holes have two exact supersymmetries. In higher
dimensional anti-de Sitter Kerr black holes
the cosmic censorship bound does not coincide with the
supersymmetry bound.
The temperature of the black hole is given by
\begin{equation}
T_{H} = \frac{\sqrt{2m} l}{2 \pi} \left [ \frac{1 - (\frac{jl}{m})^2}{1 +
\sqrt{1 - (\frac{jl}{m})^2}} \right ]^{1/2}.
\end{equation}
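For illustration, evaluating this formula numerically (the sample parameter values are our own) shows the temperature falling monotonically to zero as the bound $m = \left | j \right | l$ is approached:

```python
import math

def btz_temperature(m, j, l):
    """Hawking temperature of the BTZ black hole, in the conventions above."""
    f = 1.0 - (j * l / m)**2     # f -> 0 at the bound m = |j| l
    return (math.sqrt(2.0 * m) * l / (2.0 * math.pi)) * math.sqrt(f / (1.0 + math.sqrt(f)))

# T_H decreases monotonically and vanishes in the extreme (supersymmetric) limit.
for ratio in (0.0, 0.5, 0.9, 0.999):
    print(ratio, btz_temperature(m=10.0, j=10.0 * ratio, l=1.0))
```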
There has been a great deal of interest recently in the BTZ black
hole; the action was first calculated in
\cite{BanadosTeitelboimZanelli92} and has also been
discussed in \cite{Ma_St}. However, the
action was calculated with respect to the zero mass black hole
background, whilst in the present context we are interested in the
action with respect to anti-de Sitter space itself. The reason for
this is that in higher dimensions there is no analogue of the zero
mass black hole as a background. {\footnote {The
metric for which one replaces the lapse function $(1+ l^2 y^2)$ by
$l^2 y^2$ certainly plays a distinguished r\^{o}le in all dimensions,
since this is the metric that one obtains from branes in the decoupling
limit. It is not however true that this metric is the natural
background for rotating black holes in dimensions higher than
three; in the high temperature limit, however,
the distinction between the backgrounds only affects subleading
contributions to the action.}}
To calculate the action of the rotating black hole one first needs to
analytically continue both $t \rightarrow i \tau$ and $j \rightarrow
-i \bar{j}$. Using the Euclidean section one finds the action as a
function of $m$, $l$ and $\bar{j}$. The physical result is then
obtained by analytically continuing the angular momentum parameter.
Taking the background to be anti-de Sitter space we then
find that the Euclidean action (for $m \ge 0$) is given by
\begin{equation}
I_3 = - \frac{\pi}{8 \sqrt{2m} l } \left [ \frac{1 + \sqrt{f}}{f}
\right ]^{\frac{1}{2}} \left [ 3m \sqrt{f} - (2 + m) \right ],
\end{equation}
where $f = 1 - (jl/m)^2$. This action diverges in general
as $f$ approaches zero, i.e. as we approach the cosmological and
supersymmetry bound. One would expect the action to diverge to
positive infinity in this limit; from the gravitational instanton point of
view, this implies that there is zero probability for anti-de Sitter
spacetime to decay into a supersymmetric BTZ black hole.
It is straightforward to show that the energy $\cal{M}$, angular
momentum $J$, angular velocity $\Omega$ and entropy $S$ are given by
\begin{eqnarray}
{\cal{M}} = \frac{1}{8} (m+1); && \hspace{5mm} J = \frac{j}{8};
\label{mass_btz} \\
S = \frac{1}{2} \pi \rho(y_{+}); && \hspace{5mm} \Omega = - \frac{j}{2
\rho^2(y_{+})}. \nonumber
\end{eqnarray}
Note that the zero of energy is defined with respect to the
anti-de Sitter space rather than the $m=0$ black hole.
\bigskip
The asymptotic form of the Euclidean section of the BTZ metric is
\begin{equation}
ds^2 = y^2 l^2 d\tau^2 + y^2 d \Phi^2 + \frac{dy^2}{y^2 l^2}.
\end{equation}
Regularity of the Euclidean section at $y = y_{+}$ requires that
we identify $\tau \sim \tau + \beta$ and $\Phi \sim \Phi +
i \beta \Omega$, where $\beta$ is the inverse temperature. The latter
identification is necessary because the surface $y = y_{+}$ is a fixed
point set of the Killing vector
\begin{equation}
k = \partial_{\tau} + i \Omega \partial_{\Phi}.
\end{equation}
The net result of these identifications is that after one analytically
continues back to Lorentzian signature one finds that
the boundary at infinity is conformal to an
Einstein universe rotating at angular velocity $\Omega$.
In the limit that $\Omega \rightarrow \pm l$ the surface is
effectively rotating at
the speed of light: this gives the critical angular velocity
limit. Looking back at the
form of the metric for the BTZ black hole we find that this limit
implies that
\begin{equation}
\Omega = - \frac{jl^2}{m (1 + \sqrt{f})} \rightarrow \pm l,
\end{equation}
which in turn requires that $f \rightarrow 0$. Hence in three
dimensions the cosmological and supersymmetry limits coincide with a
critical angular velocity limit. However, the temperature
necessarily vanishes whilst in the conformal field theory we have only
probed the high temperature limit. This suggests that one should be
able to find a more general critical angular velocity limit. This is
indeed the case: if we rewrite the BTZ metric in Kerr form we will be
able to find non-extreme states for which the boundary is rotating at
the speed of light.
It is useful to rescale the time coordinate
so that $\hat{\beta}$ is both finite and dimensionless in the critical
limit
\begin{equation}
\hat{\beta} = \sqrt{f} l \beta \approx \frac{2\pi}{\sqrt{2m}},
\end{equation}
where the latter equality applies for $m$ large.
In this limit of small $\hat{\beta}$ the action for the BTZ black hole
diverges as
\begin{equation}
I_{3} \approx \frac{\pi^2}{8 l \hat{\beta} (1 - \hat{\Omega})},
\label{dim_act}
\end{equation}
where $\hat{\Omega} = l^{-1} \Omega$ is hence dimensionless. We
would need to know the CFT partition function at low temperature
to compare it with this bulk result.
\subsection{Alternative metric for the BTZ black hole}
\noindent
To elucidate the thermodynamic properties of the black hole as one
takes the cosmological and supersymmetric limit
it is useful to rewrite the metric in the alternative form
\begin{equation}
ds^2 = - \frac{\Delta_{r}}{r^2} (dt - \frac{a}{\Xi} d\phi)^2 + \frac{r^2
dr^2}{\Delta_r} + \frac{1}{r^2} (a dt - \frac{1}{\Xi}(r^2 + a^2)
d\phi)^2, \label{alt_met}
\end{equation}
where we define
\begin{equation}
\Delta_r = (r^2 + a^2)(1 + l^2 r^2) - 2 {M} r^2.
\end{equation}
The motivation for writing the metric in this form is that it then
resembles the higher dimensional anti-de Sitter Kerr solutions. We
have chosen the normalisation of the time and angular coordinates so
that the latter has the usual period and the former has norm $r l$ at
spatial infinity. Rewriting the BTZ black hole metric in Kerr-Schild
and Boyer-Lindquist type coordinates was discussed
recently in \cite{Kim} in the context of studying the global structure
of the black hole. Using the coordinate transformations
\begin{eqnarray}
T = t; && \hspace{10mm} \Phi = \phi + a l^2 t; \\
R^2 &=& \frac{1}{\Xi}(r^2 + a^2),
\end{eqnarray}
with $\Xi = 1 - a^2/l^2$, followed by a shifting of the radial
coordinate, we can bring the metric back into the usual BTZ form.
The horizons are defined by the zero points of $\Delta_r$, with the
event horizon being at
\begin{equation}
r_{+}^2 = \frac{1}{2 l^2}(2 {M} - 1 - a^2 l^2) + \frac{1}{2 l^2} \sqrt{
(1 + a^2 l^2 - 2 {M})^2 - 4 a^2 l^2}.
\end{equation}
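As a consistency sketch (the sample parameter values are our own), the quoted root can be checked directly against the definition of $\Delta_r$:

```python
import math

def delta_r(r, M, a, l):
    """Delta_r for the BTZ metric in the Kerr-like form above."""
    return (r**2 + a**2) * (1.0 + l**2 * r**2) - 2.0 * M * r**2

def r_plus(M, a, l):
    """Outer horizon: the larger root of Delta_r viewed as a quadratic in r^2."""
    b = 2.0 * M - 1.0 - a**2 * l**2
    disc = (1.0 + a**2 * l**2 - 2.0 * M)**2 - 4.0 * a**2 * l**2
    return math.sqrt((b + math.sqrt(disc)) / (2.0 * l**2))

# Sample values satisfying the censorship bound M >= (1 + a l)^2 / 2.
M, a, l = 5.0, 0.5, 1.0
print(delta_r(r_plus(M, a, l), M, a, l))  # ≈ 0
```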
Expressed in terms of the variables $({M},a)$ the supersymmetry and
cosmic censorship conditions become
\begin{equation}
{M} \ge \frac{1}{2} (1 + \left | a \right | l)^2, \label{up_ab}
\end{equation}
where the choice of sign of $a$ determines which Killing spinor is
conserved in the BPS limit. In the special case ${M} = 0$
both supersymmetries are
preserved; this is true for all $a$ and not just for the limiting value
$\left | a \right | l \rightarrow 1$ which saturates (\ref{up_ab}).
\bigskip
As is the case in higher dimensions, the ${M}=0$ metric is identified
with three-dimensional anti-de Sitter space. One can calculate the inverse
temperature of the black hole to be
\begin{equation}
\beta_{t} = 4 \pi \frac{r_{+}^2 + a^2}{\Delta_{r}'(r_{+})}.
\end{equation}
In the calculation of the action, only the volume term
contributes; the appropriate background is the ${M}=0$ solution with the
imaginary time coordinate scaled so that the geometry matches on a
hypersurface of large radius
\begin{equation}
\tau \rightarrow ( 1 - \frac{M}{l^2 R^2}) \tau.
\end{equation}
Then the action is given by
\begin{equation}
I_3 = - \frac{\pi (r_{+}^2 + a^2)}{\Xi \Delta_{r}'(r_{+})}
\left [ r_{+}^2 l^2 + a^2 l^2 - {M} \right ].
\end{equation}
In this coordinate system the thermodynamic quantities can be written
as
\begin{eqnarray}
{\cal M}' = \frac{M}{4 \Xi}; & \hspace{5mm} & J' = \frac{M a }{2\Xi^2};
\\
\Omega' = \frac{\Xi a }{(r_{+}^2 + a^2)}; & \hspace{5mm} & S =
\frac{\pi}{2 \Xi r_{+}} (r_{+}^2 + a^2). \nonumber
\end{eqnarray}
We now have to decide how to take the limit of critical angular
velocity in this coordinate system.
The key point is that this coordinate system is not adapted
to the rotating Einstein universe on the boundary. The angular
velocity of the black hole in this coordinate system
vanishes in the limit $al \rightarrow 1$ and is always smaller
in magnitude than $l$.
In both this and following sections, we shall
adhere to the notation that primed thermodynamic quantities are expressed with
respect to the Kerr coordinate system whilst unprimed thermodynamic
quantities are expressed with respect to the Einstein universe
coordinate system. We also assume from here onwards that $a$ is positive.
\bigskip
The angular velocity of the rotating Einstein universe is given by
\begin{equation}
\Omega = \Omega' + al^2; \label{om2}
\end{equation}
that is, we need to define the angular velocity with respect to
the coordinates $(T, \Phi)$. Now suppose that $\Omega =
l (1 - \epsilon)$ where $\epsilon$ is small. This requires that
\begin{equation}
\epsilon = (1 - al) \frac{(r_{+}^2 - a/l)}{(r_{+}^2 + a^2)}.
\end{equation}
For the Einstein universe on the boundary to be rotating at the
critical angular velocity, either $al =1$ or $r_{+}^2 = a/l$.
Note that not only the action but also
the entropy is divergent in the limit $al =1$.
Let us explore the limit $r_{+}^2 = a/l$ first;
it is straightforward to show that this coincides with the
supersymmetry limit. This means that in every supersymmetric black
hole the boundary is effectively rotating at the speed of light, which
is apparent from the limit of $\Omega$ given in (\ref{mass_btz}).
Cosmic censorship requires that $r_{+}^2 \ge a/l$ and hence
the rotating Einstein universe never rotates faster than the speed of
light. Put another way, any BTZ black hole can be in equilibrium with
thermal radiation in infinite space, no matter what its mass is.
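The algebra relating $\epsilon$ to the Kerr-frame quantities can be verified numerically (a consistency sketch; the parameter values are our own):

```python
import math

def epsilon_direct(r_plus, a, l):
    """epsilon = 1 - Omega/l with Omega = Omega' + a l^2, Omega' = Xi a/(r_+^2 + a^2)."""
    xi = 1.0 - a**2 * l**2
    omega = xi * a / (r_plus**2 + a**2) + a * l**2
    return 1.0 - omega / l

def epsilon_formula(r_plus, a, l):
    """The factorised form quoted in the text."""
    return (1.0 - a * l) * (r_plus**2 - a / l) / (r_plus**2 + a**2)

for rp, a in [(2.0, 0.3), (0.8, 0.6), (1.5, 0.9)]:
    assert math.isclose(epsilon_direct(rp, a, 1.0), epsilon_formula(rp, a, 1.0))

# The supersymmetric point r_+^2 = a/l gives epsilon = 0: critical rotation.
print(epsilon_formula(math.sqrt(0.3), 0.3, 1.0))  # ≈ 0
```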
The metric of a supersymmetric BTZ black hole is
\begin{equation}
ds^2 = - l^2 y^2 dT^2 + \frac{jl}{2} ( dT - l^{-1} d\Phi)^2 + y^2
d\Phi^2 + \frac{dy^2}{l^2 y^2}.
\end{equation}
Now starting from the black hole metric (\ref{alt_met}) and using
the coordinate transformations
\begin{eqnarray}
T = t; \hspace{10mm} \Phi = \phi + al^2 t; \\
y^2 = \frac{1}{\Xi} (r^2 - a/l), \nonumber
\end{eqnarray}
the general supersymmetric metric can also be expressed as
\begin{equation}
ds^2 = - l^2 y^2 dT^2 + y^2 d\Phi^2 + \frac{dy^2}{l^2 y^2} +
\frac{al ( 1 + al)^2}{\Xi^2} ( dT - l^{-1} d\Phi)^2.
\end{equation}
Correspondence between the two metrics requires that
\begin{equation}
m = j l = \frac{2 a l}{(1-al)^2}.
\end{equation}
So a supersymmetric black hole has a mass which diverges as we take
the limit $al \rightarrow 1$. This is apparent if we define the
thermodynamic quantities with respect to the
coordinates $(T, \Phi)$. The energy and inverse temperature are
unchanged (${\cal M} \equiv {\cal M}'$ and $\beta_t \equiv \beta$) whilst
\begin{equation}
J = \frac{Ma}{2 \Xi (1 + l^2 r_{+}^2)}.
\end{equation}
So the mass and angular momentum of {\it any} black hole diverge as we take
the limit $al \rightarrow 1$.
It is useful to consider (very non-extreme) black holes which are at high
temperature; this requires that $r_{+} l \gg 1$ and so if we define a
dimensionless inverse temperature
\begin{equation}
\bar{\beta} = l \beta \approx \frac{2 \pi}{l r_{+}},
\end{equation}
we find that the other thermodynamic quantities behave for $al
\rightarrow 1$ as
\begin{eqnarray}
I_{3} = - \frac{\pi^2}{l \Xi \bar{\beta}}; & \hspace{5mm} &
S = \frac{\pi^2}{l \Xi \bar{\beta}} \\
{\cal M} = \frac{\pi^2}{2 l \Xi \bar{\beta}^2}; & \hspace{5mm} &
J = \frac{1}{4 l^2 \Xi}, \nonumber
\end{eqnarray}
where the latter two quantities are defined with respect to the
dimensionless temperature. These thermodynamic quantities are
consistent both with the thermodynamic relations, and with the result
for the partition function of the corresponding conformal field
theory.
\section{Rotating black holes in four dimensions}
\label{sec:Four}
Rotating black holes in four dimensions with asymptotic AdS behaviour
were first constructed by Carter \cite{Carter68} many years ago.
There has been interest in such solutions recently as solitons
of $N=2$ gauged supergravity in four dimensions
\cite{KosteleckyPerry95} and in the
context of topological black holes \cite{CaldarelliKlemm98}. The metric is
\begin{equation}
ds^2 =-\frac{\Delta_r}{\rho^2}
\left[dt - \frac{a}{\Xi} \sin^2\theta d\phi\right]^2
+ \frac{\rho^2}{\Delta_r}dr^2 + \frac{\rho^2}{\Delta_\theta}d\theta^2
+ \frac{\sin^2\theta \Delta_\theta}{\rho^2}\left[adt -
\frac{(r^2+a^2)}{\Xi} d\phi \right]^2
\end{equation}
where
\begin{eqnarray}
\rho^2 & = & r^2 + a^2\cos^2\theta \nonumber \\
\Delta_r & = & (r^2 + a^2)(1+ l^2 r^2) - 2Mr \\
\Delta_\theta & = & 1 - l^2 a^2 \cos^2\theta \nonumber \\
\Xi & = & 1 - l^2 a^2 \nonumber
\end{eqnarray}
The parameter $M$ is related to the mass, $a$ to the angular momentum and
$l^2 = -\Lambda/3$ where $\Lambda$ is the (negative) cosmological
constant. The solution is valid for $a^2 l^2<1$, but becomes singular
in the limit $a^2 l^2=1$ which is the focus of our attention here.
The event horizon is located at $r=r_+$, the largest root of the
polynomial $\Delta_r$. One can define a critical mass parameter
$M_{e}$ such that \cite{CaldarelliKlemm98}
\begin{equation}
M_{e} l = \frac{1}{3 \sqrt{6}} \left ( \sqrt{ (1+a^2 l^2)^2 + 12 a^2 l^2 } + 2
(1+a^2 l^2) \right ) \left (\sqrt{ (1+a^2 l^2)^2 + 12 a^2 l^2 } -
(1+a^2 l^2) \right )^{\frac{1}{2}}. \nonumber
\end{equation}
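The critical mass can be checked by verifying that at $M = M_e$ the polynomial $\Delta_r$ acquires a double root, i.e.\ its minimum just touches zero (a numerical sketch; the sample values $a = 0.5$, $l = 1$ are our own):

```python
import math

def critical_mass(a, l):
    """M_e from the formula of Caldarelli and Klemm quoted above."""
    u = 1.0 + a**2 * l**2
    s = math.sqrt(u**2 + 12.0 * a**2 * l**2)
    return (s + 2.0 * u) * math.sqrt(s - u) / (3.0 * math.sqrt(6.0) * l)

def delta_r(r, M, a, l):
    return (r**2 + a**2) * (1.0 + l**2 * r**2) - 2.0 * M * r

def delta_r_prime(r, M, a, l):
    return 2.0 * r * (1.0 + l**2 * r**2) + 2.0 * l**2 * r * (r**2 + a**2) - 2.0 * M

a, l = 0.5, 1.0
M = critical_mass(a, l)

# Delta_r' is monotonically increasing for r > 0, so bisect for the
# stationary point of Delta_r.
lo, hi = 1e-9, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if delta_r_prime(mid, M, a, l) < 0.0:
        lo = mid
    else:
        hi = mid

print(delta_r(lo, M, a, l))  # ≈ 0: the extremal horizon is a double root
```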
Cosmic censorship requires that $M \ge M_{e}$ with the limiting case
representing an extreme black hole. In the limit of critical angular
velocity, the bound becomes
\begin{equation}
M l \ge \frac{8 }{3 \sqrt{3}}, \label{crit_mas}
\end{equation}
which we will see implies that physical black holes must be at least
as large as the cosmological scale. The angular velocity $\Omega'$ is
\begin{equation}
\Omega' = \frac{\Xi a}{(r_+^2 + a^2)},
\end{equation}
whilst the area of the horizon is
\begin{equation}
{\cal A} = 4\pi \frac{r_+^2 + a^2}{\Xi},
\end{equation}
and the inverse temperature is
\begin{equation}
\beta_{t} = \frac{4\pi (r_+^2 + a^2)}{\Delta_r'(r_+)} =
\frac{4\pi (r_+^2 + a^2)}{r_{+}(3l^2 r_+^2 + (1+ a^2 l^2)
- a^2/r_+^2)}.
\end{equation}
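The second expression for $\beta_t$ follows from the first on using $\Delta_r(r_+) = 0$ to eliminate $M$; a quick numerical consistency check (the sample values are our own, with the mass read off from $\Delta_r(r_+) = 0$):

```python
import math

# Choose a horizon radius and read off the mass from Delta_r(r_+) = 0.
r_p, a, l = 1.5, 0.5, 1.0
M = (r_p**2 + a**2) * (1.0 + l**2 * r_p**2) / (2.0 * r_p)

def delta_r_prime(r):
    """Derivative of Delta_r(r) = (r^2 + a^2)(1 + l^2 r^2) - 2 M r."""
    return 2.0 * r * (1.0 + l**2 * r**2) + 2.0 * l**2 * r * (r**2 + a**2) - 2.0 * M

beta_first = 4.0 * math.pi * (r_p**2 + a**2) / delta_r_prime(r_p)
beta_second = 4.0 * math.pi * (r_p**2 + a**2) / (
    r_p * (3.0 * l**2 * r_p**2 + (1.0 + a**2 * l**2) - a**2 / r_p**2))

print(beta_first, beta_second)  # the two forms of beta_t agree
```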
We should mention here the issue of the normalisation of the
Killing vectors and the rescaling of the associated coordinates.
One choice of normalisation of the Killing vectors ensures that
the associated conserved quantities generate the $SO(3,2)$ algebra:
this was the natural choice in the context of
\cite{KosteleckyPerry95}. Here we have chosen the metric so that the
coordinate $\phi$ has the usual periodicity whilst the norm of the
imaginary time Killing vector at infinity is $l r$. Note that we are
referring to the issue of the normalisation of the Kerr coordinates rather
than to the relative shifts between Kerr and Einstein universe
coordinates.
\bigskip
If we Wick rotate both the time coordinate
and the angular momentum parameter,
\begin{equation}
t = -i\tau \hsp {\rm and} \hsp a = i\alpha,
\end{equation}
then we obtain a real Euclidean metric where the radial coordinate
is greater than the largest root of $\Delta_{r}$.
The surface $r=r_+$ is a bolt of the co-rotating Killing vector,
$\xi = \partial_\tau + i \Omega\partial_\phi$.
However, an identification of imaginary time coordinates
must also include a rotation through $i \beta \Omega$
in $\phi$; that is, we identify the points
\begin{equation}
(\tau, r, \theta, \phi) \sim (\tau + \beta, r, \theta, \phi + i \beta\Omega).
\end{equation}
We now want to calculate the Euclidean action, defined as
\begin{equation}
I_4 = - \frac{1}{16 \pi} \int d^4x \sqrt{g} \left [ R_g + 6 l ^2
\right ],
\end{equation}
where we have set the gravitational constant to one. The choice of
background is made
by noting that the $M=0$ Kerr-AdS metric is actually the AdS metric in
non-standard coordinates \cite{HenneauxTeitelboim85}. If we make the
implicit coordinate transformations
\begin{eqnarray}
T = t & \hspace{5mm} &
\Phi = \phi - a l^2 t \nonumber \\
y \cos{\Theta} & = & r \cos\theta \\
y^2 & = & \frac{1}{\Xi} [ r^2 \Delta_\theta + a^2 \sin^2\theta ]
\nonumber
\end{eqnarray}
this takes the AdS metric,
\begin{equation}
d\tilde{s}^2 = - (1+ l^2 y^2) dT^2 +
\frac{1}{1 + l^2 y^2} dy^2 +
y^2(d\Theta^2 + \sin^2\Theta d\Phi^2),
\end{equation}
to the $M=0$ Kerr-AdS form. To calculate the action we need to match
the induced Euclidean
metrics on a hypersurface of constant radius $R$ by scaling
the background time coordinate as
\begin{equation}
\tau \rightarrow \left(1 - \frac{M}{l^2R^3} \right) \tau,
\end{equation}
and then we find that
\begin{equation}
I_4 = - \frac{\pi (r_+^2 + a^2)^2 (l^2r_+^2 - 1)}
{\Xi r_{+} \Delta_r'(r_+)}
= - \frac{\pi (r_+^2 + a^2)^2 (l^2r_+^2 - 1)}
{(1-l^2 a^2) (3l^2r_+^4 + (1+l^2a^2)r_{+}^2 - a^2)}.
\end{equation}
Features of this result are as follows.
The action is positive for $r_{+}^2 \le 1/l^2$ and negative for larger
$r_{+}$; just as for Schwarzschild anti-de Sitter this indicates
that there is a phase transition as one increases the mass. The action
is clearly divergent for extreme black holes as one would
expect. There is also a divergence when $\Xi \rightarrow 0$; for small
radius black holes the action diverges to positive infinity, whilst
for large radius black holes the action diverges to negative
infinity. In the special case $r_{+}^2 = a/l$ the action is finite and
positive in the limit $al \rightarrow 1$.
\bigskip
Defining the mass and the angular momentum of the black hole as
\begin{equation}
{\cal M}' = \frac{1}{8 \pi} \int \nabla_{a} \delta {\cal T}_{b} dS^{ab}
\hspace{5mm} J' = \frac{1}{4\pi} \int \nabla_{a} \delta {\cal J}_{b}
dS^{ab},
\end{equation}
where ${\cal T}$ and ${\cal J}$ are the generators of time translation and
rotation respectively and one integrates the difference between the
generators in the spacetime and the background over a celestial sphere
at infinity, then we find that
\begin{equation}
{\cal M}' = \frac{M}{\Xi}; \hspace{5mm} J' = \frac{a M}{\Xi^2}.
\end{equation}
Allowing for the differences in normalisation of the generators, these
values agree with those given in \cite{KosteleckyPerry95}.
Using the usual thermodynamic relations we can check that the entropy
is
\begin{equation}
S = \pi \frac{r_+^2+a^2}{\Xi},
\end{equation}
as expected. Note that none of the extreme black holes are
supersymmetric: in four dimensions there needs to be a non-vanishing
electric charge for such black holes, regarded as solutions of a
gauged supergravity theory, to be supersymmetric.
\bigskip
It is well known that small Schwarzschild anti-de Sitter black holes are
thermodynamically unstable in the sense that their heat capacity is
negative, just as for Schwarzschild black holes in flat space. We find
such an instability in dimensions $d \ge 4$ for black holes whose
radius is less than a critical radius which is dimension dependent but
is approximately $1/l$. One can show that small rotating black holes
are also unstable in this sense but only for rotation parameters of
order $0.1 l^{-1}$ or less (again the precise limit is dimension
dependent); larger angular velocities stabilise the black holes.
In three dimensions no anti-de Sitter black holes have negative
specific heat.
\bigskip
To take the limit of critical angular velocity, we need to use the
coordinate system adapted to the rotating Einstein universe. As in
three dimensions the angular velocity of the Einstein
universe is given by
\begin{equation}
\Omega = \Omega' + al^2,
\end{equation}
and is defined with respect to the coordinates $(T, \Phi)$. Defining
$\Omega = l (1 -\epsilon)$ as before we find that
\begin{equation}
\epsilon = ( 1 - al) \frac{(r_{+}^2 -a/l)}{(r_{+}^2 + a^2)}. \label{eps}
\end{equation}
Rotation at the critical angular velocity hence requires that either
$al = 1$ or $ r_{+}^2 = a/l$, as in three dimensions.
Generically the thermodynamics of the
four dimensional black hole are similar to those of the BTZ black
hole, and in fact to those of higher dimensional black holes also.
The $(r_{+}, a)$ plane for a single parameter black hole in a general
dimension is illustrated in Figure~\ref{fig:fig0}.
\begin{figure}
\begin{center}
\leavevmode
\bigskip
\epsfxsize=.45\textwidth
\epsfbox{act.eps}
\bigskip
\caption[]
{Plot of black hole radius $r_{+}$ against $al$. For $r_{+} < 1/l$,
the action is positive, whilst the action blows up along the line
$al = 1$. The lower line denotes the radius of the extreme black hole
$r_{c}$ as a function of $a$.
In the hatched region $r_{c}^2 \le r_{+}^2 < a/l$ the
Einstein universe on the boundary rotates faster than the speed of
light. The action is finite and positive at $r_{+}^2 = a/l$ but infinite
and positive for extreme black holes. In three dimensions the
supersymmetric limit coincides with $r_{+}^2 = a/l$, whilst in five
and higher dimensions the cosmological bound is at $r_c = 0$. }
\label{fig:fig0}
\end{center}
\end{figure}
\bigskip
There is however a novelty compared to the
three dimensional case. The cosmological bound permits solutions with
$r_{+}^2 < a/l$; for example, in the limiting case $al =1$, the
extreme solution has $r_{+}^2 = a^2/3$. To preserve the Lorentzian
signature of the metric we require that $al \le 1$, and so $\Omega' >
l$ in the limit $r_{+}^2 < a/l$. That is, only for sufficiently
large black holes can one have the rotating black hole in
equilibrium with thermal radiation in infinite space. This is
reflected in the fact that the action changes sign at $r_{+} =
1/l$. In the limit of zero curvature - by taking $l$ to zero - we
find, as expected, that there are no rotating
black holes for which there is a Killing vector which is timelike
right out to infinity.
One can rewrite the thermodynamic quantities of the black hole with
respect to the coordinate system $(T, \Phi)$. The temperature is
unchanged ($\beta \equiv \beta_t$) whilst the energy and angular
momentum are given by
\begin{equation}
{\cal M} = {\cal M}'; \hspace{5mm} J = \frac{a M}{ \Xi (1+ l^2 r_{+}^2)}.
\end{equation}
We are particularly interested in the limit of the action as $al
\rightarrow 1$ at high temperature. Defining a dimensionless quantity
\begin{equation}
\bar{\beta} = l \beta \approx \frac{4 \pi}{3 l r_{+}},
\end{equation}
where the latter relation applies in the high temperature limit, the
action diverges as
\begin{equation}
I_{4} = - \frac{8 \pi^3}{27 l^2 \bar{\beta}^2 (1-al)}
\end{equation}
The other thermodynamic quantities behave to leading order as
\begin{eqnarray}
(1 - \Omega/l) = (1 - al); & \hspace{5mm} &
{\cal M} = \frac{16 \pi^3}{27 l^2 \bar{\beta}^3 (1-al)} \\
J = \frac{\pi}{3 l^3 \bar{\beta} (1-al)} & \hspace{5mm} &
S = \frac{8 \pi^3}{9 l^2 \bar{\beta}^2 (1-al)} \nonumber
\end{eqnarray}
The entropy diverges at the critical value, as do the energy and the
angular momentum. Note that the divergence of the angular momentum is
subleading in $\bar{\beta}$.
As we stated in the introduction, there is no sense in which one can
take the energy to be finite in the critical limit. If we take
${\cal{M}}$ to be fixed, then $M$ must approach zero in the limit.
However according to (\ref{crit_mas}) there is no horizon unless the
mass parameter $M$ is of the cosmological scale.
\section{Rotating black holes in five dimensions}
\subsection{Single parameter anti-de Sitter Kerr black holes}
We now consider rotating black holes within a five dimensional anti-de
Sitter background. In five dimensions the rotation group is $SO(4)
\cong SU(2)_{L} \times SU(2)_{R}$. Black holes may be characterised by
two independent projections of the angular momentum vector which may
be denoted as the angular momenta $J_{L}$ and $J_{R}$. This is the
most natural parametrisation when one considers the conformal field
theory describing such states but the usual construction of Kerr
metrics in higher dimensions will use instead two parameters
$J_{\phi}$ and $J_{\psi}$ which we choose such that
\begin{equation}
J_{L,R} = \left ( J_{\phi} \pm J_{\psi} \right ),
\end{equation}
where we express the metric on the three sphere in the form
\begin{equation}
ds^2 = d\theta^2 + \sin^2\theta d\phi^2 + \cos^2\theta d\psi^2.
\end{equation}
The two classes of special cases may be represented by the limits
\begin{eqnarray}
J_{R} = 0 \hspace{2mm} & \Rightarrow & \hspace{2mm} J_{\phi} = J_{\psi};
\\
J_{L} = J_{R} \hspace{2mm} & \Rightarrow & \hspace{2mm} J_{\psi} = 0.
\end{eqnarray}
The former case will be considered in the next subsection.
As for the stationary asymptotically flat solutions constructed by
Myers and Perry \cite{My_Pe}, the single parameter
Kerr anti-de Sitter solution in
$d$ dimensions follows straightforwardly from the four-dimensional
solution. It is convenient to write it in the form
\begin{eqnarray}
ds^2 & = & - \frac{\Delta_r}{\rho^2}(dt - \frac{a}{\Xi}
\sin^2\theta d\phi)^2 +
\frac{\rho^2}{\Delta_r}dr^2 + \frac{\rho^2}{\Delta_\theta}d\theta^2
\nonumber \\
& & + \frac{\Delta_\theta\sin^2\theta}{\rho^2}[adt -
\frac{(r^2+a^2)}{\Xi} d\phi]^2
+ r^2\cos^2\theta d\Omega_{d-4}^2
\end{eqnarray}
where $d\Omega_{d-4}^2$ is the unit round metric on the $(d-4)$ sphere and
\begin{eqnarray}
\Delta_r & = & (r^2+a^2)(1+l^2r^2) - 2M r^{5-d}; \nonumber \\
\Delta_\theta & = & 1 - a^2 l^2 \cos^2\theta; \\
\Xi & = & 1 - a^2 l^2; \nonumber \\
\rho^2 & = & r^2 + a^2\cos^2\theta. \nonumber
\end{eqnarray}
The angular velocity on the horizon in all dimensions is
\begin{equation}
\Omega' = \frac{\Xi a}{(r_{+}^2 + a^2)}.
\end{equation}
The thermodynamics of single parameter solutions are generically
similar in all dimensions. In five dimensions
we can solve explicitly for the horizon position finding that
\begin{equation}
r_{+}^2 = \frac{1}{2l^2} [ \sqrt{(1- a^2 l^2)^2 + 8 M l^2} -
(1+a^2 l^2) ].
\end{equation}
The condition for a horizon to exist
is that $r_{+}$ must be real, which requires that $2 M \ge a^2$.
The volume of the horizon is
\begin{equation}
V = \frac{2 \pi^2}{\Xi} r_{+} (r_{+}^2 + a^2),
\end{equation}
and the inverse temperature is
\begin{equation}
\beta_{t} = 4 \pi \frac{(r_{+}^2 + a^2)}{\Delta_{r}'(r_{+})}
= \frac{2 \pi (r_{+}^2 + a^2)}{r_{+} (2 l^2 r_{+}^2 + 1 +
a^2 l^2)}.
\end{equation}
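The second equality for the inverse temperature can be checked directly by differentiating $\Delta_r$ in five dimensions; an illustrative sympy sketch:

```python
# Symbolic check (illustrative) that beta_t = 4*pi*(r^2+a^2)/Delta_r'(r) reduces
# to the quoted closed form in d = 5, where Delta_r = (r^2+a^2)(1+l^2 r^2) - 2M.
import sympy as sp

r, a, l, M = sp.symbols('r a l M', positive=True)
Delta = (r**2 + a**2) * (1 + l**2 * r**2) - 2 * M
beta = 4 * sp.pi * (r**2 + a**2) / sp.diff(Delta, r)
target = 2 * sp.pi * (r**2 + a**2) / (r * (2 * l**2 * r**2 + 1 + a**2 * l**2))
print(sp.simplify(beta - target))   # 0
```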
It is useful to note that the $M=0$ Kerr anti-de Sitter solution
reduces to the anti-de Sitter background, with points identified in
the angular directions, for all $d$: this follows
from the same coordinate transformation as for the four dimensional
solution. The same coordinate transformation also brings
the $M \neq 0$ solution into a manifestly asymptotically anti-de Sitter form.
In calculating the action the appropriate background
is the $M=0$ solution, with the imaginary time coordinate rescaled so
that the induced metric on a hypersurface of large radius $R$ matches
that of the $M \neq 0$ solution. This requires that we scale
\begin{equation}
\tau \rightarrow (1 - \frac{M}{R^4 l^2}) \tau.
\end{equation}
The volume term in the action is given by
\begin{equation}
I_{5} = - \frac{1}{16 \pi} \int d^5x (R_g + 12 l^2), \label{5d_act}
\end{equation}
(with the gravitational constant equal to one) and
the surface term does not contribute. Evaluating the volume term we find
that the action is given by
\begin{equation}
I_{5} = \frac{\pi^2}{4 \Xi} \frac{(r_{+}^2 + a^2)^2 (1-
l^2 r_{+}^2)}{r_{+} (2l^2 r_{+}^2 + 1 + a^2 l^2)}.
\end{equation}
This action has the same generic features as in the lower dimensional
cases, namely (i) the sign changes at the critical radius $r_{+}^2 =
1/l^2$; (ii) the action diverges as $\Xi \rightarrow 0$ except for black
holes of the critical radius $r_{+}^2 = a/l$.
It is straightforward to show that the mass and angular momentum of
the rotating black hole with respect to the anti-de Sitter background
are given by
\begin{equation}
{\cal M}' = \frac{3 \pi M}{4 \Xi}, \hspace{5mm} J_{\phi}' = \frac{\pi M a
}{2 \Xi^2}.
\end{equation}
Then the usual thermodynamic relations give the entropy of
the black hole as
\begin{equation}
S = \beta ({\cal M} + \Omega J) - I_{5} = \pi^2 \frac{r_{+} (r_{+}^2 +
a^2)}{2 \Xi},
\end{equation}
which is related to the horizon volume in the expected way.
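The "expected way" is the usual quarter-volume law $S = V/4$ in units $G = 1$; a one-line symbolic check of the two quoted expressions:

```python
# Check (illustrative) that S = V/4 for the quoted horizon volume and entropy.
import sympy as sp

r, a, Xi = sp.symbols('r a Xi', positive=True)
V = 2 * sp.pi**2 * r * (r**2 + a**2) / Xi     # horizon volume
S = sp.pi**2 * r * (r**2 + a**2) / (2 * Xi)   # black hole entropy
print(sp.simplify(S - V / 4))                 # 0
```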
\bigskip
It is interesting to note that both the temperature and the entropy
vanish for black holes with horizons at $r_{+} = 0$, even though the
mass and angular momentum can be non-zero. Since a necessary (though
non-sufficient) condition for a black hole to be supersymmetric is
that the temperature vanishes, only states for which the bound $2 M =
a^2$ is saturated could be supersymmetric.
Just as the four-dimensional rotating black holes are solutions of
$N=2$ gauged supergravity in four dimensions, so we can regard the
five-dimensional solutions as solutions of a five
dimensional gauged supergravity theory. However, as in the four
dimensional case, the black holes do not preserve any supersymmetry
for non-zero mass unless they are charged.
One can see this as follows. Supersymmetry requires the existence of a
supercovariantly constant spinor $\epsilon$ satisfying
\begin{equation}
\delta \Psi_{m} = \hat{D}_{m} \epsilon = (\nabla_{m} +
\frac{1}{2} i l \gamma_{m}) \epsilon = 0,
\end{equation}
where $\Psi$ is the gravitino, $\hat{D}$ is the supercovariant
derivative, $\nabla$ is the covariant derivative and $\gamma$ is a
five-dimensional gamma matrix. The integrability condition then
becomes
\begin{equation}
\left [ \hat{D}_{m}, \hat{D}_{n} \right ] \epsilon = 0 \Rightarrow
\left (R_{mnab} \gamma^{ab} + 2 l^2 \gamma_{mn} \right ) \epsilon = 0,
\label{int_con}
\end{equation}
where $a,b$ are tangent space indices. It is straightforward to verify
that all components of the bracketed expression
vanish for the background whilst
for the rotating black hole the integrability conditions reduce to
\begin{equation}
\frac{M}{\Xi} \gamma_{a} \epsilon = 0.
\end{equation}
Hence only in the zero mass black hole - anti de Sitter space itself -
is any supersymmetry preserved. We expect that supersymmetry can be
preserved if we include charges, but leave this as an
issue to be explored elsewhere \cite{mmt}. General static charged
solutions of $N=2$ gauged supergravity in five dimensions have been
discussed recently in \cite{BCS}; one can construct the natural
generalisations to general stationary black holes starting from the
neutral five dimensional stationary solutions given here. One can also
construct solutions for which the horizon is hyperbolic rather than
spherical; such solutions are analogous to those discussed in
\cite{CaldarelliKlemm98}.
\bigskip
Taking the limit of critical angular velocity requires that we
move to the coordinates $(T,\Phi)$ which are adapted to the rotating
Einstein universe. Then letting $\Omega = l (1- \epsilon)$ with
$\epsilon$ defined as in (\ref{eps}) we find that in the critical limit
either $r_{+}^2 = a/l$ or $al = 1$. Since cosmic censorship requires
that $r_{+} \ge 0$ with equality in the extreme limit, we can again
have solutions for which $\Omega > l$ which in turn implies that the
black holes cannot be in equilibrium with radiation right out to
infinity. As in four dimensions the action changes sign at the
critical value $r_{+}^2 = a/l$.
The thermodynamic quantities relative to the coordinate system $(T,
\Phi)$ are $\beta \equiv \beta_t$, ${\cal M} \equiv {\cal M}'$ and
\begin{equation}
J_{\Phi} = \frac{\pi M a }{2 \Xi ( 1 + l^2 r_{+}^2)}.
\end{equation}
In the limit $al \rightarrow 1$ at high temperature such that
\begin{equation}
\bar{\beta} = l \beta \approx \frac{\pi}{ l r_{+}},
\end{equation}
we can express the thermodynamic quantities as
\begin{eqnarray}
I_{5} \approx - \frac{\pi^5}{8 l^3 \Xi \bar{\beta}^3}; & \hspace{5mm}
& {\cal M} \approx \frac{3 \pi^5}{8 l^3 \Xi \bar{\beta}^4};
\label{5d_1a} \\
J_{\Phi} \approx \frac{\pi^3}{2 l^4 \Xi \bar{\beta}^2}; &
\hspace{5mm} & S \approx \frac{\pi^5}{2 l^3 \Xi \bar{\beta}^3}
\nonumber
\end{eqnarray}
where the energy and angular momentum are defined with respect to the
dimensionless temperature. Note that the angular momentum is again
subleading in $\bar{\beta}$ relative to the mass and the action.
The divergence at critical angular velocity is in agreement with that
of the conformal field theory.
\subsection{General five-dimensional AdS-Kerr solution}
The metric for the two parameter five-dimensional rotating black hole
is given by
\begin{eqnarray}
ds^2 &=& - \frac{\Delta}{\rho^2} (dt - \frac{a \sin^2\theta}{\Xi_a}d\phi -
\frac{b \cos^2\theta}{\Xi_b} d\psi)^2 +
\frac{\Delta_{\theta}\sin^2\theta}{\rho^2} (a dt -
\frac{(r^2+a^2)}{\Xi_a} d\phi)^2 \nonumber \\
&& + \frac{\Delta_{\theta}\cos^2\theta}{\rho^2} (b dt -
\frac{(r^2+b^2)}{\Xi_b} d\psi)^2 + \frac{\rho^2}{\Delta} dr^2 +
\frac{\rho^2}{\Delta_{\theta}} d\theta^2 \\
&& + \frac{(1+r^2 l^2)}{r^2 \rho^2}
\left ( ab dt - \frac{b (r^2+a^2) \sin^2\theta}{\Xi_a}d\phi - \frac{a
(r^2 + b^2) \cos^2 \theta}{\Xi_b} d\psi \right )^2, \nonumber
\end{eqnarray}
where
\begin{eqnarray}
\Delta &=& \frac{1}{r^2} (r^2 + a^2) (r^2 + b^2) (1 + r^2 l^2) - 2M;
\nonumber \\
\Delta_{\theta} &=& \left ( 1 - a^2 l^2 \cos^2\theta - b^2 l^2
\sin^2\theta \right ); \\
\rho^2 &=& \left ( r^2 + a^2 \cos^2\theta + b^2 \sin^2\theta \right);
\nonumber \\
\Xi_a &=& (1-a^2 l^2); \hspace{5mm} \Xi_b = (1 -b^2 l^2). \nonumber
\end{eqnarray}
It should be straightforward to construct the metric for
general rotating black holes in anti-de Sitter backgrounds of
higher dimension.
As for the single parameter solution, the $M=0$ metric is anti-de
Sitter space, with points identified in the angular directions.
Using the coordinate transformations
\begin{eqnarray}
\Xi_a y^2 \sin^2\Theta &=& (r^2 + a^2) \sin^2\theta; \nonumber \\
\Xi_b y^2 \cos^2\Theta &=& (r^2 + b^2) \cos^2\theta; \nonumber
\\
\Phi &=& \phi + a l^2 t; \\
\Psi &=& \psi + b l^2 t, \nonumber
\end{eqnarray}
the metric can be brought into a form which is manifestly asymptotic
to anti-de Sitter spacetime. The parameters $a$ and $b$ are
constrained such that $a^2, b^2 \le l^{-2}$ and the metric is only
singular if either or both parameters saturate this limit.
Defining the action as in (\ref{5d_act}) we find that
\begin{equation}
I_5 = - \frac{\pi \beta l^2}{4 \Xi_a \Xi_b} \left [ (r_{+}^2 + a^2)(r_{+}^2
+ b^2) - M l^{-2} \right ],
\end{equation}
where the inverse temperature is given by
\begin{equation}
\beta_t =
\frac{4 \pi (r_{+}^2 + a^2) (r_{+}^2 + b^2)}{ r_{+}^2 \Delta'(r_{+})},
\end{equation}
and $r_{+}$ is the location of the horizon defined by
\begin{equation}
(r_{+}^2 + a^2)(r_{+}^2 + b^2) (1 + r_{+}^2 l^2) = 2 M r_{+}^2.
\end{equation}
For real $a$, $b$ and $l$ there are two real roots to this equation;
when $a = b$ these coincide to give an extreme black hole when
\begin{eqnarray}
r_c^2 &=& \frac{1}{4 l^2} \left ( \sqrt{1 + 8 a^2 l^2} - 1 \right );
\\
2 M_c l^2 &=& \frac{1}{16} \left ( \sqrt{1 + 8 a^2 l^2} - 1 + 4 a^2 l^2
\right ) \left ( 3 \sqrt{1 + 8 a^2 l^2} + 5 + 4 a^2 l^2 \right). \nonumber
\end{eqnarray}
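One can confirm numerically that, for $a = b$, the quoted $r_c^2$ and $M_c$ give a double root of the horizon equation (the horizon function and its first derivative both vanish there); the parameter values below are arbitrary test numbers:

```python
# Numerical double-root check (illustrative) for the extreme a = b black hole.
import math

a, l = 0.6, 0.8                                   # arbitrary test values, a*l < 1
s = math.sqrt(1 + 8 * a**2 * l**2)
rc2 = (s - 1) / (4 * l**2)                        # r_c^2 from the text
Mc = (s - 1 + 4*a**2*l**2) * (3*s + 5 + 4*a**2*l**2) / (32 * l**2)

def f(x):    # horizon function for a = b, with x = r_+^2
    return (x + a**2)**2 * (1 + l**2 * x) - 2 * Mc * x

def df(x):   # its derivative with respect to x
    return 2*(x + a**2)*(1 + l**2*x) + l**2*(x + a**2)**2 - 2*Mc

print(abs(f(rc2)) < 1e-9, abs(df(rc2)) < 1e-9)    # True True: double root at r_c^2
```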
The entropy of the general two parameter black hole is given by
\begin{equation}
S = \frac{\pi^2}{2 r_{+} \Xi_a \Xi_{b}} (r_{+}^2 + a^2)(r_{+}^2 + b^2),
\end{equation}
whilst the mass and angular momenta are
\begin{equation}
{\cal M}' = \frac{3 \pi M}{4 \Xi_a \Xi_b}; \hspace{2.5mm}
J_{\phi}' = \frac{\pi M a }{2 \Xi_a^2}; \hspace{2.5mm}
J_{\psi}' = \frac{\pi M b }{2 \Xi_b^2}, \nonumber
\end{equation}
with the angular velocities on the horizon being
\begin{equation}
\Omega_{\phi}' = \Xi_a \frac{a}{r_{+}^2 + a^2}; \hspace{5mm}
\Omega_{\psi}' = \Xi_b \frac{b}{r_{+}^2 + b^2}.
\end{equation}
Since the black hole is singular only when either or both of $\Xi_a$
and $\Xi_b$ tend to zero, we should look in particular at the latter
case for which the two rotation parameters $a$ and $b$ are equal in
magnitude. Then we can write the metric in the transformed coordinates
as
\begin{eqnarray}
ds^2 &=& - (1 + y^2 l^2) dT^2 + y^2 \left (
d\Theta^2 + \sin^2\Theta
d\Phi^2 + \cos^2\Theta d\Psi^2 \right) \nonumber \\
&& + \frac{2M}{y^2 \Xi^2} ( dT - a \sin^2 \Theta
d\Phi - a \cos^2\Theta d\Psi)^2 + \frac{y^4
dy^2}{( y^4(1+ y^2l^2) -
\frac{2M}{\Xi^2}y^2 + \frac{2 M a^2}{\Xi^3})}, \label{a=b}
\end{eqnarray}
where $\Xi = 1 - a^2 l^2$. The position of the horizon of the extreme
solution in these coordinates is
\begin{equation}
y^2 = \frac{1}{4 \Xi} \left [ 4 a^2 l^2 - 1 + \sqrt{1 + 8 a^2 l^2}
\right ].
\end{equation}
In the critical limit, $al \rightarrow 1$, the size
of the black hole becomes infinite in this coordinate system.
One can check to see whether the integrability condition
(\ref{int_con}) is satisfied by the black hole metric
(\ref{a=b}). Preservation of supersymmetry requires that
\begin{equation}
\frac{M}{\Xi^2} \gamma_a \epsilon = 0,
\end{equation}
and hence only in the zero mass black hole is any supersymmetry
preserved. We have not checked the integrability condition in the
general two parameter rotating black hole but do not expect
supersymmetry to be preserved. In three dimensions the integrability
condition is trivially satisfied since the BTZ black hole is locally
anti-de Sitter and supersymmetry preservation relates to global
effects. In higher dimensions it does not seem possible to satisfy the
integrability conditions without including gauge fields.
\bigskip
The Einstein universe rotates at the speed of light in at least some
directions either when one or both of $\Xi_a$ and $\Xi_b$ vanish or
when $r_{+}^2 =a/l$ or when $r_{+}^2 = b/l$.
The action is singular when either or both of $\Xi_a$ and $\Xi_b$ are
zero and when the black hole is extreme. The action is positive for
$r_{+} \le 1/l$; there is a phase transition as the mass of the
black hole increases.
If $r_{+}^2 = a/l$ the action
will be positive and finite when $\Xi_a$ vanishes and positive and
infinite when $\Xi_b$ vanishes. For $l r_{+}^2 < {\rm Max} \left [ a,b
\right ]$ there will be directions in the Einstein universe which are
rotating faster than the speed of light. In the limiting case $a=b$ the action
diverges for all $r_{+}$ as $\Xi$ tends to zero.
In the high temperature limit, the action for the equal parameter
rotating black hole diverges as
\begin{equation}
I_5 = - \frac{\pi^5} {8 l^3 \Xi^2 \bar{\beta}^3},
\end{equation}
with $\bar{\beta} \approx \pi/(r_{+} l) \ll 1$. The other
thermodynamic quantities follow easily from this expression, and are
in agreement with those derived from the conformal field theory in
section two.
We should mention what we expect to happen in higher dimensions. A
generic rotating black hole in $d$ dimensions will be classified by
$\left [ (d-1)/2 \right ]$ rotation parameters $a_{i}$, where $[x]$
denotes the integer part of $x$. Thus we expect both the action and the
metric to be singular if any of the $\Xi_{i} = 1 - a_{i}^2 l^2$ vanish. Provided that the
black hole horizon is at $r_{+} > 1/l$ the action should diverge to
negative infinity in the critical limit, behaving as
\begin{equation}
I_{d} \sim - \frac{1}{\beta^{d-2} \prod_{i} \epsilon_{i}},
\end{equation}
where $\epsilon_{i} = 1 - \Omega_{i}$ and the product is taken over
all $i$ such that $a_{i}l \rightarrow 1$. The $\beta$ dependence
follows from conformal invariance, whilst one should be able to derive
the $\epsilon_i$ dependence by looking at the behaviour of the spherical
harmonics.
\section{INTRODUCTION}
Galaxy evolution is strongly influenced by the environment.
Galaxy morphology is related to the density of the
environment (Dressler 1980) and, in galaxy clusters,
elliptical galaxies are more common than spirals with respect to the field.
Looking at distant clusters, we detect evolutionary
effects such as the Butcher-Oemler effect (1984) or a
variation of the morphological content with redshift (Dressler et
al. 1997).\\
In this observational program of galaxy clusters at medium and high
redshift (P.I.: A. Franceschini; collaborators: C. Cesarsky, P.A. Duc, A. Moorwood and A. Biviano), we are interested in studying galaxy
evolution as a function of distance from the cluster center, in
observing active galaxies in clusters and, finally, in comparing the
behavior of galaxies in clusters and in the field. This is
particularly interesting after the results of IR deep surveys, which
claim the existence of a population of galaxies at $0.6 < z < 1$ with
large IR fluxes but no peculiar optical
signature (Elbaz et al. 1998, Aussel et al. 1998).
\section{OBSERVATIONS}
Ten rich galaxy clusters, with redshift ranging from 0.2 to 1,
have been observed with the ISO camera in the two bands LW2 and LW3
(centered respectively at 6.75 $\mu$m and 15 $\mu$m).
Three of them were observed very deeply (see Tables~\ref{tab:shallow} and~\ref{tab:deep}
).
Optical follow-ups of the observed clusters are scheduled in the coming
months. Only that of A1689 has already been carried out, in May 1998, with
EMMI on the NTT at La Silla (P.I.: P.A. Duc). A total of about 120
spectra have been obtained in a field of 3$\times$5 square
arcminutes centered on Abell 1689.
The observed objects were mostly the optical counterpart candidates
of the sources detected in the ISOCAM bands, and the data are
currently being reduced.
\begin{table}[!h]
\caption{\em Shallow observations}
\label{tab:shallow}
\begin{center}
\leavevmode
\footnotesize
\begin{tabular}[h]{crrccr}
\hline \\[-5pt]
\multicolumn{1}{c}{Cluster} & \multicolumn{1}{c}{z} & \multicolumn{1}{c}{Field
of}
& \multicolumn{1}{c}{Pix.} & \multicolumn{1}{c}{Time} \\
\multicolumn{1}{c}{Name} & \multicolumn{1}{c}{}
& \multicolumn{1}{c}{ View}
& \multicolumn{1}{c}{FoV}
& \multicolumn{1}{c}{(sec)}\\
\hline \\[-5pt]
A1689 &0.19 & 5.6$^\prime$ $\times$ 5.6$^\prime$ & 6$^{\prime\prime}$ & 4506 \\
GHO1600+412 &0.3 & 7.4$^\prime$ $\times$ 7.4$^\prime$ & 6$^{\prime\prime}$ & 4546\\
3C295 &0.46 & 4.0$^\prime$ $\times$ 4.0$^\prime$ & 3$^{\prime\prime}$ & 5413 \\
3C330 &0.55 & 7.4$^\prime$ $\times$ 7.4$^\prime$ & 6$^{\prime\prime}$ & 4536 \\
CL0016+1609 &0.55 & 5.6$^\prime$ $\times$ 5.6$^\prime$ & 6$^{\prime\prime}$ & 4476 \\
J1888.16CL &0.56 & 5.6$^\prime$ $\times$ 5.6$^\prime$ & 6$^{\prime\prime}$ & 4506 \\
CL1322+3029 &0.7 & 2.2$^\prime$ $\times$ 2.2$^\prime$ & 3$^{\prime\prime}$ & 5373 \\
GHO1322+3027 &0.75 & 4.0$^\prime$ $\times$ 4.0$^\prime$ & 6$^{\prime\prime}$ & 2732\\
GHO1603+4313 &0.895& 2.2$^\prime$ $\times$ 2.2$^\prime$ & 3$^{\prime\prime}$ & 5373\\
CL1603+4329 &0.92 & 4.0$^\prime$ $\times$ 4.0$^\prime$ & 6$^{\prime\prime}$ & 2732\\
CL1415+5244 &... & Failed & ... & ... \\
\hline \\
\end{tabular}
\end{center}
\end{table}
\begin{table}[!h]
\caption{\em Deep observations}
\label{tab:deep}
\begin{center}
\leavevmode
\footnotesize
\begin{tabular}[h]{crcccr}
\hline \\[-5pt]
\multicolumn{1}{c}{Cluster} & \multicolumn{1}{c}{z} & \multicolumn{1}{c}{Band}
& \multicolumn{1}{c}{Field of} & \multicolumn{1}{c}{Pix.} & \multicolumn{1}{c}{Time} \\
\multicolumn{1}{c}{Name} & \multicolumn{1}{c}{} &\multicolumn{1}{c}{}
& \multicolumn{1}{c}{View}
& \multicolumn{1}{c}{FoV}
& \multicolumn{1}{c}{(sec)} \\
\hline \\[-5pt]
3C330 & 0.55 & LW3 &15.2$^\prime \times$3.3$^\prime$ & 6$^{\prime\prime}$ & 20584\\
J1888.16CL & 0.56 & LW2 &13.7$^\prime \times$3.35$^\prime$& 6$^{\prime\prime}$ & 20967\\
J1888.16CL & 0.56 & LW3 &15.2$^\prime \times$3.3$^\prime$ & 6$^{\prime\prime}$ & 20624\\
GHO1603+4313 & 0.895& LW2 &13.7$^\prime \times$3.35$^\prime$& 6$^{\prime\prime}$ & 20967\\
GHO1603+4313 & 0.895& LW3 &15.2$^\prime \times$3.3$^\prime$ & 6$^{\prime\prime}$ & 20604\\
\hline \\
\end{tabular}
\end{center}
\end{table}
\section{IR EMISSION IN ISOCAM FILTERS}
The part of the galaxy spectrum seen by each ISOCAM filter
is a function of the redshift (K correction).
The galaxy spectrum is the sum of three different components: UIBs, warm
dust and forbidden lines of ionized gas.
In the case of nearby clusters, the LW3 band is centered on the
warm dust emission and the LW2 band is dominated by the contribution
of the UIBs.
\begin{figure*}[!ht]
\begin{center}
\leavevmode
\centerline{\hspace{5.5cm}\epsfig{file=faddad1.eps,width=14.0cm}\hspace{-5.5cm}
\epsfig{file=faddad2.eps,clip=,width=14.0cm}
}
\end{center}
\caption{\em A1689 in the LW2 (left) and LW3 (right) bands superposed on a
NTT image (PI: P.A. Duc)}
\label{fig:a1689-images}
\end{figure*}
When we consider distant clusters ($0.5 < z < 1.0$), the LW3 band is
more and more contaminated by the UIBs, while in the LW2 band the
contribution of the stellar continuum, especially from old stellar
populations, overtakes the UIB features (see Fig.~\ref{fig:lw2lw3}).
\begin{figure}[!h]
\begin{center}
\leavevmode
\centerline{\epsfig{file=faddad3.eps,
width=7.0cm}}
\end{center}
\caption{\em The figure shows the spectra of two known starburst galaxies
observed by ISO (courtesy of D. Tran and V. Charmandaris) and the LW2 and LW3 bands at different redshifts.}
\label{fig:lw2lw3}
\end{figure}
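The band shift amounts to sampling the rest-frame wavelength $\lambda_{\rm rest} = \lambda_{\rm obs}/(1+z)$; a minimal sketch using the band centres quoted above (6.75 and 15 $\mu$m):

```python
# Rest-frame wavelengths sampled by the ISOCAM LW2 and LW3 filters as a
# function of redshift (simple illustration of the K correction described above).
def rest_wavelength(lambda_obs_um, z):
    return lambda_obs_um / (1.0 + z)

for z in (0.0, 0.5, 1.0):
    print(f"z={z}: LW2 -> {rest_wavelength(6.75, z):.2f} um, "
          f"LW3 -> {rest_wavelength(15.0, z):.2f} um")
# At z = 1, LW3 samples ~7.5 um, close to the rest-frame LW2 band centre.
```

At $z = 1$ the LW3 filter thus samples roughly the same rest-frame emission as LW2 does at $z = 0$, which is the point made in the Discussion.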
\section{DISCUSSION}
\label{sec:commands}
We detect a spatial segregation in the distribution of ISOCAM sources.
\begin{figure}[!h]
\begin{center}
\leavevmode
\vspace{-1.5cm}
\epsfig{file=faddad4.eps,
width=7.0cm,angle=-90}
\end{center}
\caption{\em Source number density for A1689. Shaded and solid lines refer respectively to 5$\tau_W$ and 7$\tau_W$ thresholds in the wavelet detections of sources (Starck et al., 1998).}
\label{fig:a1689-dens}
\end{figure}
In the A1689 field (Fig.~\ref{fig:a1689-images})
we note an overdensity of LW2 sources in the cluster center,
while LW3 sources seem to avoid the central region.
There are more detections associated with the dust (LW3 band) in the
outskirts than in the central region, likely because galaxies
far from the cluster center are generally more active than central
galaxies (see Fig.~\ref{fig:a1689-dens}).
\begin{figure}[!t]
\begin{center}
\leavevmode
\epsfig{file=faddad5.eps,
width=7.0cm}
\end{center}
\caption{\em Color distribution for A1689.}
\label{fig:a1689-colors}
\end{figure}
If we compare the color distribution of the galaxies (Gudehus \&
Hegyi 1991) with the analogous distributions for galaxies detected in
the two bands (Fig.~\ref{fig:a1689-colors}), we note that
these distributions do not differ significantly, as in the case of
ISOCAM observations of the Hubble Deep Field (Aussel et al. 1998). This shows
that the activity unveiled by the mid-infrared images is hidden in the optical,
corroborating the observation of the Antennae galaxies by Mirabel et al.~(1998).
\begin{figure}[!h]
\begin{center}
\leavevmode
\vspace{-1.5cm}
\centerline{\epsfig{file=faddad6.eps,
width=7.0cm,angle=-90}}
\end{center}
\caption{\em Source number density for J1888.16CL. Shaded and solid lines refer respectively to 5$\tau_W$ and 7$\tau_W$ thresholds in the wavelet detections of sources (Starck et al., 1998).}
\label{fig:J1888-dens}
\end{figure}
The segregation effect is more clearly visible in the cluster
J1888.16CL at a redshift of 0.5 (Fig.~\ref{fig:J1888-dens}). This effect could be due to a more
intense activity of galaxies or to a favored detection of arclets at
15 $\mu$m, as in the case of A2390 (Altieri et al. 1998).
For the cluster GHO 1603+4313 at $z=1$ we find a uniform distribution for
the LW2 band and an overdensity of galaxies at the cluster center for the LW3 band (Fig.~\ref{fig:gho-dens}). This is consistent with the previous cases, since LW3 at $z=1$ corresponds to the LW2 band at rest ($z=0$), due to the K correction, but it may also be associated with stronger star formation activity at higher $z$.
\begin{figure}[!t]
\begin{center}
\leavevmode
\vspace{-0.7cm}
\centerline{\epsfig{file=faddad7.eps,
width=7.0cm,angle=-90}}
\end{center}
\caption{\em Source number density for GHO 1603+4313. Shaded and solid lines refer respectively to 5$\tau_W$ and 7$\tau_W$ thresholds in the wavelet detections of sources (Starck et al., 1998).}
\label{fig:gho-dens}
\end{figure}
\section*{ACKNOWLEDGMENTS}
We thank H. Aussel for his helpful advice.\\
The ISOCAM data presented in this paper were analyzed using "CIA",
a joint development by the ESA Astrophysics Division and the ISOCAM
Consortium led by the ISOCAM PI, C. Cesarsky, Direction des Sciences de la
Matiere, C.E.A., France.
\section{Introduction}
\hspace{\parindent}The discovery of carbon nanotubes (CNs)
by Iijima \cite{CN-Ijima} has sparked a tremendous amount
of activity and interest, from basic research to applied
technology\cite{CN-basis}. These carbon-based structures
have spectacular electronic properties, but the range of
their applications is much broader than electronics; in
particular, owing to their potentially remarkable mechanical properties,
they could become serious competitors to composite materials.
CNs consist of coaxially rolled graphite sheets determined
by only two integers $m$ and $n$; depending on their choice,
a metallic, semiconducting or insulating behavior is
exhibited. Given their extremely small diameter (1 to 20
nanometers), CNs turn out to be serious candidates for the
next generation of microelectronic elementary devices, often
referred to as the unprecedented ``Nanotechnology Era''
\cite{Rohrer}.
\hspace{\parindent}
The understanding of pure CNs is now very
mature\cite{CN-basis}, and research is now focusing on
the effects of topological disorder (such as pentagon-heptagon pair
defects \cite{Charlier,CN-QD}) and chemical disorder on the
``ideal properties'', in order to make CNs operational devices
for microelectronics. A transistor device has already been
designed from CNs and shown to operate at room temperature
\cite{CN-RTT}. Other mesoscopic effects, such as universal
conductance fluctuations (UCF) appearing at low temperature,
have also been revealed in CNs\cite{langer96}. From nanotubes,
one-dimensional (quantum wire) \cite{CN-QW} or zero-dimensional
(quantum dot \cite{CN-QD}) molecular systems
have been conceived.
\hspace{\parindent}To strengthen the usefulness of CNs,
the understanding of the influence of defects in
nanostructures deserves particular attention since, at such
a small scale, any disruption of local order may
dramatically affect the physical response of the nano-device. For
the implementation of efficient technological devices, one
needs to understand, control and possibly confine
the magnitude of these inherent quantum fluctuations
induced by disorder or magnetic field.
\hspace{\parindent}Usually, classical theory enables
an elegant description of the electronic properties of
crystalline materials through the use of quantum mechanics
and symmetry principles. However, real materials always
enclose impurities, dislocations, or more general forms of
disorder, which generally imply going beyond the usual paradigms
that have been used successfully for pure systems. To
investigate tunneling spectroscopy experiments or electronic
conductivity measurements, one may alternatively tackle the
problems of aperiodicity by using real-space methods
\cite{ON}.
\hspace{\parindent}Conductivity measurements have been performed on
bundles of single-walled carbon nanotubes by means of a
scanning tunneling microscope (STM).\cite{tans98} By moving
the tip along the length of the nanotubes, sharp deviations
in the $I-V$ characteristics could be observed and related
to electronic properties predicted
theoretically. Nevertheless, the scanning tunneling
experiment is a local measurement of the current from the tip to
the nanotube surface and gives essentially local information,
such as the local density of states (LDoS)
\cite{CN-STM}. Further measurements of the electronic conductivity
involve the study of CN-based junctions. Finally, nanotubes have
recently been shown to be nanometer-sized probes for imaging in chemistry and
biology, substantially improving the current resolution of scanning probes\cite{CN-AFM}.
\hspace{\parindent}Investigations of
electronic properties are generally confined to a description of
the so-called electronic spectrum of a given material. From the
spectrum, one can typically distinguish a semiconductor from a
metallic sample by identifying a gap at the Fermi energy.
Within this framework, we will show that interesting effects are driven
by the strength of the magnetic field, or by fluctuations in the
site energies. In particular, a metal-insulator
transition will be studied, by considering the density of states,
when site randomness and magnetic field act simultaneously.
\section{Magnetic field induced metal-insulator transition}
\hspace{\parindent}To evaluate the density of states (DoS) of the
carbon nanotubes, we use the recursion method \cite{Haydock},
which enables a study of structural imperfections and
disorder, as discussed by Charlier et al.\cite{Charlier} The
Green's function from which one estimates the DoS is
calculated as a continued fraction expansion, which must
be properly terminated. Besides, a finite imaginary part of
convergence of the expansion if the system presents gaps in
the electronic structure. The DoS on a given initial state
$|\psi_{0}\rangle$ is evaluated from
$\langle\psi_{0}|G(z=E+i\eta)|\psi_{0}\rangle$ by tridiagonalization
of the corresponding tight-binding Hamiltonian. A proper
terminator has to be computed with respect to the asymptotic
behavior of recursion coefficients $a_{n},b_{n}$ associated
to the recursion procedure. For a metallic CN, the series
$a_{n},b_{n}$ exhibit well-defined limits $a_{\infty}$ and
$b_{\infty}$ enabling a simple termination of the Green's
function. For the semiconducting (10,0) CN, the series of
$b_{n}$ coefficients encompass small fluctuations confined
around 4.25 and 4.75 (in $\gamma$ units, with $\gamma$ the
coupling constant between neighboring carbon atoms). This
unveils the presence of a gap in the structure which may be
completely resolved by appropriate
terminator\cite{Turchi}. Furthermore, when dealing with
magnetic field which may lead to Landau levels structure,
suitable termination can be implemented \cite{Roche-PRB}.
\hspace{\parindent}However, one notes that $\eta$ may
also have some physical significance. Indeed, real materials
usually contain inherent disorder that leads to a finite mean
free path, or scattering rate. The first
approximation to account for such elastic/inelastic
relaxation times is then to introduce a finite imaginary part
of the Green's function (Fermi's golden rule). Finally,
contrary to diagonalization methods for finite-length
systems, a discussion of scaling properties is beyond the
scope of the recursion method. Finite-size effects due to
boundary conditions would strongly affect physical
properties; for instance, close to a metal-insulator
transition, characteristic properties such as the localization
length are governed by scaling exponents. Here, large
systems are used so that the continued-fraction expansion
converges before finite-size effects occur, which is
equivalent to considering an infinite homogeneous
structure. In fact, local properties such as the LDoS are
weakly affected by boundary effects when the nanotube is
sufficiently long. In the following, we will keep a finite
imaginary part for the Green's function, keeping in mind the
previous discussion.
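The recursion just described can be sketched in a few lines of Python. This is an illustration only, not the code used in this work: the Hamiltonian below is a plain 1D chain rather than a nanotube, and the recursion depth and $\eta$ are placeholder choices. The continued fraction is closed with the square-root terminator built from the last computed coefficients, taken as $a_{\infty}$, $b_{\infty}$:

```python
import numpy as np

def ldos_recursion(H, psi0, energies, eta=0.02, depth=200):
    """LDoS(E) = -Im<psi0|G(E + i*eta)|psi0>/pi via Lanczos
    tridiagonalization and a continued-fraction expansion."""
    a, b = [], []
    q_prev = np.zeros_like(psi0)
    q = psi0 / np.linalg.norm(psi0)
    beta = 0.0
    for _ in range(depth):
        w = H @ q - beta * q_prev
        alpha = q @ w
        w = w - alpha * q
        beta = np.linalg.norm(w)
        a.append(alpha)
        b.append(beta)
        q_prev, q = q, w / beta
    z = energies + 1j * eta
    # square-root terminator from the asymptotic coefficients
    d = (z - a[-1])**2 - 4.0 * b[-1]**2
    s = np.sqrt(d + 0j)
    s = np.where(s.imag < 0.0, -s, s)          # retarded branch
    g = ((z - a[-1]) - s) / (2.0 * b[-1]**2)
    for an, bn in zip(a[::-1], b[::-1]):       # wind the fraction up
        g = 1.0 / (z - an - bn**2 * g)
    return -g.imag / np.pi

# usage: mid-band LDoS of a long chain; exact value is 1/(2*pi*t)
N, t = 1000, 1.0
H = np.zeros((N, N))
idx = np.arange(N - 1)
H[idx, idx + 1] = -t
H[idx + 1, idx] = -t
psi0 = np.zeros(N)
psi0[N // 2] = 1.0
rho0 = float(ldos_recursion(H, psi0, np.array([0.0]))[0])
```

With the terminator in place, the computed mid-band LDoS agrees with the exact infinite-chain value $1/2\pi t$ even at modest recursion depth.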
\hspace{\parindent}We consider a tight-binding description of the
graphite $\pi$ bands, with only the first-neighbor C-C
interaction $\gamma=V_{pp\pi}$, taken as $-2.75$ eV.
The magnetic field is considered to be perpendicular to the
nanotube axis, so that the vector potential is given by
$A=(0,\frac{LB}{2\pi}\sin
\frac{2\pi}{L}\cal{X})$\cite{Ajiki-1},
with $a_{cc}=1.42$~\AA\ and
$L=\mid{\bf C}_{h}\mid =\sqrt{3}
a_{cc}\sqrt{n^2+m^2+nm}$ the modulus of the chiral
vector ${\bf C}_{h} =n{\bf a}_{1}+m{\bf
a}_{2}$\cite{CN-basis}. The effects of the magnetic field are driven by the
phase factors $\exp (\frac{ie}{\hbar c}\gamma_{ij})$
introduced in the hopping integrals between two sites ${\bf
R_{i}}=\{ {\cal X}_{i},{\cal Y}_{i}\}$ and ${\bf
R_{j}}$. These phases are generally defined through the vector potential
by the Peierls substitution:
$$\gamma_{ij}=\frac{2\pi}{\varphi_{0}}\int_{R_i}^{R_j}{\bf
A}\cdot d{\bf r}$$
\noindent
where $\varphi_{0}=hc/e$ is the flux quantum. After simple algebra,
writing $\Delta {\cal X}= {\cal X}_{i}-{\cal
X}_{j}$ and $\Delta {\cal Y}= {\cal Y}_{i}-{\cal Y}_{j}$,
one finds that the proper phase factors in our
case are given by\cite{CN-basis}:
\begin{eqnarray}
\gamma_{ij}( \Delta {\cal X}\neq 0)&=&\bigl(\frac{L}{2\pi}\bigr)^{2}
B \frac{\Delta {\cal Y}}{\Delta {\cal X}}\nonumber\\
& & \times \biggl( -\cos \frac{2\pi}{L}({\cal X}+\Delta {\cal X})+
\cos(\frac{2\pi {\cal X}}{L})\biggr)\nonumber\\
\gamma_{ij}( \Delta {\cal X}= 0)&=&\bigl(\frac{L}{2\pi}\bigr)\Delta {\cal
Y}B\sin \frac{2\pi {\cal X}}{L}
\end{eqnarray}
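These closed forms can be checked against a direct quadrature of $\int_{R_i}^{R_j}{\bf A}\cdot d{\bf r}$ along the straight segment joining the two sites. A minimal sketch (our own illustration; units with $L=B=1$, and the function names are not from the text):

```python
import numpy as np

def gamma_closed(X, dX, dY, L=1.0, B=1.0):
    """Closed-form line integral of A.dr for the gauge
    A = (0, (L*B/2pi)*sin(2*pi*X/L)), as in the expressions above."""
    if dX == 0.0:
        return (L/(2.0*np.pi)) * dY * B * np.sin(2.0*np.pi*X/L)
    pref = (L/(2.0*np.pi))**2 * B * dY/dX
    return pref * (np.cos(2.0*np.pi*X/L) - np.cos(2.0*np.pi*(X + dX)/L))

def gamma_quad(X, dX, dY, L=1.0, B=1.0, n=100000):
    """Midpoint-rule quadrature of the same integral along the
    straight segment from (X, Y) to (X+dX, Y+dY)."""
    tmid = (np.arange(n) + 0.5) / n
    Ay = (L*B/(2.0*np.pi)) * np.sin(2.0*np.pi*(X + tmid*dX)/L)
    return float(np.sum(Ay * dY) / n)
```

Both routines agree to numerical precision for any segment, including the vertical case $\Delta{\cal X}=0$.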
\hspace{\parindent}In Fig.~1, we show the total density of
states (TDoS) at the Fermi level as a function of the effective
magnetic field defined by $\nu=L/2\pi \ell$, where $\ell
=\sqrt{\hbar c/eB}$ is the magnetic length. For the two
metallic nanotubes (9,0) and (12,0), the TDoS at the Fermi level
decreases as the magnetic strength increases. For higher
values of the magnetic field, our results are in agreement with
what has been found by exact
diagonalization\cite{CN-basis}. As $\nu \to 1$, the TDoS at the Fermi
energy approaches the same value ($\simeq 0.014$) as that of the
semiconducting (10,0) CN at zero magnetic field.
\hspace{\parindent}In Fig.~2, the
bold curve is the metallic (9,0) CN, whereas the
dashed-bullet line stands for the semiconducting (10,0) CN, both in
the absence of a magnetic field. The remaining curve
corresponds to the metallic CN in a magnetic field $\nu=0.8$. It
can be seen that for $\nu=0.8$ the TDoS in the vicinity of the Fermi
level presents a ``pseudo-gap'' (due to finite $\eta$) very similar to
that of the (10,0) semiconducting nanotube for the same
value of $\eta=0.02$. Furthermore, we estimate the width
$\Delta_{g}\simeq 1.1$ eV, which is in good agreement with typical
values found in experiments \cite{CN-STM}. This surprising result shows
that the magnetic field induces a metal-insulator transition: from
$\nu=0$ to $\nu=0.8$, a continuous metal-insulator transition can be
traced.
\hspace{\parindent}The semiconducting case in Fig.~1 is also
interesting, since a transition from semiconducting to
metallic behavior is seen to occur for $\nu\sim 0.7$. In Fig.~3, the
effect of $\eta$ on the semiconducting case is
illustrated. As $\eta$ is reduced, the TDoS at the Fermi level
(given in states/eV/graphite unit cell) scales as
$\hbox{TDoS}(E_{F},\eta_{1})/\hbox{TDoS}(E_{F},\eta_{2})=\eta_{1}/\eta_{2}$.
Accordingly, a continuous transition from insulator to metal
as $\eta$ tends to zero (for $\nu$ of about 0.7) turns
out to be an inherent feature of the semiconducting
CN. Besides, Fig.~3 also shows the oscillation pattern that
is driven by the magnetic field.
\hspace{\parindent}For a metallic CN, the normalized TDoS
at the Fermi level is given by $8/\sqrt{3}\pi a\gamma$ in
units of states/$\gamma$/(length along the
nanotube)\cite{CN-basis}, with $a=2.46$~\AA, so that the
expected TDoS$(E_{F})$ for the (9,0) nanotube is
0.08168 states/$\gamma$/(graphite unit cell). From our
results, we find TDoS$(E_{F},\eta=0.02)=0.08238$ and
TDoS$(E_{F},\eta=0.01)=0.0812$, in good agreement with
the theoretical value. We have also checked that the
integrated density of states (IDoS) is properly normalized.
\hspace{\parindent}For the (9,0) CN, Ajiki
and Ando \cite{Ajiki-1} predicted that, for a magnetic field
parallel to the nanotube axis and within the $k \cdot p$
approximation, the DoS should exhibit
$\varphi_{0}$-periodic oscillations. In our
case, from $\nu=0$ to $\nu=0.8$, the finite density
of states in the vicinity of the Fermi level (Fig.~4)
is deepened and finally reaches the same value
as for the semiconducting CN at the same value of $\eta$. Afterwards,
the TDoS undergoes non-periodic oscillations as a function of $\nu$, as
can be seen when $\nu$ increases from 1 to 1.4.
\hspace{\parindent}To conclude this first part, our results indicate
that the magnetic field leads to an oscillatory behavior of the TDoS at
the Fermi level between metallic and semiconducting electronic
states. This effect has been illustrated symmetrically for
semiconducting and metallic nanotubes over a broad range of magnetic
field strengths.
\section{Combination of randomness and magnetic field effects}
\hspace{\parindent}As mentioned previously, it is essential
to analyze the stability, against disorder, of the physical patterns
that emerge in perfect nanotubes. So far, much attention
has been paid to the so-called heptagon-pentagon pair defects, since
their presence is to be expected in any junction device \cite{Charlier}. Hereafter,
we will rather focus on the effect of Anderson-type disorder on
spectral properties, investigating to what extent randomness is able
to modify the previously unveiled pattern. We concentrate our study on the
metallic case, restricting our discussion to the (9,0) CN, given that
similar patterns were observed in other metallic ones.
\hspace{\parindent}The effect of randomness is now considered on the
site energies of the tight-binding Hamiltonian through the term
$\sum_{n_{x},n_{y}}\varepsilon_{n}|n_{x},n_{y}\rangle
\langle n_{x},n_{y}|$. The site energies are
randomly chosen in the interval $[-W/2,W/2]$ with uniform probability
distribution; accordingly, the strength of the disorder is measured by
$W$. To evaluate the TDoS, we average over typically 100 different
configurations.
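The disorder-averaging step can be sketched as follows. This is purely illustrative: a short 1D chain is diagonalized directly instead of treating the (9,0) nanotube by recursion, and the system size and broadening $\eta$ are placeholder values; only the uniform distribution on $[-W/2,W/2]$ and the average over 100 configurations follow the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def disordered_chain(n_sites, W, t=1.0):
    """Tight-binding chain with on-site energies drawn
    uniformly from [-W/2, W/2] (Anderson-type disorder)."""
    eps = rng.uniform(-W/2.0, W/2.0, size=n_sites)
    off = -t * np.ones(n_sites - 1)
    return np.diag(eps) + np.diag(off, 1) + np.diag(off, -1)

def averaged_dos(energies, n_sites=200, W=1.0, eta=0.05, n_conf=100):
    """TDoS averaged over n_conf disorder configurations;
    eigenvalues are broadened by a Lorentzian of width eta."""
    dos = np.zeros_like(energies)
    for _ in range(n_conf):
        ev = np.linalg.eigvalsh(disordered_chain(n_sites, W))
        dos += ((eta/np.pi) /
                ((energies[:, None] - ev[None, :])**2 + eta**2)).sum(axis=1)
    return dos / n_conf

E = np.linspace(-4.0, 4.0, 801)
rho = averaged_dos(E)
```

Integrating the averaged DoS over the energy window recovers (up to small Lorentzian-tail losses) the total number of states, i.e., the normalization check mentioned earlier.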
\hspace{\parindent}The TDoS as a function of the Fermi energy
for $\nu=0.8$ and different values of randomness has been studied in
the vicinity of the Fermi energy ($E_{F}=0$ eV). Values of the
disorder strength $W=0$, 0.5, 1, 2, and 3 (in $\gamma$ units) have been
considered. The electronic structure at the Fermi level is
affected in several ways. First, it is interesting to point
out that disorder up to $W=1.0$ does not break the localization
properties of the nanotubes. Unlike usual 1D systems, where the slightest
disorder triggers the Anderson localization mechanism, CNs may be considered
marginal 1D quantum nanostructures in which quantum confinement takes
place but phenomena typical of 1D structures undergo specific
alterations, the effect of disorder being one of them.
One has to reach higher values of the
disorder strength ($W\sim 1.5$) to break such localization.
\hspace{\parindent}If we
consider the DoS over the entire
bandwidth (Fig.~5), one clearly sees that from $W=1.5$ onward all the
one-dimensional van Hove singularities have
disappeared. This is a signature that quantum confinement
has been disrupted by disorder. Besides, for $W=3$ the bandwidth
is enlarged by only about a quarter of the total bandwidth, which is a
rather small increase. Probably the localization
length is then smaller than the circumference of the CN.
\hspace{\parindent}We then consider the evolution of the DoS as a function
of $\nu$ in Fig.~6 for $W=0,0.5,1.5$. The case $W=0$
is given and compared with the low-disorder limit $W=0.5$. For each value
of $\nu$, we plot a different LDoS on a given site. Low
disorder is shown not to affect significantly the general
metal-insulator pattern discussed earlier, and this also holds for
stronger disorder. However, for a disorder
width as large as 1.5 (as shown in Fig.~6), the LDoS in the
low-magnetic-field regime is strongly affected by the appearance of
strong fluctuations between different LDoS, whose period
further increases with the magnetic strength. As $\nu$
approaches 1, one recovers the basic pattern of the
magnetic-field-induced metal-insulator transition. Averaging over
some 10 LDoS reduces the amplitude of such fluctuations, but
the TDoS at the Fermi energy in the low-disorder limit is increased
(bold curve in Fig.~6).
\hspace{\parindent}We believe that this effect of fluctuations may have
significant consequences for the electronic properties of CNs,
for instance affecting the ideal properties of nanoscale
metal-semiconductor contact devices \cite{CN-QW} made from
CNs. It should also be considered carefully in relation to
universal conductance fluctuations (UCF). Indeed, it is generally
assumed that UCF reflect the microscopic random potential in which
electrons propagate. The pattern of the conductance fluctuations
as a function of Fermi energy or magnetic field is quite
random but reproducible, and varies from sample to
sample. The only common feature is that the fluctuations are
of order $e^{2}/h$, independently of sample quality, size,
and so forth.
\hspace{\parindent}To some extent, the variation of the LDoS from one
site to another is also related to the local potential around a given
carbon atom, and if UCF may generally be seen as a consequence of
fluctuations in the potential distribution, universal fingerprints may
also emerge in local spectral properties. Besides, as the LDoS can be
directly related to the tunneling current from tip to surface, such
fluctuations may be indirectly probed in STM experiments. Calculation
of the Kubo conductivity by means of the real-space recursion method
\cite{ON} may also yield valuable information about transport
properties \cite{SR-RS}.
\section{Conclusion}
\hspace{\parindent}We have shown several patterns occurring in the
electronic spectra of metallic and semiconducting carbon nanotubes,
induced by a magnetic field and by disorder. The magnetic field was
shown to lead to a continuous metal-insulator transition in both kinds
of CN, whereas disorder was shown not to modify qualitatively the
aforementioned pattern. Strong fluctuations of the LDoS as a function
of site environment and magnetic field were found and may be of
importance when designing junction devices.
\acknowledgments
SR is indebted to the European Commission and the
Japanese Society for Promotion of Science (JSPS) for joint
financial support (Contract ERIC17CT960010), and to Prof. T.
Fujiwara from Department of Applied Physics of Tokyo
University for his kind hospitality. Part of the work by RS
is supported by a Grant-in-Aid for Scientific Research
(No.~10137216) from the Ministry of Education and Science of
Japan.
\chapter{Fractional Quantized Hall Effect \label{chap:fqhe}}
The nonrelativistic $|\phi|^4$-theory describing an interacting Bose gas
is also of importance for the description of the fractional quantized
Hall effect (FQHE). As a function of the applied magnetic field, this
two-dimensional system undergoes a zero-temperature transition between a
so-called quantum Hall liquid, where the Hall conductance is quantized
in odd fractions of $e^2/2 \pi$, or, reinstalling Planck's constant,
$e^2/h$, and an insulating phase. Here, the nonrelativistic
$|\phi|^4$-theory describes---after coupling to a Chern-Simons
term---the original electrons bound to an odd number of flux quanta.
The Hall liquid is in this picture characterized by a condensate of
composite particles.
\section{Chern-Simons-Ginzburg-Landau Theory}
\label{sec:CSGL}
The fractional quantized Hall effect (FQHE) is the hallmark of a new,
intrinsically two-dimensional condensed-matter state---the quantum Hall
liquid. Many aspects of this state are well understood in the framework
of the quantum-mechanical picture developed by Laughlin
\cite{Laughlin}. Considerable effort has nevertheless been invested in
formulating an effective field theory which captures the essential
low-energy, small-momentum features of the liquid. A similar approach
in the context of superconductors has proven most successful.
Initially, only the phenomenological model proposed by Ginzburg and
Landau \cite{GL} in 1950 was available. Most of the fundamental
properties of the superconducting state, such as superconductivity (the
property that gave this condensed-matter state its name), the Meissner
effect, magnetic flux quantization, the Abrikosov flux lattice, and the
Josephson effect, can be explained by the model. The microscopic theory
was given almost a decade later by Bardeen, Cooper, and Schrieffer
\cite{BCS}. Shortly thereafter, Gorkov \cite{Gorkov} made the
connection between the two approaches by deriving the Ginzburg-Landau
model from the microscopic BCS theory, thus giving the phenomenological
model the status of an effective field theory.
A first step towards an effective field theory of the quantum Hall liquid
was taken by Girvin and MacDonald \cite{GMac} and has been developed further
by Zhang, Hansson and Kivelson \cite{ZHK}, who also gave an explicit
construction starting from a microscopic Hamiltonian. Their formulation
(for a review see Ref.\ \cite{Zhang}) incorporates time dependence which is
important for the study of quantum phase transitions.
An essential ingredient for obtaining an effective theory of the FQHE
was the identification by Girvin and MacDonald \cite{GMac} of a bosonic
operator $\phi$ exhibiting (algebraic) off-diagonal long-range order of a
type known to exist in two-dimensional bosonic superfluids. They argued
that this field should be viewed as an order parameter in terms of which
the effective field theory should be formulated. To account for the
incompressibility of the quantum Hall liquid they suggested to minimally
couple $\phi$ to a so-called statistical gauge field $(a_0, {\bf a})$
governed solely by a Chern-Simons term
\begin{equation}
\label{CSGL:CS}
{\cal L}_{\rm CS} = \tfrac{1}{2} e^2 \theta \partial_0 {\bf a} \times {\bf
a} - e^2 \theta a_0 \nabla \times {\bf a},
\end{equation}
with $\nabla \times {\bf a}$ the statistical magnetic field and $\theta$
a constant. As we will see below, the gapless Bogoliubov spectrum of
the neutral system changes as a result of this coupling into one with an
energy gap \cite{ZHK}, thus rendering the charged system incompressible.
Because of the absence of a kinetic term (the usual Maxwell term), the
statistical gauge field does not represent a physical degree of freedom. In
a relativistic setting, a Maxwell term is usually generated by quantum
corrections so that the statistical gauge field becomes dynamical at the
quantum level. The quantum theory then differs qualitatively from the
classical theory. On the other hand, as we shall see below, this need not
be the case in a nonrelativistic setting. That is to say, the {\it Ansatz}
of the absence of a Maxwell term is here not necessarily obstructed by
quantum corrections.
The effective theory of the quantum Hall liquid is given by the
so-called Chern-Simons-Ginzburg-Landau (CSGL) Lagrangian \cite{ZHK}
\begin{equation}
\label{CSGL:L}
{\cal L} = i \phi^* D_0 \phi -
\frac{1}{2m} |{\bf D} \phi|^2 + \mu_0 |\phi|^2 - \lambda_0 |\phi|^4 +
{\cal L}_{\rm CS}.
\end{equation}
The covariant derivatives $D_0 = \partial_0 + i e A_0 + i e a_0$ and
${\bf D} = \nabla - i e {\bf A} - i e {\bf a}$ give a minimal coupling
to the applied magnetic and electric field described by the gauge field
$(A_0,{\bf A})$ and also to the statistical gauge field. For
definiteness we will assume that our two-dimensional sample is
perpendicular to the applied magnetic field, defining the $z$-direction,
and we choose the electric field to point in the $x$-direction. The
charged field $\phi$ represents the Girvin-MacDonald order parameter
describing the original electrons bound to an odd number $2l+1$ of flux
quanta. To see that it indeed does, let us consider the field equation
for $a_0$:
\begin{equation} \label{CSGL:a0}
|\phi|^2 = - e \theta \nabla \times {\bf a}.
\end{equation}
The simplest solution of the CSGL Lagrangian is the uniform mean-field
solution
\begin{equation}
|\phi|^2 = \bar{n}, \;\;\;\; {\bf a} = - {\bf A}, \;\;\;\; a_0 = - A_0 =
0,
\end{equation}
where $\bar{n}$ indicates the constant fermion number density. The
statistical gauge field is seen to precisely cancel the applied field.
The constraint equation (\ref{CSGL:a0}) then becomes
\begin{equation} \label{CSGL:n}
\bar{n} = e \theta H,
\end{equation}
with $H$ the applied magnetic field. Now, if we choose $\theta^{-1} = 2
\pi (2l+1)$, it follows on integrating this equation that, as required,
with every electron there is associated $2l+1$ flux quanta:
\begin{equation}
N = \frac{1}{2l+1} N_\otimes,
\end{equation}
where $N_\otimes = \Phi/\Phi_0$, with $\Phi = \int_{\bf x} H$ the
magnetic flux, indicates the number of flux quanta. Equation
(\ref{CSGL:n}) implies an odd-denominator filling factor $\nu_H$ which
is defined by
\begin{equation}
\nu_H = \frac{\bar{n}}{H/\Phi_0}= \frac{1}{2l+1}.
\end{equation}
The coupling constant $\lambda_0 \, (>0)$ in (\ref{CSGL:L}) is the
strength of the repulsive contact interaction between the composite
particles, and $\mu_0$ is a chemical potential introduced to account for
a finite number density of composite particles.
It is well known from anyon physics that the inclusion of the
Chern-Simons term changes the statistics of the field $\phi$ to which
the statistical gauge field is coupled \cite{Wilczek}. If one composite
particle circles another, it picks up an additional Aharonov-Bohm
factor, representing the change in statistics. The binding of an odd
number of flux quanta changes the fermionic character of the electrons
into a bosonic one for the composite particles, allowing them to Bose
condense. The algebraic off-diagonal long-range order of a quantum Hall
liquid can in this picture be understood as resulting from this
condensation. Conversely, a flux quantum carries $1/(2l+1)$th of
an electron's charge \cite{Laughlin}, and also $1/(2l+1)$th of an
electron's statistics \cite{ASW}.
The defining phenomenological properties of a quantum Hall liquid are
easily shown to be described by the CSGL theory \cite{ZHK,Zhang}. From
the lowest-order expression for the induced electromagnetic current one
finds
\begin{equation}
\label{CSGL:inducedji}
e j_i = \frac{\delta {\cal L}}{\delta A_i} = - \frac{\delta {\cal
L}_\phi}{\delta a_i} = \frac{\delta {\cal L}_{\rm CS}}{\delta a_i} = - e^2
\theta \epsilon_{ij} (\partial_0 a_j - \partial_j a_0) = e^2 \theta
\epsilon_{ij} E_j,
\end{equation}
with ${\bf E}$ the applied electric field and where we have written the
Lagrangian (\ref{CSGL:L}) as a sum ${\cal L} = {\cal L}_\phi + {\cal L}_{\rm
CS}$. It follows that the Hall conductance $\sigma_{xy}$ is quantized in
odd fractions of $e^2/2 \pi$, or, reinstalling Planck's constant, $e^2/h$.
This result can also be understood in an intuitive way as follows. Since
the composite particles carry a charge $e$, the applied electric field gives
rise to an electric current
\begin{equation}
I = e \frac{\mbox{d} N}{\mbox{d} t}
\end{equation}
in the direction of ${\bf E}$, i.e., the $x$-direction. This is not the
end of the story because the composite objects carry, in addition to
their electric charge, also $2l+1$ flux quanta. When the Goldstone field
$\varphi$ encircles $2l+1$ flux quanta, it picks up a factor $2 \pi$ for
each of them:
\begin{equation}
\oint_\Gamma \nabla \varphi \cdot \mbox{d}{\bf x} = 2 \pi (2l+1).
\end{equation}
Now, consider two points across the sample from each other. Let the
phase of these points initially be equal. As a composite particle moves
downstream, and crosses the line connecting the two points, the relative
phase $\Delta \varphi$ between them changes by $2 \pi (2l+1)$.
This phase slippage \cite{PWA} leads to a voltage drop across the
sample given by
\begin{equation}
V_{\rm H} = \frac{1}{e} \partial_0 \Delta \varphi = (2l+1) \Phi_0
\frac{\mbox{d} N}{\mbox{d} t},
\end{equation}
where the first equation can be understood by recalling that due to minimal
coupling $\partial_0 \varphi \rightarrow \partial_0 \varphi + e A_0$. For
the Hall resistance we thus obtain the expected value
\begin{equation}
\rho_{xy} = \frac{V_{\rm H}}{I} = (2l+1) \frac{2 \pi}{e^2}.
\end{equation}
If the CSGL theory is to describe an incompressible liquid, the spectrum
of the single-particle excitations must have a gap. Without the
coupling to the statistical gauge field, the spectrum is given by the
gapless Bogoliubov spectrum (\ref{eff:bogo}). To obtain the
single-particle spectrum of the coupled theory, we integrate out the
statistical gauge field. The integration over $a_0$ was shown to yield
the constraint (\ref{CSGL:a0}) which in the Coulomb gauge $\nabla \cdot
{\bf a} = 0$ is solved by
\begin{equation}
\label{CSGL:solution}
a_i = \frac{1}{e \theta} \epsilon_{ij} \frac{\partial_j}{\nabla^2} |\phi|^2.
\end{equation}
The integration over the remaining components of the statistical gauge field
is now simply performed by substituting (\ref{CSGL:solution}) back into the
Lagrangian. The only nonzero contribution arises from the term $- e^2
|\phi|^2 {\bf a}^2/2m$. The spectrum of the charged system acquires as a
result an energy gap $\omega_{\rm c}$
\begin{equation}
E({\bf k}) = \sqrt{\omega_{\rm c}^2 + \epsilon^2({\bf k}) + 2 \mu_0
\epsilon( {\bf k}) },
\end{equation}
with $\omega_{\rm c} = \mu_0 /(2\theta m\lambda_0)$. To lowest order, the gap
equals the cyclotron frequency of a free charge $e$ in a magnetic field $H$
\begin{equation}
\omega_{\rm c} = \frac{\bar{n}}{\theta m} = \frac{e H}{m}.
\end{equation}
The presence of this energy gap results in dissipationless flow with
$\sigma_{xx} =0$.
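The opening of the gap can be visualized directly. In the sketch below (our own illustration, arbitrary units, taking $\epsilon({\bf k}) = {\bf k}^2/2m$ and placeholder values for $m$, $\mu_0$, $\omega_{\rm c}$), the neutral Bogoliubov spectrum vanishes at ${\bf k}=0$ while the coupled spectrum starts at $\omega_{\rm c}$:

```python
import numpy as np

def bogoliubov(k, m=1.0, mu0=1.0):
    """Gapless Bogoliubov spectrum of the neutral system."""
    eps = k**2 / (2.0*m)
    return np.sqrt(eps**2 + 2.0*mu0*eps)

def csgl(k, m=1.0, mu0=1.0, omega_c=1.0):
    """Spectrum after integrating out the statistical gauge field:
    E(k) = sqrt(omega_c**2 + eps(k)**2 + 2*mu0*eps(k))."""
    eps = k**2 / (2.0*m)
    return np.sqrt(omega_c**2 + eps**2 + 2.0*mu0*eps)
```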
These facts show that the CSGL theory captures the essentials of a
quantum Hall liquid. Given this success, it is tempting to investigate
whether the theory can also be employed to describe the field-induced
Hall-liquid-to-insulator transitions. This will be done in
Sec. \ref{sec:QHL}. It should however be borne in mind that both the
$1/|{\bf x}|$-Coulomb potential as well as impurities should be
incorporated into the theory in order to obtain a realistic description
of the FQHE. The repulsive Coulomb potential is believed to play a
decisive role in the formation of the composite particles, while the
impurities are responsible for the width of the Hall plateaus. As the
magnetic field moves away from the magic filling factor, magnetic
vortices will materialize in the system to make up the difference
between the applied field and the magic field value. In the presence of
impurities, these defects get pinned and do not contribute to the
resistivities, so that both $\sigma_{xx}$ and $\sigma_{xy}$ are
unchanged. Only if the difference becomes too large does the system
revert to another quantum Hall state with a different filling factor.
\chapter{Functional Integrals \label{cap:funct}}
In these Lectures we shall adopt, unless stated otherwise, the
functional-integral approach to quantum field theory. To illustrate the
use and power of functional integrals, let us consider one of the
simplest models of {\it classical} statistical mechanics: the Ising
model. It is remarkable that functional integrals can not only be used
to describe quantum systems, governed by quantum fluctuations, but also
classical systems, governed by thermal fluctuations.
\section{Ising Model}
The Ising model provides an idealized description of a uniaxial
ferromagnet. To be specific, let us assume that the spins of some
lattice system can point only along one specific crystallographic axis.
The magnetic properties of this system can then be modeled by a lattice
with a spin variable $s({\bf x})$ attached to every site ${\bf x}$
taking the values $s({\bf x}) = \pm 1$. For definiteness we will assume
a $d$-dimensional cubic lattice. The Hamiltonian is given by
\begin{equation} \label{HIsing}
H = - \frac{1}{2} \sum_{{\bf x}, {\bf y}} J({\bf x}, {\bf y}) \, s({\bf
x}) \, s({\bf y}).
\end{equation}
Here, ${\bf x} = a \, x_i \, {\bf e}_i$, with $a$ the lattice
constant, $x_i$ integers labeling the sites, and ${\bf e}_i$ ($i = 1,
\cdots ,d$) unit vectors spanning the lattice. The
sums over ${\bf x}$ and ${\bf y}$ extend over the entire lattice, and
$J({\bf x}, {\bf y})$ is a symmetric matrix representing the
interactions between the spins. If the matrix element $J({\bf x}, {\bf
y})$ is positive, the energy is minimized when the two spins at site
${\bf x}$ and ${\bf y}$ are parallel---they are said to have a
ferromagnetic coupling. If, on the other hand, the matrix element is
negative, anti-parallel spins are favored---the spins are said to have
an anti-ferromagnetic coupling.
The classical partition function $Z$ of the Ising model reads
\begin{equation}
Z = \sum_{\{s({\bf x})\}} {\rm e}^{-\beta H},
\end{equation}
with $\beta = 1/T$ the inverse temperature. The sum is over all spin
configurations $\{s({\bf x})\}$, of which there are $2^N$, with $N$
denoting the number of lattice sites. To evaluate the partition
function we linearize the exponent by introducing an auxiliary field
$\phi({\bf x})$ at each site via a so-called Hubbard-Stratonovich
transformation. Such a transformation generalizes the Gaussian integral
\begin{equation}
\exp \left(\tfrac{1}{2} \beta J s^2 \right) = \sqrt{\frac{\beta}{2 \pi
J}} \int_\phi
\exp \left( - \tfrac{1}{2} \beta J^{-1} \phi^2 + \beta \phi s \right),
\end{equation}
where the integration variable $\phi$ runs from $-\infty$ to
$\infty$. The generalization reads
\begin{eqnarray}
\lefteqn{
\exp \left[\tfrac{1}{2} \beta \sum_{{\bf x}, {\bf y}} J({\bf x}, {\bf y})
\, s({\bf x}) \, s({\bf y}) \right] = } \\ \nonumber && \prod_{\bf x} \int
\mbox{d} \phi({\bf x}) \exp \left[ -\tfrac{1}{2} \beta \sum_{{\bf x}, {\bf
y}} J^{-1}({\bf x}, {\bf y}) \, \phi({\bf x}) \, \phi({\bf y}) + \beta
\sum_{{\bf x} }\phi({\bf x}) s({\bf x}) \right].
\end{eqnarray}
Here, $J^{-1}({\bf x}, {\bf y})$ is the inverse of the matrix $J({\bf
x}, {\bf y})$ and we ignored---as will be done throughout these
notes---an irrelevant normalization factor in front of the product at
the right-hand side. The equation should not be taken too literally.
It is an identity only if $J({\bf x}, {\bf y})$ is a symmetric
positive definite matrix. This is not true for the Ising model since
the diagonal matrix elements $J({\bf x},{\bf x})$ are all zero,
implying that the sum of the eigenvalues is zero. We will
nevertheless use this representation and regard it as a formal one.
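For a single spin, the Gaussian identity can be verified numerically (with the plus sign in the exponent on the left-hand side, consistent with the multivariate version). A minimal sketch with arbitrary $\beta$ and positive $J$; all names are ours:

```python
import numpy as np

def lhs(beta, J, s):
    """exp(beta*J*s**2/2)"""
    return np.exp(0.5 * beta * J * s**2)

def rhs(beta, J, s, phi_max=50.0, n=2_000_001):
    """sqrt(beta/(2*pi*J)) * Int dphi exp(-beta*phi**2/(2J) + beta*phi*s),
    approximated by a Riemann sum over a wide window."""
    phi = np.linspace(-phi_max, phi_max, n)
    dphi = phi[1] - phi[0]
    integrand = np.exp(-0.5*beta*phi**2/J + beta*phi*s)
    return float(np.sqrt(beta/(2.0*np.pi*J)) * integrand.sum() * dphi)
```

Since the integrand is a shifted Gaussian, the Riemann sum converges to the closed form to high accuracy once the window covers the peak.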
The partition function now reads
\begin{equation} \label{ZIsing}
Z = \sum_{\{s({\bf x})\}} \prod_{\bf x} \int
\mbox{d} \phi({\bf x}) \exp \left[ -\tfrac{1}{2} \beta \sum_{{\bf x}, {\bf
y}} J^{-1}({\bf x}, {\bf y}) \, \phi({\bf x}) \,\phi({\bf y}) + \beta
\sum_{\bf x} \phi({\bf x}) \, s({\bf x}) \right].
\end{equation}
The spins are decoupled in this representation, so that the sum over
the spin configurations is easily carried out with the result
\begin{equation} \label{Zphi}
Z = \prod_{\bf x} \int
\mbox{d} \phi({\bf x}) \exp \left( -\tfrac{1}{2} \beta \sum_{{\bf x}, {\bf
y}} J^{-1}({\bf x}, {\bf y}) \, \phi({\bf x}) \, \phi({\bf y}) +
\sum_{\bf x} \ln\{\cosh\, [\beta \phi({\bf x})]\}\right),
\end{equation}
ignoring again an irrelevant constant.
The auxiliary field $\phi({\bf x})$ is not devoid of physical
relevance. To see this let us first consider its field equation:
\begin{equation} \label{fieldeq}
\phi({\bf x}) = \sum_{\bf y} J({\bf x}, {\bf y}) \, s({\bf y}) ,
\end{equation}
which follows from (\ref{ZIsing}). This shows that the auxiliary
field $\phi({\bf x})$ represents the effect of the other spins at
site ${\bf x}$. To make this more intuitive let us study the expectation
value of the field. For simplicity, we take only
nearest-neighbor interactions into account by setting
\begin{equation} \label{nene}
J({\bf x}, {\bf y}) = \left\{ \begin{array}{ll} J & \mbox{if sites
${\bf x}$ and ${\bf y}$ are nearest neighbors} \\ 0 &
\mbox{otherwise,} \end{array} \right.
\end{equation}
with $J$ positive, so that we have a ferromagnetic coupling between
the spins. The model is now translational invariant and the
expectation value $\langle s({\bf x}) \rangle$ is independent of ${\bf x}$:
\begin{equation}
\langle s({\bf x}) \rangle = M.
\end{equation}
We will refer to $M$ as the magnetization. Upon taking the expectation
value of the field equation (\ref{fieldeq}),
\begin{equation} \label{expvalue}
\langle \phi({\bf x}) \rangle = 2d J M,
\end{equation}
where $2d$ is the number of nearest neighbors, we see that the
expectation value of the auxiliary field represents the magnetization.
A useful approximation often studied is the so-called mean-field
approximation. It corresponds to approximating the integral over
$\phi({\bf x})$ in (\ref{Zphi}) by the saddle point---the value of the
integrand for which the exponent is stationary. This is the case for
$\phi({\bf x})$ satisfying the field equation
\begin{equation} \label{phieq}
- \sum_{\bf y} J^{-1}({\bf x}, {\bf y}) \, \phi({\bf y}) + \tanh \,
[\beta \phi({\bf x})] = 0.
\end{equation}
We will denote the solution by $\phi_{\rm mf}$. In this approximation,
the auxiliary field is no longer a fluctuating field taking all possible
real values, but a classical one having the value determined by the
field equation (\ref{phieq}). Being a nonfluctuating field, the
expectation value $\langle \phi_{\rm mf}({\bf x})
\rangle = \phi_{\rm mf}({\bf x})$, and (\ref{phieq}) yields a
self-consistent equation for the magnetization
\begin{equation} \label{self-con}
M = \tanh (2d \beta J M),
\end{equation}
where we assumed a uniform field solution and invoked Eq.\
(\ref{expvalue}). It is easily seen graphically that the equation has a
nontrivial solution when $2d \beta J > 1$. If, on the other hand, $2d
\beta J < 1$, it has only the trivial solution $M = 0$. It follows that
\begin{equation}
\beta^{-1}_0 = 2d J
\end{equation}
is the critical temperature separating the ordered low-temperature state
with a nonzero magnetization from the high-temperature disordered state
where the magnetization is zero.
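The self-consistency equation (\ref{self-con}) is easily solved numerically. The following minimal Python sketch (the function name and parameter values are illustrative, not part of the text) iterates $M \mapsto \tanh(K M)$ with $K = 2 d \beta J$ and shows that a nontrivial magnetization appears only for $K > 1$:

```python
import math

def magnetization(K, tol=1e-12, max_iter=10_000):
    """Solve the mean-field self-consistency equation M = tanh(K*M),
    with K = 2*d*beta*J, by fixed-point iteration starting from M = 1."""
    M = 1.0
    for _ in range(max_iter):
        M_new = math.tanh(K * M)
        if abs(M_new - M) < tol:
            break
        M = M_new
    return M_new

# Below the critical point (K < 1) the iteration collapses to M = 0;
# above it, a nontrivial magnetization survives.
print(magnetization(0.5))   # disordered phase: M -> 0
print(magnetization(1.5))   # ordered phase: M > 0
```

Starting from $M=1$ rather than $M=0$ matters: $M=0$ is always a fixed point, and the iteration only finds the nontrivial solution when one exists and is attracting.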
Let us continue by expanding the Hamiltonian in powers of $\phi$. To this
end we note that the term $\ln[\cosh (\beta \phi)]$ in (\ref{Zphi}) has the
Taylor expansion
\begin{equation}
\ln[\cosh (\beta \phi)] = \tfrac{1}{2} \beta^2 \phi^2 -
\tfrac{1}{12} \beta^4 \phi^4 + \cdots .
\end{equation}
Before considering the other term in (\ref{Zphi}), $\sum_{{\bf x}, {\bf
y}} J^{-1}({\bf x}, {\bf y}) \, \phi({\bf x}) \, \phi({\bf y})$, let us
first study the related object $\sum_{{\bf x}, {\bf y}} J({\bf x}, {\bf
y}) \, s({\bf x}) \, s({\bf y})$ which shows up in the original Ising
Hamiltonian (\ref{HIsing}). With our choice (\ref{nene}) of the
interaction, the Taylor expansion of this object becomes
\begin{equation}
\sum_{{\bf x}, {\bf y}} J({\bf x}, {\bf y}) \, s({\bf x}) \,
s({\bf y}) = J \sum_{\bf x} s({\bf x}) \, (2d + a^2 \nabla^2 + \cdots)
\, s({\bf x}),
\end{equation}
neglecting higher orders in derivatives. From this it follows that
\begin{equation}
\sum_{{\bf x}, {\bf y}} J^{-1}({\bf x}, {\bf y}) \, \phi({\bf x}) \,
\phi({\bf y}) = J^{-1} \sum_{\bf x} \phi({\bf x}) \left(\frac{1}{2d} -
\frac{1}{4d^2} a^2 \nabla^2 + \cdots \right) \phi({\bf x}),
\end{equation}
and the partition function (\ref{Zphi}) becomes in the small-$\phi$
approximation
\begin{equation} \label{smallZ}
Z = \prod_{\bf x} \int \mbox{d} \phi({\bf x}) \, {\rm e}^{- \beta H},
\end{equation}
with $H$ the so-called Landau-Ginzburg Hamiltonian
\begin{equation} \label{LandauO}
H = \sum_{\bf x} \left[ \frac{a^2}{8 d^2 J} (\nabla
\phi)^2 + \frac{1}{2} \left( \frac{1}{2d J} - \beta \right) \phi^2 +
\frac{\beta^3}{12} \phi^4 \right].
\end{equation}
The model has a classical phase transition when the coefficient of the
$\phi^2$-term changes sign. This happens when $\beta = 1/(2 d J)$, in accord
with the conclusion obtained by inspecting the self-consistent equation
for the magnetization (\ref{self-con}).
In the mean-field approximation, the thermal fluctuations around the
mean-field configuration are ignored, so that $\phi$ becomes a
nonfluctuating field. The functional integral $\prod_{\bf x} \int \mbox{d} \phi
({\bf x})$ is approximated by the saddle point.
For future reference we go over to the continuum by letting $a
\rightarrow 0$. To this end we replace the discrete sum $\sum_{\bf x}$ by
the integral $a^{-d} \int_{\bf x}$, and rescale the field
$\phi({\bf x})$,
\begin{equation}
\phi ({\bf x}) \rightarrow \phi'({\bf x})= \sqrt{\frac{\beta a
^{2-d}}{4 d^2 J}} \phi ({\bf x}),
\end{equation}
such that the coefficient of the gradient term in the Hamiltonian takes
the canonical form of $\tfrac{1}{2}$. In this way the Hamiltonian
becomes
\begin{equation} \label{LandauC}
\beta H = \int_{\bf x} \left[ \frac{1}{2} (\nabla \phi)^2 + \frac{1}{2}
r_0 \phi^2 + \frac{1}{4!} \lambda_0 \phi^4 \right],
\end{equation}
where we dropped the prime on the field; the parameter $r_0$ and the
coupling constant $\lambda_0$ are given by
\begin{equation}
r_0 = \frac{\beta_0^{-2}}{J a^2} (\beta_0-\beta), \;\;\;
\lambda_0 =
\frac{4 \beta^2}{J^2 \beta_0^4 } a^{d-4}.
\end{equation}
The partition function now reads
\begin{equation} \label{Zfunct}
Z = \int \mbox{D} \phi \, {\rm e}^{- \beta H},
\end{equation}
where the functional integral $\int \mbox{D} \phi$ denotes the continuum
limit of the product of integrals $\prod_{\bf x} \int \mbox{d} \phi({\bf
x})$. The last two terms in the integrand of (\ref{LandauC}) constitute
the potential ${\cal V}(\phi)$,
\begin{equation}
{\cal V}(\phi) = \frac{1}{2} r_0 \phi^2 + \frac{1}{4!} \lambda_0 \phi^4.
\end{equation}
In Fig.\ \ref{fig:Isingpot}, the potential ${\cal V}(\phi)$ is depicted
in the high-temperature phase where $r_0 > 0$, and also in the
low-temperature phase where $r_0<0$. The minimum of the potential in
the low-temperature phase is obtained for a value $\phi \neq 0$, whereas
in the high-temperature phase the minimum is always at $\phi=0$.
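The location of the minima is easily checked: setting ${\cal V}'(\phi) = r_0 \phi + \lambda_0 \phi^3/6 = 0$ gives $\phi = \pm\sqrt{-6 r_0/\lambda_0}$ for $r_0 < 0$, and only $\phi = 0$ for $r_0 > 0$. A small numerical sketch (the values of $r_0$ and $\lambda_0$ are illustrative):

```python
import math

def V(phi, r0, lam0):
    """Landau-Ginzburg potential V(phi) = r0*phi^2/2 + lam0*phi^4/4!."""
    return 0.5 * r0 * phi**2 + lam0 * phi**4 / 24.0

lam0 = 1.0
# High-temperature phase (r0 > 0): minimum at phi = 0.
# Low-temperature phase (r0 < 0): V'(phi) = 0 gives the nontrivial
# minima phi = +/- sqrt(-6*r0/lam0).
for r0 in (0.5, -0.5):
    phis = [i * 1e-3 for i in range(-4000, 4001)]
    phi_min = min(phis, key=lambda p: V(p, r0, lam0))
    print(r0, phi_min)
```

For $r_0 = -0.5$ and $\lambda_0 = 1$ the scan locates the minimum at $|\phi| \approx \sqrt{3}$, as the stationarity condition predicts.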
\begin{figure}
\vspace{-.5cm}
\begin{center}
\epsfxsize=6.cm
\mbox{\epsfbox{Isingpot.eps}}
\end{center}
\vspace{-1.cm}
\caption{The potential ${\cal V}(\phi)$ of the Ising model in the
high-temperature ($r_0 >0$) and low-temperature ($r_0<0$)
phase. \label{fig:Isingpot}}
\end{figure}
\section{Derivative Expansion}
\label{sec:der}
We are interested in taking into account field fluctuations around the
mean field $\phi_{\rm mf}$, which is the solution of the field
equation obtained from (\ref{LandauC}). To this end we set $\phi =
\phi_{\rm mf} + \tilde{\phi}$, and expand the Hamiltonian around the
mean field up to second order in $\tilde{\phi}$:
\begin{equation} \label{Hexp0}
\beta H = \beta H_{\rm mf} + \frac{1}{2} \int_{\bf x} \left[
(\nabla \tilde{\phi})^2 + (r_0 + \tfrac{1}{2} \lambda_0
\phi_{\rm mf}^2 ) \tilde{\phi}^2 \right],
\end{equation}
where $H_{\rm mf}$ denotes the value of the Hamiltonian
(\ref{LandauC}) for $\phi= \phi_{\rm mf}$. Because of the change of
variables, the functional integral $\int \mbox{D} \phi$ changes to $\int
\mbox{D} \tilde{\phi}$. Since we neglected higher-order terms, the
functional integral is Gaussian and easily carried out. The
partition function (\ref{Zfunct}) becomes in this approximation
\begin{eqnarray} \label{Zapp}
Z &=& {\rm e}^{- \beta H_{\rm mf}} \int \mbox{D} \tilde{\phi} \, \exp \left\{-
\frac{1}{2} \int_{\bf x} \left[ (\nabla \tilde{\phi})^2 +
(r_0 + \tfrac{1}{2} \lambda_0 \phi_{\rm mf}^2 ) \tilde{\phi}^2
\right] \right\} \nonumber \\
&=& {\rm e}^{- \beta H_{\rm mf}} \, {\rm Det}^{-1/2} ( {\bf p}^2 +
r_0 + \tfrac{1}{2} \lambda_0 \phi_{\rm mf}^2 ),
\end{eqnarray}
with the momentum operator ${\bf p} = - i \nabla$. The determinant represents the
first corrections to the mean-field expression $\exp (-\beta H_{\rm mf})$ of
the partition function due to fluctuations. Using the identity
$\mbox{Det(A)} = \exp \left[ \mbox{Tr} \, \ln (A)\right]$, we can collect
them in the effective Hamiltonian
\begin{equation} \label{Heff}
\beta H_{\rm eff} = \tfrac{1}{2} {\rm Tr} \ln [ {\bf p}^2 + r_0 +
\tfrac{1}{2} \lambda_0 \phi_{\rm mf}^2 ({\bf x}) ],
\end{equation}
so that to this order
\begin{equation}
Z = {\rm e}^{-\beta(H_{\rm mf} + H_{\rm eff})}.
\end{equation}
As indicated, the mean field $\phi_{\rm mf} ({\bf x})$ may be space
dependent.
We next specify the meaning of the trace Tr appearing in (\ref{Heff}).
Explicitly,
\begin{equation} \label{Hexplicit}
\beta H_{\rm eff} = \frac{1}{2} \int_{\bf x} \ln\left\{ \left[ {\bf
p}^2 + r_0 + \tfrac{1}{2} \lambda_0 \phi_{\rm mf}^2 ({\bf x}) \right]
\delta ({\bf x} - {\bf y})\bigr|_{{\bf y} = {\bf x}} \right\}.
\end{equation}
The delta function arises because the expression in brackets on the
right-hand side of (\ref{Zapp}) is obtained as a functional derivative of
the Hamiltonian (\ref{Hexp0}),
\begin{equation}
\frac{\delta^{2} \beta H}{\delta {\tilde \phi}({\bf x}) \, \delta
{\tilde \phi}({\bf y})} = \left[ {\bf p}^2 + r_0 +
\tfrac{1}{2} \lambda_0 \phi_{\rm mf}^2 ({\bf x}) \right] \,
\delta ({\bf x} - {\bf y}),
\end{equation}
which gives a delta function. Since it is the unit operator in function
space, the delta function may be taken out of the logarithm and we can write
for (\ref{Hexplicit})
\begin{eqnarray} \label{Trexplicit}
\beta H_{\rm eff} &=& \frac{1}{2} \int_{\bf x}
\ln \left[ {\bf p}^2 + r_0 +
\tfrac{1}{2} \lambda_0 \phi_{\rm mf}^2 ({\bf x}) \right]
\delta ({\bf x} - {\bf y}) \bigr|_{{\bf y} = {\bf x}} \nonumber \\ &=&
\frac{1}{2} \int_{\bf x} \int_{\bf k}
\mbox{e}^{-i{\bf k} \cdot {\bf x}} \, \ln \left[ {\bf p}^2 + r_0 +
\tfrac{1}{2} \lambda_0 \phi_{\rm mf}^2 ({\bf x})
\right] \mbox{e}^{i {\bf k} \cdot {\bf x}}.
\end{eqnarray}
In the last step, we used the integral representation of the delta
function:
\begin{equation}
\delta ({\bf x}) = \int_{\bf k} {\rm e}^{i {\bf k} \cdot {\bf x}},
\end{equation}
shifted the exponential function $\exp (-i {\bf k} \cdot {\bf y})$ to the
left, which is justified because the derivative ${\bf p}$ does not operate
on it, and, finally, set ${\bf y}$ equal to ${\bf x}$. We thus see that the
trace Tr in (\ref{Trexplicit}) stands for the trace over discrete indices as
well as the integration over space and over momentum. The integral
$\int_{\bf k}$ arises because the effective Hamiltonian calculated here is a
one-loop result with ${\bf k}$ the loop momentum.
The integrals in (\ref{Trexplicit}) cannot in general be evaluated in
closed form because the logarithm contains momentum operators and
space-dependent functions in a mixed order. To disentangle the integrals,
one has to resort to a derivative expansion \cite{FAF}, in which the
logarithm is expanded in a Taylor series. Each term contains powers of the
momentum operator ${\bf p}$ which acts on every space-dependent function to
its right. All these operators are shifted to the left by repeatedly
applying the identity
\begin{equation} \label{commu}
f({\bf x}) {\bf p} g({\bf x}) = ({\bf p} + i \nabla) f({\bf x}) g({\bf x}),
\end{equation}
where $f({\bf x})$ and $g({\bf x})$ are arbitrary functions and the
derivative $\nabla$ acts {\it only} on the next object to the right. One
then integrates by parts, so that all the ${\bf p}$'s act to the left where
only a factor $\exp(-i {\bf k} \cdot {\bf x})$ stands. Ignoring total
derivatives and taking into account the minus signs that arise when
integrating by parts, one sees that all occurrences of ${\bf p}$ (an
operator) are replaced with ${\bf k}$ (an integration variable). The
exponential function $\exp(i {\bf k} \cdot {\bf x})$ can at this stage be
moved to the left where it is annihilated by the function $\exp(-i {\bf k}
\cdot {\bf x})$. The momentum integration can now in principle be carried
out and the effective Hamiltonian be cast in the form of an integral over a
local density ${\cal H}_{\rm eff}$:
\begin{equation}
H_{\rm eff} = \int_{\bf x} {\cal H}_{\rm eff}.
\end{equation}
This is in a nutshell how the derivative expansion works.
Let us illustrate the method by applying it to (\ref{Heff}). When we assume
$\phi_{\rm mf}$ to be a constant field $\bar{\phi}$, the effective
Hamiltonian (\ref{Heff}) may be evaluated in closed form:
\begin{equation} \label{Veff0}
\beta {\cal V}_{\rm eff} = \frac{1}{2} \int_{\bf k} \ln ( {\bf k}^2 +
M^2) = \frac{\Gamma(1-d/2)}{d (4 \pi)^{d/2}} M^d,
\;\;\;\;\;\;\;\; M = \sqrt{r_0 + \tfrac{1}{2}\lambda_0 \bar{\phi}^2},
\end{equation}
where instead of a Hamiltonian we introduced a potential ${\cal V}_{\rm
eff}$ to indicate that we are working with a space-independent field
$\bar{\phi}$. To obtain the last equation, we first differentiated $\ln({\bf k}^2
+ M^2)$ with respect to $M^2$ and used the dimensional-regularized integral
\begin{equation}
\int_{\bf k} \frac{1}{({\bf k}^2 + M^2)^\alpha} = \frac{\Gamma(\alpha
-d/2)}{(4 \pi)^{d/2} \Gamma(\alpha)} \frac{1}{\left(M^2\right)^{\alpha-d/2}}
\end{equation}
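As a sanity check on the dimensional-regularized integral, note that for $d=3$ and $\alpha=2$ it is finite and equals $1/8\pi M$. A direct numerical radial integration reproduces this value; the sketch below (illustrative, not part of the text) uses the substitution $k = M \tan u$ to map the half-line onto a finite interval before applying Simpson's rule:

```python
import math

def k_integral(M, n=20_000):
    """Radial form of the integral of 1/(k^2+M^2)^2 over d^3k/(2*pi)^3.
    Substitute k = M*tan(u), mapping (0, inf) onto (0, pi/2)."""
    a, b = 0.0, math.pi / 2 - 1e-9
    h = (b - a) / n
    def f(u):
        k = M * math.tan(u)
        jac = M / math.cos(u)**2          # dk/du
        return k**2 / (k**2 + M**2)**2 * jac / (2 * math.pi**2)
    # Composite Simpson's rule (n even).
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

M = 1.3
print(k_integral(M), 1 / (8 * math.pi * M))  # the two should agree closely
```

In this finite case no regularization is needed, so the numerical value and the $\Gamma$-function formula can be compared directly.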
to suppress irrelevant ultraviolet divergences, and finally integrated again
with respect to $M^2$. To illustrate the power of dimensional
regularization, let us consider the case $d=3$ in detail. Introducing a
momentum cutoff, we find in the large-$\Lambda$ limit
\begin{equation}
\beta {\cal V}_{\rm eff} = \frac{1}{8 \pi^2} \lambda_0 \bar{\phi}^2 \Lambda -
\frac{1}{12 \pi} M^3 + {\cal O} \left(\frac{1}{\Lambda}\right),
\end{equation}
where we ignored irrelevant, $\bar{\phi}$-in\-depen\-dent constants
proportional to powers of $\Lambda$. We see that in (\ref{Veff0}) only the
finite part emerges. That is, all terms that diverge with a strictly
positive power of the momentum cutoff are suppressed in dimensional
regularization. These contributions, which come from the ultraviolet
region, cannot physically be very relevant because the simple
Landau-Ginzburg model (\ref{LandauC}) stops being valid here and new
theories are required. It is a virtue of dimensional regularization that
these irrelevant divergences are suppressed.
Expanded up to fourth order in $\bar{\phi}$, (\ref{Veff0}) becomes
\begin{equation} \label{Veffexp}
\beta {\cal V}_{\rm eff} = - \frac{1}{12 \pi} r_0^{3/2} - \frac{1}{16 \pi}
\lambda_0 r_0^{1/2} \bar{\phi}^2 - \frac{1}{128 \pi} \frac{\lambda^2_0}{r_0^{1/2}}
\bar{\phi}^4 + \cdots,
\end{equation}
where the first term is an irrelevant $\bar{\phi}$-independent constant.
These one-loop contributions, when added to the mean-field potential
\begin{equation}
\beta {\cal V}_0 = \frac{1}{2} r_0 \bar{\phi}^2+ \frac{1}{4!} \lambda_0
\bar{\phi}^4,
\end{equation}
lead to a renormalization of the bare parameters
\begin{equation}
\lambda = \lambda_0 - \frac{3}{16 \pi} \frac{\lambda_0^2}{r_0^{1/2}}, \;\;\;
r = r_0 - \frac{1}{8 \pi} \lambda_0 r_0^{1/2}.
\end{equation}
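The expansion (\ref{Veffexp}) can be verified numerically: the difference between the full one-loop potential $-M^3/12\pi$ and its truncation at order $\bar{\phi}^4$ should scale as $\bar{\phi}^6$. A quick sketch with illustrative parameter values:

```python
import math

def V_full(phi, r0, lam0):
    """One-loop potential in d=3: -M^3/(12*pi), M^2 = r0 + lam0*phi^2/2."""
    return -(r0 + 0.5 * lam0 * phi**2)**1.5 / (12 * math.pi)

def V_series(phi, r0, lam0):
    """The expansion up to order phi^4."""
    return (-r0**1.5 / (12 * math.pi)
            - lam0 * math.sqrt(r0) * phi**2 / (16 * math.pi)
            - lam0**2 * phi**4 / (128 * math.pi * math.sqrt(r0)))

r0, lam0 = 1.0, 0.3
for phi in (0.1, 0.05):
    print(abs(V_full(phi, r0, lam0) - V_series(phi, r0, lam0)))
# Halving phi shrinks the mismatch by ~2^6 = 64, confirming that the
# truncation error starts at order phi^6.
```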
When $\phi_{\rm mf}$ is not a constant field, we write the mean field
$\phi_{\rm mf}({\bf x})$, solving the field equation, as $\phi_{\rm mf}({\bf
x}) = \bar{\phi} + {\hat \phi}({\bf x})$, where $\bar{\phi}$ is the constant
field introduced above (\ref{Veff0}), and expand the logarithm at the
right-hand side of (\ref{Heff}) to second order in ${\hat \phi}$:
\begin{equation} \label{Heffexpand}
\beta {\hat H}_{\rm eff} = \frac{1}{4} \lambda_0 {\rm Tr} \frac{1}{{\bf p}^2
+ M^2} (2 \bar{\phi} {\hat \phi} + {\hat \phi}^2 ) - \frac{1}{4}
\lambda^2_0 \bar{\phi}^2 {\rm Tr} \frac{1}{{\bf p}^2 + M^2} {\hat \phi}
\frac{1}{{\bf p}^2 + M^2} {\hat \phi},
\end{equation}
with
\begin{eqnarray}
{\hat H}_{\rm eff} &:=& H_{\rm eff} (\bar{\phi} + {\hat \phi}) - H_{\rm eff}
(\bar{\phi}) \nonumber \\ &=& \int_{\bf x} \left[ \frac{\partial {\cal
V}_{\rm eff}}{\partial \bar{\phi}} \hat{\phi} + \frac{1}{2}
\frac{\partial^2 {\cal V}_{\rm eff}}{\partial \bar{\phi}^2} \hat{\phi}^2 +
\frac{1}{2} {\cal Z}(\bar{\phi}) (\nabla \hat{\phi})^2 + \cdots \right] .
\end{eqnarray}
Moving the momentum operator ${\bf p}$ to the left by using (\ref{commu}),
we obtain
\begin{equation} \label{H2nd}
\beta {\hat H}_{\rm eff} = \frac{1}{4} \lambda_0 {\rm Tr} \frac{1}{{\bf p}^2
+ M^2} (2 \bar{\phi} {\hat \phi} + {\hat \phi}^2 ) - \frac{1}{4}
\lambda^2_0 \bar{\phi}^2 {\rm Tr} \frac{1}{{\bf p}^2 + M^2}
\frac{1}{({\bf p}- i \nabla)^2 + M^2} {\hat \phi}{\hat \phi},
\end{equation}
where we recall the definition of the derivative $\nabla$ as operating only
on the first object to its right. Using the integral
\begin{equation}
\int_{\bf k} \frac{1}{{\bf k}^2 + M^2} \frac{1}{({\bf k} + {\bf q})^2 + M^2}
= \frac{1}{4 \pi |{\bf q}| } \arctan\left(\frac{|{\bf q}|}{2 M} \right),
\end{equation}
with ${\bf q} = -i \nabla$, we obtain for (\ref{H2nd})
\begin{equation}
\beta {\hat H}_{{\rm eff}} = - \frac{1}{16 \pi} \lambda_0 M (2 \bar{\phi}
{\hat \phi} + {\hat \phi}^2 ) - \frac{1}{16 \pi} \lambda^2_0 \bar{\phi}^2 \,
{\hat \phi} \left[\frac{1}{|{\bf q}|}
\arctan\left(\frac{|{\bf q}|}{2 M} \right) \right]{\hat \phi}.
\end{equation}
We note that only terms with an even number of derivatives appear in the
expansion of this expression. The coefficient of the linear term is
$\partial \beta {\cal V}_{\rm eff}/\partial \bar{\phi}$, while that of the
two quadratic terms independent of ${\bf q}$ is $\tfrac{1}{2} \partial^2
\beta {\cal V}_{\rm eff}/\partial \bar{\phi}^2$, as it should be. For
${\cal Z}$ we obtain
\begin{equation}
{\cal Z}(\bar{\phi}) = \frac{1}{192 \pi} \frac{\lambda_0^2 \bar{\phi}^2}{M^3}.
\end{equation}
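The gradient coefficient ${\cal Z}$ derives from the ${\bf q}^2$ term in the small-momentum expansion $(1/|{\bf q}|) \arctan (|{\bf q}|/2M) = 1/2M - {\bf q}^2/24 M^3 + {\cal O}({\bf q}^4)$. This expansion is readily checked numerically; a standalone sketch with illustrative values:

```python
import math

def kernel(q, M):
    """The momentum-space kernel (1/|q|) * arctan(|q| / 2M)."""
    return math.atan(q / (2 * M)) / q

def kernel_series(q, M):
    """Its small-q expansion: 1/(2M) - q^2/(24 M^3) + O(q^4)."""
    return 1 / (2 * M) - q**2 / (24 * M**3)

M = 0.7
for q in (0.2, 0.1):
    print(abs(kernel(q, M) - kernel_series(q, M)))
# Halving q reduces the mismatch by ~2^4 = 16: the next term is O(q^4).
```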
Other terms involving higher powers of ${\hat\phi}$, obtained from
expanding the logarithm in (\ref{Heff}) to higher orders, can be treated
in a similar fashion.
\chapter*{Notation\markboth{Notation}{Notation}}
\label{chap:not}
We adopt Feynman's notation and denote a spacetime point by $x=x_\mu =(t,{\bf
x})$, $\mu = 0,1, \cdots,d$, with $d$ the number of space dimensions, while
the energy $k_0$ and momentum ${\bf k}$ of a particle will be denoted by
$k=k_\mu = (k_0,{\bf k})$. The time derivative $\partial_0 =
\partial/\partial t$ and the gradient $\nabla$ are sometimes combined in a
single vector $\tilde{\partial}_\mu = (\partial_0, -\nabla)$. The tilde on
$\partial_\mu$ is to alert the reader to the minus sign appearing in the
spatial components of this vector. We define the scalar product $k \cdot x
= k_\mu x_\mu = k_0 t - {\bf k} \cdot {\bf x}$ and use Einstein's summation
convention. Because of the minus sign in the definition of the vector
$\tilde{\partial}_\mu$ it follows that $\tilde{\partial}_\mu a_\mu =
\partial_0 a_0 + \nabla \cdot {\bf a}$, with $a_\mu$ an arbitrary vector.
Integrals over spacetime are denoted by
$$
\int_{x} = \int_{t,{\bf x}} = \int \mbox{d} t \, \mbox{d}^d x,
$$
while those over energy and momentum by
$$
\int_k = \int_{k_0,{\bf k}} = \int \frac{\mbox{d} k_0}{2 \pi}
\frac{\mbox{d}^d k}{(2 \pi)^d}.
$$
When no integration limits are indicated, the integrals are assumed to
run over all possible values of the integration variables.
Natural units $\hbar = c = k_{\rm B} = 1$ are adopted throughout.
\chapter{Prelude \label{chap:intro}}
Continuous quantum phase transitions have attracted considerable
attention in this decade, both from experimentalists and from
theorists. (For reviews see Refs.\ \cite{UzunovB,LG,Sachdev,SGCS}.)
These transitions, taking place at the absolute zero of temperature, are
dominated by quantum and not by thermal fluctuations as is the case in
classical finite-temperature phase transitions. Whereas time plays no
role in a classical phase transition, being an equilibrium phenomenon,
it becomes important in quantum phase transitions. The dynamics is
characterized by an additional critical exponent, the so-called dynamic
exponent, which measures the asymmetry between the time and space
dimensions. The natural language to describe these transitions is
quantum field theory. In particular, the functional-integral approach,
which can also be employed to describe classical phase transitions,
turns out to be highly convenient.
The subject is at the border of condensed matter and statistical physics.
Typical systems being studied are superfluid and superconducting films,
quantum-Hall and related two-dimensional electron systems, as well as
quantum spin systems. Despite the diversity in physical content, the
quantum critical behavior of these systems shows surprising similarities. It
is fair to say that the present theoretical understanding of most of the
experimental results is scant.
The purpose of these Lectures is to provide the reader with a framework
for studying quantum phase transitions. A central role is played by a
repulsively interacting Bose gas at the absolute zero of temperature.
The universality class defined by this paradigm is believed to be of
relevance to most of the systems studied. Without impurities and a
Coulomb interaction, the quantum critical behavior of this system turns
out to be surprisingly simple. However, these two ingredients are
essential and have to be included. Very general hyperscaling arguments
are powerful enough to determine the exact value of the dynamic exponent
in the presence of impurities and a Coulomb interaction, but the other
critical exponents become highly intractable.
The emphasis in these Lectures will be on effective theories, giving a
description of the system under study valid at low energy and small
momentum. The rationale for this is the observation that the (quantum)
critical behavior of continuous phase transitions is determined by such
general features as the dimensionality of space, the symmetries
involved, and the dimensionality of the order parameter. It does not
depend on the details of the underlying microscopic theory. In the
process of deriving an effective theory starting from some microscopic
model, irrelevant degrees of freedom are integrated out and only those
relevant for the description of the phase transition are retained.
Similarities in critical behavior in different systems can, accordingly,
be more easily understood from the perspective of effective field
theories.
The ones discussed in these Lectures are so-called {\it phase-only}
theories. They are the dynamical analogs of the familiar O(2) nonlinear
sigma model of classical statistical physics. As in that model, the focus
will be on phase fluctuations of the order parameter. The inclusion of
fluctuations in the modulus of the order parameter is generally believed not
to change the critical behavior. Indeed, there are convincing arguments
that both the Landau-Ginzburg model with varying modulus and the nonlinear
O($n$) sigma model with fixed modulus belong to the same universality class.
For technical reasons a direct comparison is not possible, the
Landau-Ginzburg model usually being investigated in an expansion around four
dimensions, and the nonlinear sigma model in one around two.
In the case of a repulsively interacting Bose gas at the absolute zero
of temperature, the situation is particularly simple, as phase fluctuations
are the only type of field fluctuations present.
These Lectures cover exclusively lower-dimensional systems. The reason
is that, as it will turn out, in three space dimensions and higher the
quantum critical behavior is in general Gaussian and therefore not very
interesting.
Since time and how it compares to the space dimensions is an important
aspect of quantum phase transitions, Galilei invariance will play an
important role in the discussion.
\chapter{Quantum Phase Transitions \label{chap:qpt}}
This chapter is devoted to continuous phase transitions at the absolute zero
of temperature; so-called quantum phase transitions. Unlike in classical
phase transitions taking place at finite temperature and in equilibrium,
time plays an important role in quantum phase transitions. Put differently,
whereas the critical behavior of classical 2nd-order phase transitions is
governed by thermal fluctuations, that of 2nd-order quantum transitions is
controlled by quantum fluctuations. These transitions, which have attracted
much attention in recent years (for an introductory review, see Ref.\
\cite{SGCS}), are triggered by varying not the temperature, but some
other parameter in the system, like the applied magnetic field, the
charge carrier density, or the disorder strength. The quantum phase
transitions we will be discussing here are all dominated by phase
fluctuations.
\section{Scaling}
The natural language to describe quantum phase transitions
is quantum field theory. In addition to a diverging correlation length
$\xi$, quantum phase transitions also have a diverging correlation time
$\xi_t$. They indicate, respectively, the distance and time period over
which the order parameter characterizing the transition fluctuates
coherently. The way the diverging correlation time relates to the
diverging correlation length,
\begin{equation} \label{zcrit}
\xi_t \sim \xi^z,
\end{equation}
defines the so-called dynamic exponent $z$. It is a measure of the
asymmetry between the time and space directions and tells us how long it
takes for information to propagate across a distance $\xi$. The traditional
scaling theory of classical 2nd-order phase transitions, first put forward
by Widom \cite{Widom}, is easily extended to include the time dimension
\cite{Ma} because relation (\ref{zcrit}) implies the presence of only one
independent diverging scale. Let $\delta = K - K_{\rm c}$, with $K$ the
parameter that drives the phase transition, measure the distance from
the critical coupling $K_{\rm c}$. A physical observable at the
absolute zero of temperature $O(k_0,|{\bf k}|,K)$ can in the critical
region close to the transition be written as
\begin{equation} \label{scaling0}
O(k_0,|{\bf k}|,K) = \xi^{d_O} {\cal O}(k_0 \xi_t, |{\bf k}| \xi),
\;\;\;\;\;\;\;\; (T=0),
\end{equation}
where $d_O$ is the dimension of the observable $O$. The right-hand side
does not depend explicitly on $K$; only implicitly through $\xi$ and
$\xi_t$. The closer one approaches the critical coupling $K_{\rm c}$, the
larger the correlation length and time become.
Since a physical system is always at some finite temperature, we have to
investigate how the scaling law (\ref{scaling0}) changes when the
temperature becomes nonzero. The easiest way to include temperature in
a quantum field theory is to go over to imaginary time $\tau = it$, with
$\tau$ restricted to the interval $0 \leq \tau \leq \beta$. The
temporal dimension thus becomes of finite extent. The critical behavior
of a phase transition at finite temperature is still controlled by the
quantum critical point provided $\xi_t < \beta$. If this condition is
fulfilled, the system does not see the finite extent of the time
dimension. This is what makes quantum phase transitions experimentally
accessible. Instead of the zero-temperature scaling (\ref{scaling0}),
we now have the finite-size scaling
\begin{equation} \label{scalingT}
O(k_0,|{\bf k}|,K,T) = \beta^{d_O/z} {\cal O}(k_0 \beta, |{\bf k}|
\beta^{1/z},\beta/\xi_t), \;\;\;\;\;\;\;\; (T \neq 0).
\end{equation}
The distance to the quantum critical point is measured by the ratio
$\beta/\xi_t \sim |\delta|^{z\nu}/T$.
\section{Repulsively Interacting Bosons}
\label{sec:BT}
The first quantum phase transition we wish to investigate is the
superfluid-to-Mott-insulating transition of interacting bosons in the
absence of impurities \cite{FWGF}. The transition is described by the
nonrelativistic $|\phi|^4$-theory (\ref{eff:Lagr}), which becomes critical
at the absolute zero of temperature at some (positive) value $\mu_{\rm c}$
of the renormalized chemical potential. The Mott insulating phase is
destroyed and makes way for the superfluid phase as $\mu$ increases.
Whereas in the superfluid phase the single-particle (Bogoliubov) spectrum is
gapless and the system compressible, the single-particle spectrum of the
insulating phase has an energy gap and the compressibility $\kappa$
vanishes here.
The nature of the insulating phase can best be understood by putting the
theory on a lattice. The lattice model is defined by the Hamiltonian
\begin{equation} \label{BT:hu}
H_{\rm H} = - t \sum_j (a^{\dagger}_j a_{j+1} + {\rm
h.c.}) + \sum_j (- \mu_{\rm L} \hat{n}_j + U \hat{n}_j^2),
\end{equation}
where the sum $\sum_j$ is over all lattice sites. The operator
$a^{\dagger}_j$ creates a boson at site $j$ and $\hat{n}_j = a^{\dagger}_j
a_j$ is the particle number operator at that site; $t$ is the hopping
parameter, $U$ the interparticle repulsion, and $\mu_{\rm L}$ is the
chemical potential on the lattice. The zero-temperature phase
diagram is as follows \cite{FWGF}. In the limit $t/U \rightarrow 0$, each
site is occupied by an integer number $n$ of bosons which minimizes the
on-site energy (see Fig.\ \ref{fig:occu})
\begin{equation}
\epsilon(n) = -\mu_{\rm L} n + U n^2.
\end{equation}
It follows that within the interval $2n-1 < \mu_{\rm L}/U < 2n+1$, each
site is occupied by exactly $n$ bosons. When the chemical potential is
negative, $n=0$. The intervals become smaller when $t/U$ increases.
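The staircase structure described here follows directly from minimizing the on-site energy; a minimal sketch (in units where $U = 1$; the helper name is illustrative):

```python
def occupation(mu_over_U, n_max=50):
    """Number of bosons per site minimizing eps(n) = -mu*n + U*n^2
    in the t/U -> 0 limit (energies in units of U)."""
    return min(range(n_max + 1), key=lambda n: -mu_over_U * n + n**2)

# Within 2n-1 < mu/U < 2n+1 each site holds exactly n bosons,
# and n = 0 for negative chemical potential.
print([occupation(mu) for mu in (-0.5, 0.5, 1.5, 2.5, 3.5)])
# -> [0, 0, 1, 1, 2]
```

Comparing $\epsilon(n)$ with $\epsilon(n \pm 1)$ reproduces the interval condition $2n-1 < \mu_{\rm L}/U < 2n+1$ quoted above.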
\begin{figure}
\begin{center}
\epsfxsize=8.cm
\mbox{\epsfbox{occu.eps}}
\end{center}
\caption{Schematic representation of the average number $n$ of particles per
site as a function of the chemical potential $\mu_{\rm L}$ at some finite
value of the hopping parameter $t < t_{\rm c}$. \label{fig:occu}}
\end{figure}
Within such an interval, where the particles are pinned to the lattice
sites, the single-particle spectrum has an energy gap, and the system is
in the insulating phase with zero compressibility, $\kappa =
n^{-2}\partial n/\partial \mu_{\rm L} =0$. Outside these intervals, the
particles delocalize and can hop through the lattice. Being at zero
temperature, the delocalized bosons condense in a superfluid state. The
single-particle spectrum is gapless here and the system compressible
($\kappa \neq 0$).
As $t/U$ increases, the gap in the single-particle spectrum as well as
the width of the intervals decrease and eventually vanish at some
critical value $t_{\rm c}$. For values $t>t_{\rm c}$ of the hopping
parameter, the superfluid phase is the only phase present (see Fig.\
\ref{fig:qphase}).
\begin{figure}
\epsfxsize=10.cm
\mbox{\epsfbox{qphase.eps}}
\caption{Schematic representation of the phase diagram of the lattice
model (\protect\ref{BT:hu}) at the absolute zero of temperature
\protect\cite{FWGF}.
\label{fig:qphase}}
\end{figure}
The continuum model (\ref{eff:Lagr}), with $\mu > \mu_{\rm c}$, describes the
condensed delocalized lattice bosons which are present when the density
deviates from integer values (see Fig.\ \ref{fig:occu}). In the limit $\mu
\rightarrow \mu_{\rm c}$ from above, the number of delocalized bosons
decreases and eventually becomes zero at the phase boundary $\mu=\mu_{\rm
c}$ between the superfluid and insulating phases.
Various quantum phase transitions belong to the universality class
defined by the zero-density transition of repulsively interacting
bosons. For example, itinerant quantum antiferromagnets
\cite{Hertz,Ian,KB} as well as lower-dimensional (clean) superconductors
belong to this universality class. As we have seen in Sec.\
\ref{sec:comp}, Cooper pairs become tightly bound composite particles in
the strong-coupling limit, which are described by the nonrelativistic
$|\phi|^4$-theory with a weak repulsive interaction. For $\mu >
\mu_{\rm c}$, the field $\phi$ now describes the condensed delocalized
Cooper pairs. When the chemical potential decreases, the condensate
diminishes, and the system again becomes insulating for $\mu < \mu_{\rm
c}$ \cite{CFGWY}. By continuity, we expect also the
superconductor-to-insulator transition of a (clean) weakly interacting
BCS superconductor to be in this universality class. The restriction to
lower dimensions is necessary for two different reasons. First, only
for $d \leq 2$ is the penetration depth sufficiently large [see, for
example, below Eq.\ (\ref{2sc:mod})], so that it is appropriate to work
in the limit $\lambda_{\rm L} \rightarrow \infty$ with no fluctuating
gauge field
\cite{FGG}. Second, in lower dimensions, the energy gap which the fermionic
excitations face remains finite at the critical point, so that it is
appropriate to ignore these degrees of freedom. Moreover, since the
coherence length also remains finite at the critical point, the Cooper
pairs look like point particles on the scale of the diverging
correlation length associated with the phase fluctuations,
even in the weak-coupling limit \cite{CFGWY}.
In the preceding chapter, we argued that the nonrelativistic
$|\phi|^4$-theory is also of importance for the description of the
fractional quantized Hall effect (FQHE), where it describes---after
coupling to the Chern-Simons term---the original electrons bound to an
odd number of flux quanta. As a function of the applied magnetic field,
this two-dimensional system undergoes a zero-temperature transition
between a quantum Hall liquid, where the Hall conductance is quantized
in odd fractions of $e^2/2 \pi$, and an insulating phase. The Hall
liquid corresponds to the phase with $\mu > \mu_{\rm c}$, while the
other phase again describes the insulating phase.
It should be noted, however, that in most of the applications of the
nonrelativistic $|\phi|^4$-theory mentioned here, impurities play an
important role; this will be the main subject of Sec.\ \ref{sec:Dirt}.
The critical properties of the zero-density transition of the
nonrelativistic $|\phi|^4$-theory were first studied by Uzunov
\cite{Uzunov}. To facilitate the discussion let us make use of the fact
that in nonrelativistic theories the mass is---as far as critical phenomena
are concerned---an irrelevant parameter which can be transformed away. This
transformation changes, however, the scaling dimensions of the $\phi$-field
and the coupling constant, which are of relevance to the renormalization-group
theory. The engineering dimensions become
\begin{equation} \label{BT:scale}
[{\bf x}] = -1, \;\;\;\; [t] = -2, \;\;\;\; [\mu_0] = 2, \;\;\;\;
[\lambda_0] = 2-d, \;\;\;\; [\phi] = \tfrac{1}{2}d,
\end{equation}
with $d$ denoting the number of space dimensions. In two space dimensions
the coupling constant $\lambda_0$ is dimensionless, showing that the
$|\phi|^4$-term is a marginal operator, and $d_{\rm c}=2$ the upper critical
space dimension. Uzunov showed that below the upper critical dimension
there appears a non-Gaussian infrared-stable (IR) fixed point. He computed
the corresponding critical exponents to all orders in perturbation theory
and showed them to have Gaussian values, $\nu=\tfrac{1}{2}, \; z=2, \;
\eta=0$. Here, $\nu$ characterizes the divergence of the
correlation length, $z$ is the dynamic exponent, and $\eta$ is the
correlation-function exponent which determines the anomalous dimension of
the field $\phi$. The unexpected conclusion that a non-Gaussian fixed point
has nevertheless Gaussian exponents is rooted in the analytic structure of
the nonrelativistic propagator at zero bare chemical potential ($\mu_0=0$):
\begin{equation} \label{BT:Green}
\raisebox{-0.3cm}{\epsfxsize=2.5cm
\epsfbox{prop1.eps} }
= G(k) = \frac{i {\rm e}^{i k_0 \eta}}{k_0 - \tfrac{1}{2}{\bf k}^2 + i \eta },
\end{equation}
where, as before, $\eta$ is a small positive constant that has to be
taken to zero after the loop integrations over the energies have been
carried out. The factor $\exp(i k_0 \eta)$ is an additional convergence
factor typical for nonrelativistic theories, which is needed for Feynman
diagrams involving only one $\phi$-propagator. The rule $k_0
\rightarrow k_0 + i \eta$ in (\ref{BT:Green}) expresses the fact that in
this nonrelativistic theory particles propagate only forward in time.
In diagrams involving loops with more than one propagator, the integrals
over the loop energy are convergent and can be evaluated by contour
integration with the contour closed in either the upper or the lower
half plane. If a diagram contains a loop which has all its poles in the
same half plane, it consequently vanishes. Pictorially, such a loop has
all its arrows, representing the Green functions contained in the loop,
oriented in a clockwise or anticlockwise direction \cite{OB} (see Fig.\
\ref{fig:oriented1}).
\begin{figure}
\begin{center}
\epsfxsize=2.cm
\mbox{\epsfbox{oriented1.eps}}
\end{center}
\caption{A closed oriented loop. \label{fig:oriented1}}
\end{figure}
We will refer to them as closed oriented loops. Owing to this property most
diagrams are zero. In particular, all self-energy diagrams vanish. The
only surviving ones are the so-called ring diagrams which renormalize the
vertex (see Fig.\ \ref{fig:ring}).
\begin{figure}
\begin{center}
\epsfxsize=8.cm
\mbox{\epsfbox{ring.eps}}
\end{center}
\caption{Ring diagrams renormalizing the vertex function of the neutral
$|\phi|^4$-theory. \label{fig:ring}}
\end{figure}
Because this class of diagrams constitutes a geometric series, the one-loop
result is already exact. The vertex renormalization leads to a non-Gaussian
fixed point in $d < 2$, while the vanishing of all the self-energy diagrams
ensures that the exponents characterizing the transition are not affected by
quantum fluctuations and retain their Gaussian values \cite{Uzunov}. These
results have been confirmed by numerical simulations in $d=1$ \cite{BSZ} and
also by general scaling arguments \cite{FF,FWGF}.
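The exactness of the one-loop result can be made explicit. Denoting the single surviving one-loop bubble by $B(k)$ (a schematic shorthand, not a symbol used elsewhere in this chapter), the ring diagrams of Fig.\ \ref{fig:ring} sum to a geometric series,
\begin{equation}
\lambda(k) = \lambda_0 \sum_{n=0}^{\infty} \left[- \lambda_0 B(k)\right]^n
= \frac{\lambda_0}{1 + \lambda_0 B(k)},
\end{equation}
so that summing the single bubble already yields the full vertex; signs and symmetry factors are suppressed in this sketch.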
To understand the scaling arguments, let us consider the two terms in
the effective theory (\ref{eff:Leff}) quadratic in the Goldstone field
$\varphi$ with $m$ effectively set to 1 \cite{FF}:
\begin{equation} \label{general}
{\cal L}_{\rm eff}^{(2)} = - \tfrac{1}{2} \rho_{\rm s} (\nabla
\varphi)^2 + \tfrac{1}{2} \bar{n}^2 \kappa (\partial_0 \varphi)^2.
\end{equation}
We have written this in the most general form. The coefficient $\rho_{\rm
s}$ is the superfluid mass density which in the presence of, for example,
impurities does not equal $m \bar{n}$---even at the absolute zero of
temperature. The other coefficient,
\begin{equation}
\bar{n}^2 \kappa = \frac{\partial \bar{n}}{\partial \mu} = \lim_{k
\rightarrow 0} \Pi_{0 0} (0,{\bf k}) ,
\end{equation}
with $\Pi_{0 0}$ the (0 0)-component of the full polarization tensor
(\ref{bcs:cruc}), involves the full compressibility and particle number
density of the system at rest. This is because the chemical potential
is, according to (\ref{jo-pwa}), represented in the effective theory by
$\mu = -\partial_0 \varphi$ and
\begin{equation}
\frac{\partial^2 {\cal L}_{\rm eff}}{\partial \mu^2} = \bar{n}^2 \kappa.
\end{equation}
Equation (\ref{general}) leads to the general expression of the sound
velocity
\begin{equation}
c^2 = \frac{\rho_{\rm s}}{\bar{n}^2 \kappa}
\end{equation}
at the absolute zero of temperature.
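As a simple consistency check (not part of the original argument), consider the pure weakly interacting Bose gas with an interaction term $-\lambda_0 |\phi|^4$, as assumed here. Minimizing the classical energy gives $\mu = 2 \lambda_0 \bar{n}$, so that $\bar{n}^2 \kappa = \partial \bar{n}/\partial \mu = 1/2\lambda_0$, while in the absence of impurities $\rho_{\rm s} = \bar{n}$ (with $m$ set to 1). Hence,
\begin{equation}
c^2 = \frac{\rho_{\rm s}}{\bar{n}^2 \kappa} = 2 \lambda_0 \bar{n},
\end{equation}
the familiar Bogoliubov sound velocity, up to the normalization of the coupling constant.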
Let $\delta \propto \mu - \mu_{\rm c}$ denote the distance from the phase
transition, so that $\xi \sim |\delta|^{-\nu}$. Now, on the one hand,
the singular part of the free energy density $f_{\rm sing}$ arises from
the low-energy, long-wavelength fluctuations of the Goldstone field.
(Here, we adopted the common practice of using the symbol $f$ for the
density $\Omega/V$ and of referring to it as the free energy density.)
The ensemble averages give
\begin{equation}
\langle (\nabla \varphi)^2 \rangle \sim \xi^{-2}, \;\;\;\;
\langle (\partial_0 \varphi)^2 \rangle \sim \xi_t^{-2} \sim \xi^{-2z} .
\end{equation}
On the other hand, dimensional analysis shows that the singular part of
the free energy density scales near the transition as
\begin{equation}
f_{\rm sing} \sim \xi^{-(d+z)}.
\end{equation}
Combining these hyperscaling arguments, we arrive at the following
conclusions:
\begin{equation} \label{hyperrho}
\rho_{\rm s} \sim \xi^{-(d+z-2)}, \;\;\;\; \bar{n}^2 \kappa \sim
\xi^{-(d-z)} \sim |\delta|^{(d-z)\nu}.
\end{equation}
The first conclusion is consistent with the universal jump (\ref{jump})
predicted by Nelson and Kosterlitz \cite{NeKo} which corresponds to
taking $z=0$ and $d=2$. Since $\xi \sim |\delta|^{-\nu}$, $f_{\rm
sing}$ can also be directly differentiated with respect to the chemical
potential to yield for the singular part of the compressibility
\begin{equation}
\bar{n}^2 \kappa_{\rm sing} \sim |\delta|^{(d+z)\nu -2}.
\end{equation}
Fisher and Fisher \cite{FF} went on to argue that there are two
alternatives. Either $\kappa \sim \kappa_{\rm sing}$, implying $z \nu
=1$; or the full compressibility $\kappa$ is constant, implying $z=d$.
The former is consistent with the Gaussian values $\nu=\tfrac{1}{2}, \; z=2$
found by Uzunov \cite{Uzunov} for the pure case in $d < 2$. The latter is
believed to apply to repulsively interacting bosons in a random medium. These
remarkably simple arguments thus predict the exact value $z=d$ for the
dynamic exponent in this case.
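Fisher and Fisher's first alternative can be restated mechanically. The following minimal sketch (a restatement, not part of the original argument) compares the compressibility exponent $(d-z)\nu$ of (\ref{hyperrho}) with the exponent $(d+z)\nu - 2$ obtained by differentiating $f_{\rm sing}$ directly:

```python
# Two expressions for the exponent of the singular compressibility:
# (d - z)*nu from the hyperscaling relation, and (d + z)*nu - 2 from
# differentiating f_sing ~ |delta|^{(d+z) nu} twice with respect to
# the chemical potential.  They coincide precisely when z*nu = 1.
def kappa_exponents(d, z, nu):
    return (d - z) * nu, (d + z) * nu - 2

# Gaussian values of the pure case (Uzunov): nu = 1/2, z = 2, so
# z*nu = 1 and the two exponents agree in any space dimension d.
for d in (1, 2, 3):
    e1, e2 = kappa_exponents(d, z=2, nu=0.5)
    assert abs(e1 - e2) < 1e-12
```

For the second alternative, a constant full compressibility forces $z=d$ instead, and the two exponents need not coincide.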
For later reference, let us consider the charged case and calculate the
conductivity $\sigma$. The only relevant term for this purpose is the
first one in (\ref{general}) with $\nabla \varphi$ replaced by $\nabla
\varphi - e {\bf A}$. We allow the superfluid mass density to vary
in space and time. The term in the action quadratic in ${\bf A}$ then
becomes in the Fourier representation
\begin{equation}
S_\sigma = - \tfrac{1}{2} e^2 \int_{k_0,{\bf k}} {\bf A}(-k_0,-{\bf k})
\rho_{\rm s} (k_0,{\bf k}) {\bf A}(k_0,{\bf k}).
\end{equation}
The electromagnetic current,
\begin{equation}
{\bf j}(k_0,{\bf k}) = \frac{\delta S_\sigma}{\delta {\bf A}(-k_0,-{\bf k})}
\end{equation}
obtained from this action can be written as
\begin{equation}
{\bf j}(k_0,{\bf k}) = \sigma(k_0,{\bf k}) {\bf E}(k_0,{\bf k})
\end{equation}
with the conductivity
\begin{equation} \label{conductivity}
\sigma(k) = i e^2 \frac{\rho_{\rm s}(k)}{k_0}
\end{equation}
essentially given by the superfluid mass density divided by $k_0$, where
it should be remembered that the mass $m$ is effectively set to 1 here.
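The intermediate step is the Fourier transform of ${\bf E} = -\partial_0 {\bf A}$, which with the convention ${\bf A}(x) \propto \exp(-i k_0 t)$ assumed here reads ${\bf E}(k) = i k_0 {\bf A}(k)$. The functional derivative of $S_\sigma$ then gives
\begin{equation}
{\bf j}(k) = - e^2 \rho_{\rm s}(k) \, {\bf A}(k)
= i e^2 \frac{\rho_{\rm s}(k)}{k_0} \, {\bf E}(k),
\end{equation}
from which (\ref{conductivity}) can be read off.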
The above hyperscaling arguments have been extended by Fisher, Grinstein, and
Girvin \cite{FGG} to include the $1/|{\bf x}|$-Coulomb potential. The
quadratic terms in the effective theory (\ref{effCoul}) may be cast in the
general form
\begin{equation}
{\cal L}_{\rm eff}^{(2)} = \frac{1}{2} \left(\rho_{\rm s} {\bf k}^2 -
\frac{|{\bf k}|^{d-1}}{\hat{e}^2} k_0^2\right) |\varphi(k)|^2,
\end{equation}
where $\hat{e}$ is the renormalized charge. From (\ref{effCoul}) we find
that to lowest order:
\begin{equation}
\hat{e}^2 = 2^{d-1} \pi^{(d-1)/2} \Gamma\left[\tfrac{1}{2}(d-1)\right] e_0^2.
\end{equation}
The renormalized charge is connected to the (0 0)-component of the full
polarization tensor (\ref{bcs:cruc}) via
\begin{equation}
\hat{e}^2 = \lim_{|{\bf k}| \rightarrow 0} \frac{|{\bf k}|^{d-1}}{\Pi_{0
0} (0,{\bf k})} .
\end{equation}
A simple hyperscaling argument like the ones given above shows that near
the transition, the renormalized charge scales as
\begin{equation}
\hat{e}^2 \sim \xi^{1-z}.
\end{equation}
They then argued that in the presence of random impurities this charge
is expected to be finite at the transition so that $z=1$. This again is
an exact result which replaces the value $z=d$ of the neutral system.
We have seen that $d_{\rm c}=2$ is the upper critical dimension of the
nonrelativistic $|\phi|^4$-theory. Dimensional analysis shows that for an
interaction term of the form
\begin{equation}
{\cal L}_{\rm i} = - g_0 |\phi|^{2k}
\end{equation}
the upper critical dimension is
\begin{equation}
\label{BT:dcnr}
d_{\rm c} = \frac{2}{k-1}.
\end{equation}
The two important physical cases are $d_{\rm c}=2$, $k=2$ and $d_{\rm c}=1$,
$k=3$, while $d_{\rm c} \rightarrow 0$ when $k \rightarrow \infty$. For space
dimensions $d > 2$ only the quadratic term, $|\phi|^2$, is relevant so that
here the critical behavior is well described by the Gaussian theory.
In the corresponding relativistic theory, the scaling dimensions of $t$ and
${\bf x}$ are, of course, equal $[t] = [{\bf x}] = -1$ and $[\phi] =
\tfrac{1}{2} (d-1)$. This leads to different upper critical (space)
dimensions, viz.,
\begin{equation}
d_{\rm c} = \frac{k+1}{k-1} = \frac{2}{k-1} + 1,
\end{equation}
instead of (\ref{BT:dcnr}). The two important physical cases are here $d_{\rm
c}=3$, $k=2$ and $d_{\rm c}=2$, $k=3$, while $d_{\rm c} \rightarrow 1$ when $k
\rightarrow \infty$. On comparison with the nonrelativistic results, we see
that the nonrelativistic theory has an upper critical space dimension which
is one lower than that of the corresponding relativistic theory (see Table
\ref{table:1}). Heuristically, this can be understood by noting that in a
nonrelativistic context the time dimension counts double in that it
has a scaling dimension twice that of a space dimension [see Eq.\
(\ref{BT:scale})], thereby increasing the {\it effective} spacetime
dimensionality by one.
\begin{table}
\caption{The upper critical space dimension $d_{\rm c}$ of a
nonrelativistic (NR) and a relativistic (R) quantum theory with a
$|\phi|^{2k}$ interaction term.}
\label{table:1}
\begin{center}
\vspace{.5cm}
\begin{tabular}{ccc|cccccccc} \hline \hline
& & & & & & & & & & \\[-.2cm]
& $k$ & & & & $d_{\rm c}$(NR)& & & $d_{\rm c}$(R) & & \\[.1cm]
\hline
& & & & & & & & & & \\[-.2cm]
& 2 & & & & 2 & & & 3 & & \\
& 3 & & & & 1 & & & 2 & & \\
& $\infty$ & & & & 0 & & & 1 & & \\[.1cm]
\hline \hline
\end{tabular}
\end{center}
\end{table}
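The entries of Table \ref{table:1} follow directly from the two formulas for $d_{\rm c}$; a minimal sketch:

```python
from fractions import Fraction

# Upper critical space dimension for a |phi|^{2k} interaction:
# nonrelativistic case, Eq. (BT:dcnr), and the relativistic case,
# which is larger by exactly one.
def d_c_nonrel(k):
    return Fraction(2, k - 1)

def d_c_rel(k):
    return Fraction(k + 1, k - 1)

# Reproduce Table 1.
assert d_c_nonrel(2) == 2 and d_c_rel(2) == 3
assert d_c_nonrel(3) == 1 and d_c_rel(3) == 2
# The gap is one for every k, and d_c -> 0 (NR), 1 (R) as k grows.
assert all(d_c_rel(k) - d_c_nonrel(k) == 1 for k in range(2, 50))
assert d_c_nonrel(1001) == Fraction(1, 500)
```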
From this analysis it follows that for a given number of space
dimensions the critical properties of a nonrelativistic theory are unrelated
to those of the corresponding relativistic extension.
In closing this section we recall that in a one-dimensional relativistic
theory---corresponding to the lowest upper critical dimension ($d_{\rm
c}=1)$---a continuous symmetry cannot be spontaneously broken. However, the
theory can nevertheless have a phase transition of the
Kosterlitz-Thouless type. Given the connection between the relativistic and
nonrelativistic theories discussed above, it seems interesting to study the
nonrelativistic theory at zero space dimension ($d = 0$) to see if a similar
rich phenomenon as in the lower critical dimension of the relativistic
theory occurs here. This may be of relevance to so-called quantum dots.
\section{Quantum Hall Liquid}
\label{sec:QHL}
In this section we shall argue that the effective theory of a quantum
Hall liquid can be used to describe its liquid-to-insulator transition
as the applied magnetic field changes, and study its critical properties.
Experimentally, if the external field is changed so that the filling
factor $\nu_H$ moves away from an odd-denominator value, the system
eventually becomes critical and undergoes a transition to an insulating
phase. Elsewhere \cite{NP}, we have argued that this feature is encoded
in the CSGL theory. In the spirit of Landau, we took a phenomenological
approach towards this field-induced phase transition and assumed that
when the applied magnetic field $H$ is close to the upper critical field
$H^+_{\nu_H}$ at which the quantum Hall liquid with filling factor
$\nu_H$ is destroyed, the chemical potential of the composite particles
depends linearly on $H$, i.e., $\mu_0
\propto eH^+_{\nu_H}- eH$. This state can of course also be destroyed by
lowering the applied field. If the system is near the lower critical
field $H^-_{\nu_H}$, we assumed that the chemical potential is instead
given by $\mu_0 \propto eH - eH^-_{\nu_H}$. This is the basic postulate of
our approach.
We modify the CSGL Lagrangian (\ref{CSGL:L}) so that it only includes
the fluctuating part of the statistical gauge field. That is, we ignore
the classical part of $a$ which yields a magnetic field that precisely
cancels the externally applied field. We can again transform the mass
$m$ of the nonrelativistic $|\phi|^4$-theory away. In addition to the
engineering dimensions (\ref{BT:scale}), we have for the Chern-Simons
field
\begin{equation} \label{CSGL:dim}
[e a_i] = 1, \;\;\;\; [e a_0] = 2, \;\;\;\; [\theta] = 0.
\end{equation}
In two space dimensions, the coupling constant $\lambda_0$ was seen to be
dimensionless, implying that the $|\phi|^4$-term is a marginal operator.
From (\ref{CSGL:dim}) it follows that also the Chern-Simons term is a
marginal operator. Hence, the CSGL theory contains---apart from a
random and a Coulomb term---precisely those terms relevant to the
description of the liquid-to-insulator transition in a quantized Hall
system.
It is well known \cite{CH,LSW} that the coefficient of the Chern-Simons
term is not renormalized by quantum corrections.
To first order in a loop expansion, the theory in two space dimensions
was known to have an IR fixed point determined by the zero of the beta
function \cite{LBL},
\begin{equation}
\label{RG:beta1loop}
\beta(\lambda) = \frac{1}{\pi} \left(4\lambda^2 - \frac{1}{\theta^2} \right).
\end{equation}
The calculation of $\beta(\lambda)$ has been extended to fourth order in the
loop expansion \cite{NP}. This study revealed that the one-loop result
(\ref{RG:beta1loop}) is unaffected by these higher-order loops. Presumably,
this remains true to all orders in perturbation theory, implying that---just
as in the neutral system which corresponds to taking the limit $\theta
\rightarrow \infty$---the one-loop beta function (\ref{RG:beta1loop}) is
exact.
It is schematically represented in Fig.\ \ref{fig:beta},
\begin{figure}
\vspace{-1.cm}
\begin{center}
\epsfxsize=7.cm
\mbox{\epsfbox{beta.eps}}
\end{center}
\vspace{-1.5cm}
\caption{Schematic representation of the beta function
(\ref{RG:beta1loop}). \label{fig:beta}}
\end{figure}
and is seen to yield a nontrivial IR fixed point
$\lambda^*\mbox{}^2= 1/4\theta^2$ determined by the filling
factor. More precisely, the strength of the repulsive coupling at the
fixed point $\lambda^{*} = \pi (2l+1)$ increases with the number $2l+1$
of flux quanta bound to the electron. The presence of the fixed point
shows that the CSGL theory undergoes a 2nd-order phase transition when
the chemical potential of the composite particles tends to a critical
value. As in the neutral case, it can be shown that the boson
self-energy $\Sigma$ also vanishes at every loop order in the charged
theory, and that the self-coupling parameter $\lambda$ is the only
object that renormalizes. The 2nd-order phase transition described by
the nontrivial IR fixed point has consequently again Gaussian exponents
$\nu=\tfrac{1}{2}, \; z=2$ \cite{NP}. It should be noted that only the
location of the fixed point depends on $\theta$; the critical
exponents---which, in contrast to the strength of the coupling at the
fixed point, are independent of the regularization and renormalization
scheme---are universal and independent of the filling factor. This
``triviality'' is in accord with the experimentally observed
universality of the FQHE. A dependence of the critical exponents on
$\theta$ could from the theoretical point of view hardly be made
compatible with the hierarchy construction
\cite{HH} which implies a cascade of phase transitions. From this
viewpoint the present results are most satisfying: the CSGL theory is
shown to encode a new type of field-induced 2nd-order quantum phase
transition that is simple enough not to obscure the observed
universality of the FQHE.
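The location of the fixed point can be checked numerically from the one-loop beta function (\ref{RG:beta1loop}) alone; a small sketch, assuming the identification $\theta = 1/2\pi(2l+1)$ implied by $\lambda^* = 1/2\theta = \pi (2l+1)$:

```python
import math

# One-loop beta function of the CSGL theory, Eq. (RG:beta1loop).
def beta(lam, theta):
    return (4.0 * lam**2 - 1.0 / theta**2) / math.pi

# Locate the positive zero of beta by bisection on [0, 1/theta],
# where beta(0) < 0 < beta(1/theta).
def fixed_point(theta):
    lo, hi = 0.0, 1.0 / theta
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if beta(mid, theta) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for l in (0, 1, 2):          # 2l+1 flux quanta bound to each electron
    theta = 1.0 / (2.0 * math.pi * (2 * l + 1))
    lam_star = fixed_point(theta)
    # lambda* = 1/(2 theta) = pi (2l+1), growing with the flux number.
    assert abs(lam_star - math.pi * (2 * l + 1)) < 1e-9
    assert abs(beta(lam_star, theta)) < 1e-6
```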
We stress again that in order to arrive at a realistic description of
the FQHE, the CSGL theory has to be extended to include
a $1/|{\bf x}|$-Coulomb potential and impurities. Both will change
the critical behavior we found in this section. In particular, the
Coulomb potential will change the Gaussian value $z=2$ into $z=1$.
\section{Random Theory}
\label{sec:Dirt}
In Sec.\ \ref{sec:BT}, we saw that in the absence of impurities
repulsively interacting bosons undergo a 2nd-order quantum phase
transition as function of the chemical potential. As was pointed out
there, this universality class is of relevance to various
condensed-matter systems. However, in most of the systems mentioned
there, as well as in $^4$He in porous media, impurities play an essential
if not decisive role. For example, the two-dimensional
superconductor-to-insulator transition investigated by Hebard and
Paalanen
\cite{HPsu1} is driven by impurities. This means that, e.g., the correlation
length $\xi$ diverges as $|\Delta_{\rm c} - \Delta|^{-\nu}$ when
the disorder strength $\Delta$ characterizing the randomness approaches
the critical value $\Delta_{\rm c}$. Hence, a realistic description
of the critical behavior of these systems should include impurities.
To include these, we proceed as before and add the random term
(\ref{Dirt:dis}) to the nonrelativistic $|\phi|^4$-theory (\ref{eff:Lagr}).
The random field $\psi({\bf x})$ has the Gaussian distribution
(\ref{random}). We shall study the theory in the symmetrical state where
the bare chemical potential is negative and the global U(1) symmetry
unbroken. We therefore set $\mu_0 = - r_0$ again, with $r_0>0$. We leave
the number of space dimensions $d$ unspecified for the moment. As we
remarked before, since $\psi({\bf x})$ depends only on the $d$ spatial
dimensions, the impurities it describes should be considered as grains
randomly distributed in space. When---as is required for the study of
quantum critical phenomena---time is included, the static grains trace out
straight worldlines. That is to say, these impurities are linelike in the
quantum theory. It has been shown by Dorogovtsev \cite{Dorogovtsev} that
the critical properties of systems with extended defects must be studied in
a double $\epsilon$-expansion; otherwise no IR fixed point is found. The
method differs from the usual $\epsilon$-expansion, in that it also includes
an expansion in the defect dimensionality $\epsilon_{\rm d}$. To carry out
this program in the present context, where the defect dimensionality is
determined by the dimensionality of time, the theory has to be formulated in
$\epsilon_{\rm d}$ time dimensions. The case of interest is $\epsilon_{\rm
d}=1$, while in the opposite limit, $\epsilon_{\rm d}\rightarrow 0$, the
random nonrelativistic $|\phi|^4$-theory reduces to the classical spin model
with random (pointlike) impurities. Hence, $\epsilon_{\rm d}$ is a
parameter with which quantum fluctuations can be suppressed. An expansion
in $\epsilon_{\rm d}$ is a way to perturbatively include the effect of
quantum fluctuations on the critical behavior. Ultimately, we will be
interested in the case $\epsilon_{\rm d}=1$.
To calculate the quantum critical properties of the random theory, which
have first been studied in \cite{KU}, we will not employ the replica
method \cite{GrLu}, but instead follow Lubensky
\cite{Lubensky}. In this approach, the averaging over impurities is carried
out for each Feynman diagram separately. The upshot is that only those
diagrams are to be included which remain connected when $\Delta_0$, the
parameter characterizing the Gaussian distribution of the impurities, is set
to zero \cite{Hertzrev}. To obtain the relevant Feynman rules of the random
theory we average the interaction term (\ref{Dirt:dis}) over the
distribution (\ref{random}):
\begin{eqnarray} \label{Dirt:int}
\lefteqn{\int \mbox{D} \psi \, P(\psi)
\exp\left[i^{\epsilon_{\rm d}} \int
\mbox{d}^{\epsilon_{\rm d}} t \, \mbox{d}^d x \, \psi({\bf x}) \, |\phi(x)|^2 \right]
= } \nonumber \\ && \exp \left[\tfrac{1}{4} i^{2 \epsilon_{\rm d}} \Delta_0
\int \mbox{d}^{\epsilon_{\rm d}} t \, \mbox{d}^{\epsilon_{\rm d}} t' \, \mbox{d}^d x \,
|\phi(t,{\bf x})|^2 |\phi(t',{\bf x})|^2 \right].
\end{eqnarray}
The randomness is seen to result in a quartic interaction term which is
nonlocal in time. The factor $i^{\epsilon_{\rm d}}$ appearing in
(\ref{Dirt:int}) arises from the presence of $\epsilon_{\rm d}$ time
dimensions, each of which is accompanied by a factor of $i$. The Feynman
rules of the random theory are now easily obtained
\begin{eqnarray} \label{Dirt:Feynart}
\raisebox{-0.3cm}{\epsfxsize=2.5cm
\epsfbox{rprop.eps} }
&=& \frac{-i^{- \epsilon_{\rm d}} {\rm e}^{i(\omega_1 + \omega_2 + \cdots +
\omega_{\epsilon_{\rm d}}) \eta}}{\omega_1 + \omega_2 + \cdots +
\omega_{\epsilon_{\rm d}} -{\bf k}^2 - r_0 + i \eta} \nonumber \\
\raisebox{-0.5cm}{\epsfxsize=2.5cm
\epsfbox{rver1.eps} }
&=& -4 i^{\epsilon_{\rm d}} \lambda_0 \nonumber \\
\raisebox{-0.5cm}{\epsfxsize=2.5cm
\epsfbox{rver2.eps} }
&=& i^{\epsilon_{\rm d}} (2 \pi)^{\epsilon_{\rm d}} \delta^{\epsilon_{\rm
d}}(\omega_1 + \omega_2 + \cdots + \omega_{\epsilon_{\rm d}})
\Delta_0,
\end{eqnarray}
where we note that the Lagrangian in $\epsilon_{\rm d}$ time dimensions
involves instead of just one time derivative, a sum of $\epsilon_{\rm d}$
derivatives: $\partial_t \rightarrow \partial_{t_1} +
\partial_{t_2} + \cdots + \partial_{t_{\epsilon_{\rm d}}}$.
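The quartic term in (\ref{Dirt:int}) is the standard result of a Gaussian integration, obtained by completing the square. Assuming the distribution (\ref{random}) to be of the form $P(\psi) \propto \exp[-(1/\Delta_0) \int \mbox{d}^d x \, \psi^2({\bf x})]$ and writing $\int_{\bf x}$ for $\int \mbox{d}^d x$, one has
\begin{equation}
\int \mbox{D} \psi \, {\rm e}^{-\frac{1}{\Delta_0} \int_{\bf x} \psi^2
+ \int_{\bf x} \psi J} \propto
{\rm e}^{\frac{1}{4} \Delta_0 \int_{\bf x} J^2}, \;\;\;\;
J({\bf x}) = i^{\epsilon_{\rm d}} \int \mbox{d}^{\epsilon_{\rm d}} t \,
|\phi(t,{\bf x})|^2,
\end{equation}
which reproduces the factor $\tfrac{1}{4} i^{2 \epsilon_{\rm d}} \Delta_0$ as well as the nonlocality in time.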
Following Weichman and Kim \cite{WK}, we evaluate the integrals over loop
energies assuming that all energies are either positive or negative. This
allows us to employ Schwinger's propertime representation of propagators
\cite{proptime}, which is based on the integral representation (\ref{gamma})
of the gamma function. The energy integrals we encounter to the one-loop
order can be carried out with the help of the equations
\begin{eqnarray} \label{Dirt:inta}
\lefteqn{\int' \frac{\mbox{d}^{\epsilon_{\rm d}} \omega}{(2\pi)^{\epsilon_{\rm d}}}
\frac{1}{\omega_1 + \omega_2 + \cdots + \omega_{\epsilon_{\rm d}} -x
\pm i \eta} =} \nonumber \\ && -\frac{\Gamma(1-\epsilon_{\rm
d})}{(2\pi)^{\epsilon_{\rm d}}} {\rm sgn}(x) |x|^{\epsilon_{\rm d}-1}
\left({\rm e}^{\pm i \, {\rm sgn}(x) \pi
\epsilon_{\rm d}} + 1 \right), \\
\lefteqn{\int' \frac{\mbox{d}^{\epsilon_{\rm d}} \omega}{(2\pi)^{\epsilon_{\rm d}}}
\frac{{\rm e}^{i(\omega_1 + \omega_2 + \cdots + \omega_{\epsilon_{\rm
d}})\eta}}{\omega_1 + \omega_2 + \cdots + \omega_{\epsilon_{\rm d}} -x + i x
\eta} =} \nonumber \\ && \frac{i \pi}{(2\pi)^{\epsilon_{\rm
d}}\Gamma(\epsilon_{\rm d})} (i|x|)^{\epsilon_{\rm d}-1} \left[
\sin(\tfrac{1}{2} \pi \epsilon_{\rm d}) - \frac{{\rm
sgn}(x)}{\sin(\tfrac{1}{2} \pi \epsilon_{\rm d} )} \right].
\label{Dirt:intb}
\end{eqnarray}
The prime on the integrals is to remind the reader that the energy integrals
are taken over only two domains with either all energies positive or
negative. The energy integrals have been carried out by using again the
integral representation (\ref{gamma}) of the gamma function. In doing
so, the integrals are regularized and---as is always the case with analytic
regularizations---irrelevant divergences suppressed.
By differentiation with respect to $x$, Eq.\ (\ref{Dirt:inta}) can,
for example, be employed to calculate integrals involving integrands of the
form $1/(\omega_1 + \omega_2 + \cdots + \omega_{\epsilon_{\rm d}} -x + i
\eta)^2$. It is easily checked that in the limit $\epsilon_{\rm d}
\rightarrow 1$, where the energy integral can be performed with help of
contour integration, Eqs.\ (\ref{Dirt:inta}) and (\ref{Dirt:intb}) reproduce
the right results. When considering the limit of zero time dimensions
($\epsilon_{\rm d} \rightarrow 0$), it should be borne in mind that the
energy integrals were taken over two separate domains with all energies
either positive or negative. Each of these domains is contracted to a
single point in the limit $\epsilon_{\rm d} \rightarrow 0$, so that one
obtains a result which is twice that obtained by simply purging any
reference to the time dimensions. The integral (\ref{Dirt:intb}) contains
an additional convergence factor $\exp(i\omega \eta)$ for each of the
$\epsilon_{\rm d}$ energy integrals. This factor, which---as we remarked
before---is typical for nonrelativistic quantum theories \cite{Mattuck}, is
to be included in self-energy diagrams containing only one
$\phi$-propagator.
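The quoted $\epsilon_{\rm d} \rightarrow 1$ limit of (\ref{Dirt:inta}) can be spot-checked numerically: the contour integral $\int (\mbox{d}\omega/2\pi) \, (\omega - x + i\eta)^{-1} = -i/2$ should be recovered for either sign of $x$. A minimal sketch:

```python
import math, cmath

# Right-hand side of Eq. (Dirt:inta) for the +i*eta prescription.
def energy_integral(x, eps_d):
    pref = -math.gamma(1.0 - eps_d) / (2.0 * math.pi) ** eps_d
    phase = cmath.exp(1j * math.copysign(1.0, x) * math.pi * eps_d) + 1.0
    return pref * math.copysign(1.0, x) * abs(x) ** (eps_d - 1.0) * phase

# As eps_d -> 1, the divergence of Gamma(1 - eps_d) is compensated by
# the vanishing of the phase factor, leaving the finite value -i/2.
for x in (2.0, -2.0):
    val = energy_integral(x, eps_d=1.0 - 1.0e-6)
    assert abs(val - (-0.5j)) < 1.0e-3
```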
Before studying the random theory, let us briefly return to the repulsively
interacting bosons in the absence of impurities. In this case, there is no
need for an $\epsilon_{\rm d}$-expansion and the formalism outlined above
should yield results for arbitrary time dimensions $0 \leq
\epsilon_{\rm d} \leq 1$, interpolating between the classical and quantum
limit. After the energy integrals have been performed with the help of
Eqs.\ (\ref{Dirt:inta}) and (\ref{Dirt:intb}), the standard technique of
integrating out a momentum shell can be applied to obtain the
renormalization-group equations. For the correlation-length exponent
$\nu$ we obtain in this way \cite{pla}
\begin{equation} \label{Dirt:nupure}
\nu = \frac{1}{2} \left[1 + \frac{\epsilon}{2} \frac{m+1}{(m+4) -
(m+3) \epsilon_{\rm d}} \cos^2( \tfrac{1}{2} \pi \epsilon_{\rm d}) \right].
\end{equation}
Here, $\epsilon = 4-2\epsilon_{\rm d}-d$ is the deviation of the {\it
effective} spacetime dimensionality from 4, where it should be noted that in
(canonical) nonrelativistic theories, time dimensions have an engineering
dimension twice that of space dimensions. (This property is brought out by
the Gaussian value $z=2$ for the dynamic exponent $z$.) For comparison we
have extended the theory (\ref{eff:Lagr}) to include $m$ complex
$\phi$-fields instead of just one field. In the classical limit, Eq.\
(\ref{Dirt:nupure}) gives the well-known one-loop result for a classical
spin model with $2m$ real components \cite{Ma},
\begin{equation}
\nu \rightarrow \frac{1}{2} \left(1 + \frac{\epsilon}{2}
\frac{m+1}{m+4} \right),
\end{equation}
while in the quantum limit it gives the result $\nu \rightarrow
\frac{1}{2}$, as required.
The exponent (\ref{Dirt:nupure}), and also the location of the fixed point,
diverges when the number of time dimensions approaches $\epsilon_{\rm
d} = (m+4)/(m+3)$. Since this value is always larger than one, the
singularity is outside the physical domain $0 \leq \epsilon_{\rm d} \leq 1$.
This simple example illustrates the viability of the formalism developed
here to generate results interpolating between the classical and quantum
limit.
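The interpolation can be made concrete; a minimal sketch of Eq.\ (\ref{Dirt:nupure}) and its two limits:

```python
import math

# Correlation-length exponent of the pure theory, Eq. (Dirt:nupure),
# with eps = 4 - 2*eps_d - d the deviation of the effective spacetime
# dimensionality from 4 (time dimensions counting double).
def nu_pure(d, eps_d, m=1):
    eps = 4.0 - 2.0 * eps_d - d
    c = math.cos(0.5 * math.pi * eps_d) ** 2
    return 0.5 * (1.0 + 0.5 * eps * (m + 1)
                  / ((m + 4) - (m + 3) * eps_d) * c)

# Quantum limit eps_d = 1: the cosine kills the correction, nu = 1/2.
assert abs(nu_pure(d=1, eps_d=1.0) - 0.5) < 1e-12

# Classical limit eps_d = 0: one-loop result for a 2m-component
# classical spin model, nu = (1/2)[1 + (eps/2)(m+1)/(m+4)].
m, d = 1, 3
eps = 4.0 - d
assert abs(nu_pure(d, 0.0, m)
           - 0.5 * (1.0 + 0.5 * eps * (m + 1) / (m + 4))) < 1e-12
```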
We continue with the random theory. After the energy integrals have been
carried out, it is again straightforward to derive the renormalization-group
equations by integrating out a momentum shell $\Lambda/b<k<\Lambda$, where
$\Lambda$ is a high-momentum cutoff and $b=\exp(l)$, with $l$ infinitesimal.
Defining the dimensionless variables
\begin{equation}
\hat{\lambda} = \frac{K_d}{(2 \pi)^{\epsilon_{\rm d}}} \lambda
\Lambda^{-\epsilon}; \;\;\;
\hat{\Delta} = K_d \Delta \Lambda^{d-4}; \;\;\;
\hat{r} = r \Lambda^{-2},
\end{equation}
where
\begin{equation}
K_d = \frac{2}{(4\pi)^{d/2}\Gamma(\tfrac{d}{2})}
\end{equation}
is the area of a unit sphere in $d$ spatial dimensions divided by
$(2\pi)^d$, we find \cite{pla}
\begin{eqnarray} \label{Dirt:reneq}
\frac{\mbox{d} \hat{\lambda}}{\mbox{d} l} &=& \epsilon \hat{\lambda} -8
\left[\Gamma(1-\epsilon_{\rm d}) + (m+3) \Gamma(2-\epsilon_{\rm d}) \right]
\cos(\tfrac{1}{2}\pi \epsilon_{\rm d})
\hat{\lambda}^2 + 6 \hat{\Delta} \hat{\lambda} \nonumber \\
\frac{\mbox{d} \hat{\Delta}}{\mbox{d} l} &=& (\epsilon + 2\epsilon_{\rm
d})\hat{\Delta } + 4 \hat{\Delta}^2 - 16 (m+1)
\Gamma(2-\epsilon_{\rm d}) \cos(\tfrac{1}{2}\pi \epsilon_{\rm d} )
\hat{\lambda} \hat{\Delta} \nonumber \\
\frac{\mbox{d} \hat{r}}{\mbox{d} l} &=& 2 \hat{r} + 4 \pi \frac{m+1}{
\Gamma(\epsilon_{\rm d})} \frac{\cos^2(\tfrac{1}{2} \pi \epsilon_{\rm d})}
{\sin(\tfrac{1}{2} \pi \epsilon_{\rm d} )} \hat{\lambda} -
\hat{\Delta}.
\end{eqnarray}
These results are to be trusted only for small values of $\epsilon_{\rm d}$.
For illustrative purposes we have, however, kept the full $\epsilon_{\rm d}$
dependence. The set of equations yields the fixed point
\begin{eqnarray} \label{Dirt:fp}
\hat{\lambda}^* &=& \frac{1}{16 \cos(\tfrac{1}{2}\pi \epsilon_{\rm d} )
\Gamma(1-\epsilon_{\rm d})} \, \frac{\epsilon + 6 \epsilon_{\rm d}}
{2m(1-\epsilon_{\rm d}) -1} \\
\hat{\Delta}^* &=& \frac{1}{4} \frac{
m(1-\epsilon_{\rm d}) (2 \epsilon_{\rm d} -\epsilon) + 2 \epsilon_{\rm d}
(4-3\epsilon_{\rm d}) + \epsilon (2 -\epsilon_{\rm d})}{2m(1-\epsilon_{\rm d})
-1}, \nonumber
\end{eqnarray}
and the critical exponent
\begin{equation} \label{Dirt:nufull}
\nu = \frac{1}{2} + \frac{\epsilon +2 \epsilon_{\rm d}}{16} +
\frac{m+1}{16} \frac{
(6\epsilon_{\rm d} + \epsilon ) [\epsilon_{\rm d}+\cos( \pi
\epsilon_{\rm d})]}{2m(1-\epsilon_{\rm d})-1}.
\end{equation}
The dynamic exponent is given by $z = 2 + \hat{\Delta}^*$. When the
equations are expanded to first order in $\epsilon_{\rm d}$, we recover
the IR fixed point found by Weichman and Kim \cite{WK} using a
high-energy cutoff:
\begin{equation} \label{dirt:WK}
\hat{\lambda}^*= \frac{1}{16} \frac{\epsilon + 6 \epsilon_{\rm d}}{2m-1};
\;\;\; \hat{\Delta}^*= \frac{1}{4} \frac{(2-m)\epsilon + 2(m+4)
\epsilon_{\rm d}}{2m-1},
\end{equation}
with the critical exponent
\begin{equation} \label{Dirt:nuqm}
\nu = \frac{1}{2} \left[1 + \frac{1}{8} \frac{3m \epsilon + (5m +2)
2\epsilon_{\rm d}}{2m-1} \right].
\end{equation}
The value of the critical exponent (\ref{Dirt:nuqm}) should be compared with
that of the classical spin model with $2m$ components in the presence of
random impurities of dimension $\epsilon_{\rm d}$ \cite{Dorogovtsev}:
\begin{equation}
\nu = \frac{1}{2} \left[1 + \frac{1}{8} \frac{3m \epsilon + (5m +2)
\epsilon_{\rm d}}{2m-1} \right].
\end{equation}
Taking into account that in a nonrelativistic quantum theory, time
dimensions count double as compared to space dimensions, we see that both
results are equivalent. As to the dynamic exponent, notice that the
perturbative result $z = 2 + \hat{\Delta}^*$, with $\hat{\Delta}^*$ given by
(\ref{dirt:WK}), is far away from the exact value $z=d$ for $\epsilon_{\rm
d}=1$ \cite{FF}.
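Since the one-loop beta functions (\ref{Dirt:reneq}) are quadratic, the fixed point (\ref{Dirt:fp}) can be verified directly by substitution; a minimal numerical sketch (the $\hat{r}$ equation is omitted, since it only fixes the critical surface):

```python
import math

# One-loop flow equations (Dirt:reneq) for the couplings lambda, Delta.
def flow(lam, Delta, eps, eps_d, m):
    c = math.cos(0.5 * math.pi * eps_d)
    g1 = math.gamma(1.0 - eps_d)
    g2 = math.gamma(2.0 - eps_d)
    dlam = (eps * lam
            - 8.0 * (g1 + (m + 3) * g2) * c * lam**2
            + 6.0 * Delta * lam)
    dDelta = ((eps + 2.0 * eps_d) * Delta + 4.0 * Delta**2
              - 16.0 * (m + 1) * g2 * c * lam * Delta)
    return dlam, dDelta

# Fixed point (Dirt:fp); as the root of a quadratic system it should
# annihilate both beta functions identically, not just to O(eps).
def fixed_point(eps, eps_d, m):
    u = 1.0 - eps_d
    den = 2.0 * m * u - 1.0
    lam = (eps + 6.0 * eps_d) / (16.0 * math.cos(0.5 * math.pi * eps_d)
                                 * math.gamma(1.0 - eps_d) * den)
    Delta = 0.25 * (m * u * (2.0 * eps_d - eps)
                    + 2.0 * eps_d * (4.0 - 3.0 * eps_d)
                    + eps * (2.0 - eps_d)) / den
    return lam, Delta

m, eps_d = 1, 0.1
eps = 4.0 - 2.0 * eps_d - 3.0          # d = 3 space dimensions
lam, Delta = fixed_point(eps, eps_d, m)
dlam, dDelta = flow(lam, Delta, eps, eps_d, m)
assert abs(dlam) < 1e-10 and abs(dDelta) < 1e-10
```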
\section{Experiments}
\subsection{Superconductor-To-Insulator Transition}
The first experiments we wish to discuss are those performed by Hebard
and Paalanen on superconducting films in the presence of random
impurities \cite{HPsu1,HPsu2}. It has been predicted by Fisher
\cite{MPAFisher} that with increasing applied magnetic field such
systems undergo a zero-temperature transition into an insulating state.
(For a critical review of the experimental data available in 1993, see
Ref.\ \cite{LG}.)
Let us restrict ourselves for the moment to the $T\Delta$-plane of the
phase diagram by setting the applied magnetic field $H$ to zero. For
given disorder strength $\Delta$, the system then undergoes a
Kosterlitz-Thouless transition induced by the unbinding of magnetic
vortex pairs at a temperature $T_{\rm KT}$ well below the bulk
transition temperature (see Sec.\ \ref{sec:2sc}). The
Kosterlitz-Thouless temperature is gradually suppressed to zero when the
disorder strength approaches criticality $\Delta \rightarrow
\Delta_{\rm c}$. The transition temperature scales with the correlation
length $\xi \sim |\Delta_{\rm c} - \Delta|^{-\nu}$ as
$T_{\rm KT} \sim \xi^{-z}$.
In the $H\Delta$-plane, i.e., at $T=0$, the situation is as follows.
For given disorder strength, there is now at some critical value $H_{\rm
c}$ of the applied magnetic field a phase transition from a
superconducting state of pinned vortices and condensed Cooper pairs to
an insulating state of pinned Cooper pairs and condensed vortices. The
condensation of vortices disorders the ordered state, as happens in
classical, finite-temperature superfluid- and superconductor-to-normal
phase transitions \cite{GFCM}. When the disorder strength approaches
criticality again, $H_{\rm c}$ is gradually suppressed to zero. The
critical field scales with $\xi$ as $H_{\rm c} \sim \Phi_0/\xi^2$. In
fact, this expresses a more fundamental result, namely that the scaling
dimension $d_{\bf A}$ of ${\bf A}$ is one,
\begin{equation}
d_{\bf A} = 1,
\end{equation}
so that $|{\bf A}| \sim \xi^{-1}$. From this it in turn
follows that $E \sim \xi_t^{-1} \xi^{-1} \sim \xi^{-(z+1)}$, and that
the scaling dimension $d_{A_0}$ of $A_0$ is $z$,
\begin{equation}
d_{A_0} = z,
\end{equation}
so that $A_0 \sim \xi_t^{-1} \sim \xi^{-z}$. Together, the scaling
results for $T_{\rm KT}$ and $H_{\rm c}$ imply that
\cite{MPAFisher}
\begin{equation} \label{H-T}
H_{\rm c} \sim T_{\rm KT}^{2/z}.
\end{equation}
This relation, linking the critical field of the zero-temperature
transition to the Kosterlitz-Thouless temperature, provides a direct way
to measure the dynamic exponent $z$ at the $H=0$, $T=0$ transition.
This was first done by Hebard and Paalanen \cite{HPsu1,HPsu2}.
Their experimental determination of $T_{\rm KT}$ and $H_{\rm c}$ for
five different films with varying amounts of impurities confirmed the
relation (\ref{H-T}) with $2/z = 2.04 \pm 0.09$. The zero-temperature
critical fields were obtained by plotting $\mbox{d} \rho_{xx}/\mbox{d} T|_H$
versus $H$ at the lowest accessible temperature and interpolating to
the field where the slope is zero. The resulting value $z= 0.98 \pm 0.04$
is in accordance with Fisher's prediction \cite{MPAFisher}, $z=1$, for a
random system with a $1/|{\bf x}|$-Coulomb potential.
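The procedure behind the relation (\ref{H-T}) can be illustrated numerically. The following Python sketch uses hypothetical correlation lengths (not the experimental data) to generate $T_{\rm KT}$ and $H_{\rm c}$ values for a set of films, and recovers the dynamic exponent from the slope of a log-log fit, mimicking the Hebard-Paalanen analysis:

```python
import numpy as np

# For films with different disorder, both T_KT and H_c are set by the
# same diverging correlation length xi: T_KT ~ xi^{-z}, H_c ~ xi^{-2}.
# A log-log fit of H_c versus T_KT therefore has slope 2/z.
# The xi values below are hypothetical illustrations.
z_true = 1.0                                   # Fisher's prediction
xi = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # correlation lengths (arb. units)
T_KT = xi**(-z_true)                           # Kosterlitz-Thouless temperatures
H_c = xi**(-2.0)                               # zero-temperature critical fields

slope, _ = np.polyfit(np.log(T_KT), np.log(H_c), 1)
z_fit = 2.0 / slope                            # extract the dynamic exponent
print(round(z_fit, 6))                         # -> 1.0
```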
Hebard and Paalanen \cite{HPsu1} also investigated the field-induced
zero-temperature transition. The control parameter is here $\delta \propto
H -H_{\rm c}$. When plotted as a function of $|H -H_{\rm c}|/T^{1/\nu_H
z_H}$, they saw their resistivity data collapse onto two branches: an upper
branch tending to infinity for the insulating state, and a lower branch
bending down for the superconducting state. The unknown product $\nu_H z_H$
is determined experimentally by searching for the value that yields the
best scaling collapse. Further experiments carried out by Yazdani and
Kapitulnik \cite{YaKa} also determined the product $\nu_H (z_H+1)$ (see
below). The two independent measurements together fix the critical
exponents $\nu_H$ and $z_H$ separately. From their best data, Yazdani and
Kapitulnik extracted the values \cite{YaKa}
\begin{equation} \label{zHnuH}
z_H = 1.0 \pm 0.1, \;\;\;\; \nu_H = 1.36 \pm 0.05.
\end{equation}
\subsection{Quantum-Hall Systems}
We continue to discuss the field-induced quantum phase transitions in
quantum Hall systems. Since an excellent discussion recently appeared in
the literature \cite{SGCS}, we shall be brief, referring the reader to
that review for a more thorough discussion and additional references.
One can imagine transitions from one Hall liquid to another Hall liquid with a
different (integer or fractional) filling factor, or to the insulating
state. Experiments seem to suggest that all the quantum-Hall transitions
are in the same universality class. The transitions are probed by measuring
the resistivities $\rho_{xx}$ and $\rho_{xy}$. From the dependence of the
conductivity $\sigma$ on the superfluid mass density, Eq.\
(\ref{conductivity}), and the scaling relation (\ref{hyperrho}), it follows
that it scales as \cite{AALR}
\begin{equation}
\sigma \sim \xi^{-(d-2)}.
\end{equation}
In other words, the scaling dimension of the conductivity and therefore
that of the resistivity is zero in two space dimensions. On account of
the general finite-size scaling form (\ref{scalingT}), we then have in
the limit $|{\bf k}| \rightarrow 0$:
\begin{equation}
\rho_{xx/y}(k_0,H,T) = \varrho _{xx/y}(k_0/T, |\delta|^{\nu z}/T),
\end{equation}
where the distance to the zero-temperature critical point is measured by
$\delta \propto H - H_{\nu_H}^\pm$, the width of the transition regime
scaling with temperature as $|\delta| \sim T^{1/\nu z}$. This scaling has been
corroborated by DC or $k_0=0$ experiments on various transitions between
integer quantum-Hall states which were all found to yield the value
$1/\nu z = 0.42 \pm 0.04$ \cite{WTPP}.
A second measurement of the critical exponents involves the applied
electric field. As we have seen above, it scales as $E
\sim \xi^{-(z+1)}$, so that for the DC resistivities we now obtain the
scaling form:
\begin{equation} \label{scalingE}
\rho_{xx/y}(H,T,E) = \varrho
_{xx/y}(|\delta|^{\nu z}/T,|\delta|^{\nu (z+1)}/E).
\end{equation}
The scaling $|\delta| \sim E^{1/\nu (z+1)}$ has again been corroborated by
experiment which yielded the value $\nu (z+1) \approx 4.6$ \cite{WET}.
Together with the previous result obtained from the temperature scaling
this gives
\begin{equation}
z \approx 1, \;\;\;\; \nu \approx 2.3.
\end{equation}
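The arithmetic behind these values is elementary and worth making explicit; a minimal Python check, using the two measured combinations quoted above:

```python
# Solve for z and nu from the two measured exponent combinations:
# temperature scaling gives 1/(nu*z) = 0.42, electric-field scaling
# gives nu*(z+1) ~ 4.6; their difference isolates nu.
nu_z = 1.0 / 0.42            # nu*z
nu_z_plus_1 = 4.6            # nu*(z+1)
nu = nu_z_plus_1 - nu_z      # nu*(z+1) - nu*z = nu
z = nu_z / nu
print(round(z, 2), round(nu, 2))   # -> 1.07 2.22
```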
The value of the dynamic exponent strongly suggests that it results
from the presence of the $1/|{\bf x}|$-Coulomb potential. The
correlation-length exponent $\nu$, however, is notably larger than the
value (\ref{zHnuH}) found for the superconductor-to-insulator transition.
\subsection{$2d$ Electron Systems}
Recently, silicon MOSFET's at extremely low electron number densities
have been studied \cite{MIT,KSSMF,SKS,PFW}. Earlier experiments at
higher densities seemed to confirm the general belief, based on the
work by Abrahams {\it et al.} \cite{AALR}, that such two-dimensional
electron systems do not undergo a quantum phase transition. In that
influential paper, it was demonstrated that even weak disorder is
sufficient to localize the electrons at the absolute zero of temperature,
thus excluding conducting behavior. Electron-electron interactions were,
however, not included. As we saw in Sec.\
\ref{sec:ET}, at low densities, the $1/|{\bf x}|$-Coulomb interaction
becomes important and the analysis of Abrahams {\it et al.}
\cite{AALR} no longer applies.
The recent experiments have revealed a zero-temperature
conductor-to-insulator transition triggered by a change in the charge
carrier density $\bar{n}$. That is, the distance to the critical point
is in these systems measured by $\delta \propto \bar{n} - \bar{n}_{\rm c}$.
As in the quantum-Hall systems, these transitions are probed by
measuring the resistivity. It scales with temperature near the
transition according to the scaling form (\ref{scalingE}) with $H$ set
to zero. For $\bar{n} < \bar{n}_{\rm c}$, where the Coulomb interaction
is dominant and fluctuations in the charge carrier density are
suppressed, the electron system is insulating. On increasing the
density, these fluctuations intensify and at the critical value
$\bar{n}_{\rm c}$, the system reverts to a conducting phase. By plotting
their conductivity data as a function of $T/|\delta|^{\nu z}$ with $\nu z =
1.6 \pm 0.1$, Popovi\'{c}, Fowler, and Washburn \cite{PFW} saw it
collapse onto two branches: the upper branch for the conducting side of the
transition, and the lower one for the insulating side. A similar
collapse with a slightly different value $1/\nu z = 0.83 \pm 0.08$ was found
in Ref.\ \cite{KSSMF}, where the collapse of the data plotted as a
function of $|\delta|/E^{1/\nu (z+1)}$ was also obtained. The best collapse
resulted for $1/(z+1) \nu = 0.37 \pm 0.01$, leading to
\begin{equation} z = 0.8 \pm 0.1, \;\;\;\; \nu = 1.5 \pm 0.1.
\end{equation}
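As a cross-check on the quoted exponents, the same two-combination bookkeeping can be carried out for these data; a minimal Python sketch using the values quoted above:

```python
# From 1/(nu*z) = 0.83 (temperature scaling) and 1/(nu*(z+1)) = 0.37
# (electric-field scaling), extract nu and z separately.
nu_z = 1.0 / 0.83
nu_z_plus_1 = 1.0 / 0.37
nu = nu_z_plus_1 - nu_z      # nu*(z+1) - nu*z = nu
z = nu_z / nu
print(round(z, 1), round(nu, 1))   # -> 0.8 1.5
```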
The value for the dynamic exponent is close to the expected value $z=1$ for
a charged system with a $1/|{\bf x}|$-Coulomb interaction, while that of
$\nu$ is surprisingly close to the value (\ref{zHnuH}) found for the
superconductor-to-insulator transition.
A further experimental result for these two-dimensional electron systems
worth mentioning is the suppression of the conducting phase by an
applied magnetic field found by Simonian, Kravchenko, and Sarachik
\cite{SKS}. They applied the field {\it parallel} to the plane of the
electrons instead of perpendicular as is done in quantum-Hall
measurements. In this way, the field presumably couples only to the
spin of the electrons and the complications arising from orbital effects
do not arise. At a fixed temperature, a rapid initial rise in the
resistivity was found with increasing field. Above a value of about 20
kOe, the resistivity saturates. It was pointed out that the behavior
both in a magnetic field and in zero field strongly resembles that near
the superconductor-to-insulator transition discussed
above, suggesting that the conducting phase might in fact be
superconducting.
\subsection{Conclusions}
We have seen that general scaling arguments can be employed to understand
the scaling behavior observed in various quantum phase transitions. Most
of the experiments seem to confirm the expected value $z=1$ for a random
system with a $1/|{\bf x}|$-Coulomb interaction. The number of different
universality classes present is not yet known. Even if the
conductor-to-insulator transition observed in silicon MOSFET's at low
electron number densities turns out to be in the same universality class as
the superconductor-to-insulator transition, there are still the
field-induced transitions in quantum-Hall systems, which have a larger
correlation-length exponent.
The paradigm provided by a repulsively interacting Bose gas seems to be a
good starting point to describe the various systems. However,
high-precision estimates calculated from this theory with impurities and
a $1/|{\bf x}|$-Coulomb interaction included are presently lacking.
\chapter{Superconductivity \label{chap:sc}}
In this chapter we shall demonstrate a close connection between the
Bogoliubov theory of superfluidity discussed in the previous chapter and
the strong-coupling limit of the BCS theory of superconductivity. The
phase-only effective theory governing the superconducting state is
derived. It is also pointed out that a superconducting film at finite
temperature undergoes a Kosterlitz-Thouless phase transition.
\section{BCS Theory}
Our starting point is the famous microscopic model of Bardeen, Cooper,
and Schrieffer (BCS) defined by the Lagrangian \cite{BCS}
\begin{eqnarray} \label{bcs:BCS}
{\cal L} &=& \psi^{\ast}_{\uparrow} [i\partial_0 - \xi(-i \nabla)]
\psi_{\uparrow}
+ \psi_{\downarrow}^{\ast} [i \partial_0 - \xi(-i
\nabla)]\psi_{\downarrow} - \lambda_0
\psi_{\uparrow}^{\ast}\,\psi_{\downarrow}
^{\ast}\,\psi_{\downarrow}\,\psi_{\uparrow} \nonumber \\ &:=& {\cal
L}_{0} + {\cal L}_{\rm i},
\end{eqnarray}
where ${\cal L}_{\rm i} = - \lambda_0 \psi_{\uparrow}^{\ast} \,
\psi_{\downarrow}^{\ast}\,\psi_{\downarrow}\,\psi_{\uparrow}$ is a
contact interaction term, representing the effective, phonon mediated,
attraction between electrons with coupling constant $\lambda_0 < 0$, and
${\cal L}_{0}$ is the remainder. In (\ref{bcs:BCS}), the field
$\psi_{\uparrow (\downarrow )}$ is an anticommuting field describing the
electrons with mass $m$ and spin up (down); $\xi(-i \nabla) = \epsilon(-i
\nabla) - \mu_0$, with $\epsilon(-i \nabla) = - \nabla^2/2m$, is the kinetic
energy operator with the chemical potential $\mu_0$ subtracted.
The Lagrangian (\ref{bcs:BCS}) is invariant under global U(1)
transformations. Under such a transformation, the electron fields pick up
an additional phase factor
\begin{equation} \label{bcs:3g}
\psi_{\sigma} \rightarrow \mbox{e}^{i \alpha }
\psi_{\sigma}
\end{equation}
with $\sigma = \uparrow, \downarrow$ and $\alpha$ a constant.
Notwithstanding its simple form, the microscopic model (\ref{bcs:BCS}) is a
good starting point to describe BCS superconductors. The reason is that the
interaction term allows for the formation of Cooper pairs which below a
critical temperature condense. This results in a nonzero expectation value
of the field $\Delta$ describing the Cooper pairs, and a spontaneous
breakdown of the global U(1) symmetry. This in turn gives rise to the
gapless Anderson-Bogoliubov mode which---after incorporating the
electromagnetic field---lies at the root of most startling properties of
superconductors \cite{Weinberg}.
To obtain the effective theory governing the Anderson-Bogoliubov mode,
let us integrate out the fermionic degrees of freedom. To this end we
introduce Nambu's notation and rewrite the Lagrangian (\ref{bcs:BCS}) in
terms of a two-component field
\begin{equation} \label{bcs:32}
\psi = \left( \begin{array}{c} \psi_{\uparrow} \\
\psi_{\downarrow}^{\ast} \end{array} \right) \:\:\:\:\:\:
\psi^{\dagger} = (\psi_{\uparrow}^{\ast},\psi_{\downarrow}).
\end{equation}
In this notation, ${\cal L}_{0}$ becomes
\begin{equation} \label{bcs:33}
{\cal L}_{0} = \psi^{\dagger}\,
\left(\begin{array}{cc}
i \partial_0 - \xi(-i \nabla) & 0 \\
0 & i \partial_0 + \xi(-i \nabla)
\end{array}\right) \, \psi,
\end{equation}
where we explicitly employed the anticommuting character of the electron
fields and neglected terms which are a total derivative. The partition
function,
\begin{equation} \label{bcs:34}
Z = \int \mbox{D} \psi^{\dagger} \mbox{D} \psi \exp \left( i \int_x
\,{\cal L} \right),
\end{equation}
must for our purpose be manipulated into a form bilinear in the electron
fields. This is achieved by rewriting the quartic interaction term as a
functional integral over auxiliary fields $\Delta$ and $\Delta^*$ (for
details see Ref.\ \cite{KleinertFS}):
\begin{eqnarray} \label{bcs:35}
\lefteqn{
\exp \left( -i \lambda_0 \int_x \psi_{\uparrow}^{\ast}
\, \psi_{\downarrow}^{\ast} \, \psi_{\downarrow}\, \psi_{\uparrow}
\right) = } \\
& & \!\!\!\!\!\! \int \mbox{D} \Delta^* \mbox{D} \Delta \exp \left[ -i
\int_x \left( \Delta^* \, \psi_{\downarrow}\,\psi_{\uparrow} +
\psi_{\uparrow}^{\ast} \, \psi_{\downarrow}^{\ast} \, \Delta -
\frac{1}{\lambda_0 } \Delta^* \Delta \right) \right], \nonumber
\end{eqnarray}
where, as always, an overall normalization factor is omitted. Classically,
$\Delta$ merely abbreviates the product of two electron fields
\begin{equation} \label{bcs:del}
\Delta = \lambda_0 \psi_{\downarrow} \psi_{\uparrow}.
\end{equation}
It would therefore be more appropriate to give $\Delta$ two spin labels
$\Delta_{\downarrow \uparrow}$. Since $\psi_{\uparrow}$ and
$\psi_{\downarrow}$ are anticommuting fields, $\Delta$ is antisymmetric in
these indices. Physically, it describes the Cooper pairs of the
superconducting state.
By employing (\ref{bcs:35}), we can cast the partition function in the desired
bilinear form:
\begin{eqnarray} \label{bcs:36}
Z = \int \mbox{D} \psi^{\dagger} \mbox{D} \psi \int \mbox{D}
\Delta^* \mbox{D} \Delta \!\!\!\!\! && \!\!\!\!\!
\exp\left(\frac{i}{\lambda_0} \int_x \Delta^* \Delta \right) \\ &&
\!\!\!\!\!\!\!\!\!\!\! \times \exp \left[
i \int_x \, \psi^{\dagger} \left( \begin{array}{cc} i \partial_{0} -
\xi(-i \nabla) & -\Delta \\ -\Delta^* & i \partial_{0} + \xi(-i \nabla)
\end{array} \right) \psi \right] \nonumber .
\end{eqnarray}
Changing the order of integration and performing the Gaussian integral over
the Grassmann fields, we obtain
\begin{equation} \label{bcs:37}
Z = \int \mbox{D} \Delta^* \mbox{D} \Delta \, \exp \left(i S_{\rm eff} [
\Delta^*, \Delta] + \frac{i}{\lambda_0}
\int_x \Delta^* \Delta \right),
\end{equation}
where $S_{\rm eff}$ is the one-loop effective action which, using the
identity Det($A$) = exp[Tr ln($A$)], can be cast in the form
\begin{equation} \label{bcs:312}
S_{\rm eff}[\Delta^*, \Delta] = -i \, {\rm Tr} \ln \left(
\begin{array}{cc} p_{0} - \xi ({\bf p}) & -\Delta \\ -\Delta^* &
p_{0} + \xi ({\bf p})
\end{array}\right),
\end{equation}
where $p_0 = i \partial_0$ and $\xi({\bf p}) = \epsilon({\bf p}) - \mu_0$,
with $\epsilon({\bf p}) = {\bf p}^2/2m$.
In the mean-field approximation, the functional integral (\ref{bcs:37}) is
approximated by the saddle point:
\begin{equation} \label{bcs:38}
Z = \exp \left(i S_{\rm eff}
[ \Delta^*_{\rm mf}, \Delta_{\rm mf} ] + \frac{i}{\lambda_0} \int_x
\Delta^*_{\rm mf} \Delta_{\rm mf} \right),
\end{equation}
where $\Delta_{\rm mf}$ is the solution of the mean-field equation
\begin{equation} \label{bcs:gap}
\frac{\delta S_{\rm eff} }{\delta \Delta^* (x) } = - \frac{1}{\lambda_0} \Delta.
\end{equation}
If we assume the system to be spacetime independent so that $\Delta_{\rm
mf}(x) = \bar{\Delta}$, Eq.\ (\ref{bcs:gap}) yields the celebrated BCS gap
\cite{BCS} equation:
\begin{eqnarray} \label{bcs:gape}
\frac{1}{\lambda_0} &=& - i \int_k \frac{1}{k_{0}^{2} - E^{2}(k) + i \eta}
\nonumber \\ &=& - \frac{1}{2} \int_{\bf k} \frac{1}{E({\bf k})},
\end{eqnarray}
where $\eta$ is an infinitesimal positive constant that is to be set to
zero at the end of the calculation, and
\begin{equation} \label{bcs:spec}
E({\bf k}) = \sqrt{\xi^2({\bf k}) + |\bar{\Delta}|^2}
\end{equation}
is the spectrum of the elementary fermionic excitations. If this equation
yields a solution with $\bar{\Delta} \neq 0$, the global U(1) symmetry
(\ref{bcs:3g}) is spontaneously broken since
\begin{equation}
\bar{\Delta} \rightarrow \mbox{e}^{2i \alpha } \bar{\Delta} \neq
\bar{\Delta}
\end{equation}
under this transformation. The factor $2$ in the exponential function arises
because $\Delta$, describing the Cooper pairs, is built from two electron
fields. It satisfies Landau's definition of an order parameter as its value
is zero in the symmetric, disordered state and nonzero in the state with
broken symmetry. It directly measures whether the U(1) symmetry is
spontaneously broken.
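The step from the first to the second line of the gap equation (\ref{bcs:gape}) is the contour integral over the loop energy. A numerical sketch in Python (the values of $E$ and $\eta$ are arbitrary illustrations) checks the residue result at finite $\eta$; multiplying by $-i/2\pi$ and letting $\eta \rightarrow 0^+$ then reproduces $-1/2E({\bf k})$ under the momentum integral:

```python
import cmath
from scipy.integrate import quad

# Check the loop-energy integral behind the gap equation:
#   I = int dk0 / (k0^2 - E^2 + i*eta) = -i*pi / sqrt(E^2 - i*eta),
# so that (-i/2pi)*I -> -1/(2E) as eta -> 0+.  E and eta below are
# arbitrary illustrative values.
E, eta = 1.0, 0.1
exact = -1j * cmath.pi / cmath.sqrt(E**2 - 1j * eta)

def integrand(k0, part):
    val = 1.0 / (k0**2 - E**2 + 1j * eta)
    return val.real if part == "re" else val.imag

# Integrate real and imaginary parts separately; the points option
# tells quad about the near-poles at k0 = +-E.
re, _ = quad(integrand, -1000, 1000, args=("re",), points=[-E, E], limit=400)
im, _ = quad(integrand, -1000, 1000, args=("im",), points=[-E, E], limit=400)

print(abs(complex(re, im) - exact) < 0.01)   # -> True
```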
In the case of a spacetime-independent system, the effective action
(\ref{bcs:312}) is readily evaluated. Writing
\begin{equation}
\left(
\begin{array}{cc} p_{0} - \xi ({\bf p}) & -\bar{\Delta} \\ -\bar{\Delta}^* &
p_{0} + \xi ({\bf p}) \end{array}\right) =
\left(
\begin{array}{cc} p_{0} - \xi ({\bf p}) & 0 \\ 0 &
p_{0} + \xi ({\bf p}) \end{array}\right) - \left(
\begin{array}{cc} 0 & \bar{\Delta} \\ \bar{\Delta}^* & 0 \end{array}\right),
\end{equation}
and expanding the second logarithm in a Taylor series, we recognize the
form
\begin{equation}
S_{\rm eff}[\bar{\Delta}^*, \bar{\Delta}] = -i \, {\rm Tr} \ln \left(
\begin{array}{cc} p_{0} - \xi ({\bf p}) & 0 \\ 0 &
p_{0} + \xi ({\bf p}) \end{array}\right) - i \, {\rm Tr}
\ln \left(1 - \frac{|\bar{\Delta}|^2}{p_0^2 - \xi^2({\bf p})} \right),
\end{equation}
up to an irrelevant constant. The integral over the loop energy $p_0$ gives
for the corresponding effective Lagrangian
\begin{equation}
{\cal L}_{\rm eff} = \int_{\bf k} \left[ E({\bf k}) - \xi({\bf k})
\right].
\end{equation}
To this one-loop result we have to add the tree term
$|\bar{\Delta}|^2/\lambda_0$. Expanding $E({\bf k})$ in $\bar{\Delta}$, we see
that the effective Lagrangian also contains a term quadratic in
$\bar{\Delta}$. This term amounts to a renormalization of the coupling
constant; we find to this order for the renormalized coupling constant
$\lambda$:
\begin{equation} \label{bcs:ren}
\frac{1}{\lambda} = \frac{1}{\lambda_0} + \frac{1}{2} \int_{\bf k}
\frac{1}{|\xi({\bf k})|},
\end{equation}
where it should be remembered that the bare coupling constant $\lambda_0$ is
negative, so that there is an attractive interaction between the fermions.
We shall analyze this equation later on; for the moment it suffices to note
that we can distinguish two limits. First, the limit where the bare coupling
constant is taken to zero, $\lambda_0 \rightarrow 0^-$, which is the famous
weak-coupling BCS limit. Second, the limit where the bare coupling is taken
to minus infinity, $\lambda_0 \rightarrow - \infty$. This is the
strong-coupling limit, where the attractive interaction is such that the
fermions form tightly bound pairs \cite{Leggett}. These composite bosons
have a weak repulsive interaction and can undergo Bose-Einstein
condensation (see succeeding section).
Since there are two unknowns contained in the theory, viz., $\bar{\Delta}$
and $\mu_0$, we need a second equation to determine these variables in the
mean-field approximation \cite{Leggett}. To find the second equation we
note that the average fermion number $N$, which is obtained by
differentiating the effective action (\ref{bcs:312}) with respect to $\mu$,
\begin{equation}
N = \frac{\partial S_{\rm eff}}{\partial \mu},
\end{equation}
is fixed. If the system is spacetime independent, this reduces in the
one-loop approximation to
\begin{equation} \label{bcs:n}
\bar{n} = - i\, {\rm tr} \int_k \, G_0(k) \tau_3,
\end{equation}
where $\bar{n}=N/V$, with $V$ the volume of the system, is the constant
fermion number density, $\tau_3$ is the diagonal Pauli matrix in Nambu
space,
\begin{equation}
\tau_3 = \left(
\begin{array}{cr} 1 & 0 \\ 0 & -1
\end{array} \right),
\end{equation}
and $G_0(k)$ is the Feynman propagator,
\begin{eqnarray} \label{bcs:prop}
G_0(k) &=&
\left( \begin{array}{cc} k_0 - \xi ({\bf k})
& -\bar{\Delta} \\ -\bar{\Delta}^* & k_0 + \xi ({\bf k})
\end{array} \right)^{-1} \\ &=&
\frac{1}{k_0^2 - E^2({\bf k}) + i \eta }
\left( \begin{array}{cc} k_{0} \, {\rm e}^{i k_0 \eta } + \xi
({\bf k}) &
\bar{\Delta} \\ \bar{\Delta}^* & k_{0} \, {\rm e}^{-i k_0 \eta}- \xi
({\bf k}) \end{array} \right). \nonumber
\end{eqnarray}
Here, $\eta$ is an infinitesimal positive constant that is to be set to zero
at the end of the calculation. The exponential functions in the diagonal
elements of the propagator serve as an additional convergence factor needed in
nonrelativistic theories \cite{Mattuck}. If the integral over the loop
energy $k_0$ in the particle number equation (\ref{bcs:n}) is carried out,
it takes the familiar form
\begin{equation} \label{bcs:ne}
\bar{n} = \int_{\bf k} \left(1 - \frac{\xi({\bf k})}{E({\bf k})} \right).
\end{equation}
The two equations (\ref{bcs:gape}) and (\ref{bcs:ne}) determine $\bar{\Delta}$
and $\mu_0$. They are usually evaluated in the weak-coupling BCS limit.
However, as was first pointed out by Leggett \cite{Leggett}, they can also
be easily solved in the strong-coupling limit (see succeeding section),
where the fermions are tightly bound in pairs. More recently, the
crossover between the weak-coupling BCS limit and the strong-coupling
composite boson limit has also been studied in detail
\cite{Haussmann,DrZw,MRE,MPS}.
We are now in a position to derive the effective theory governing the
Anderson-Bogoliubov mode. To this end we write the order parameter
$\Delta_{\rm mf}$ as
\begin{equation} \label{bcs:London}
\Delta_{\rm mf}(x) = \bar{\Delta} \, {\rm e}^{2i \varphi (x)},
\end{equation}
where $\bar{\Delta}$ is a spacetime-independent solution of the mean-field
equation (\ref{bcs:gap}) and $\varphi(x)$ represents the
Anderson-Bogoliubov mode, i.e., the Goldstone mode of the spontaneously
broken U(1) symmetry. This approximation, where the phase of the order
parameter is allowed to vary in spacetime while the modulus is kept
fixed, is called the London limit. This limit is relevant for our
discussion of the zero-temperature superconductor-to-insulator phase
transition in Ch.\ \ref{chap:qpt} because this transition is driven by
phase fluctuations; the modulus of the order parameter remains finite
and constant at the transition. The critical behavior can thus be
studied with this effective theory formulated solely in terms of the
phase field. We proceed by decomposing the Grassmann field as (cf.\ Ref.\
\cite{ESA})
\begin{equation} \label{bcs:decompose}
\psi_\sigma(x) = {\rm e}^{i \varphi(x)} \chi_\sigma(x)
\end{equation}
and substituting the specific form (\ref{bcs:London}) of the order
parameter in the partition function (\ref{bcs:36}). Instead of the
effective action (\ref{bcs:312}) we now obtain
\begin{equation}
S_{\rm eff} = -i {\rm Tr} \ln \left(
\begin{array}{cc} p_{0} - \partial_0 \varphi - \xi ({\bf p} +
\nabla \varphi) & -\bar{\Delta} \\
-\bar{\Delta}^* & p_{0} + \partial_0 \varphi + \xi ({\bf p} -
\nabla \varphi)
\end{array} \right),
\end{equation}
where the derivative $\tilde{\partial}_\mu \varphi$ of the Goldstone field
plays the role of an Abelian gauge field. This expression can be handled
with the help of the derivative expansion outlined in Sec.\ \ref{sec:der},
to yield the phase-only effective theory. We shall not give any details
here and merely state the result \cite{effBCS}, that the effective theory is
again of the form (\ref{eff:Leff}).
\section{Composite Boson Limit}
\label{sec:comp}
In this section we shall investigate the strong-coupling limit of the
pairing theory. In this limit, the attractive interaction between the
fermions is such that they form tightly bound pairs of mass $2m$. To
explicate this limit in arbitrary dimension $d$, we swap the bare coupling
constant for a more convenient parameter---the binding energy $\epsilon_a$
of a fermion pair in vacuum \cite{RDS}. Both parameters characterize the
strength of the contact interaction. To see the connection between the two,
let us consider the Schr\"odinger equation for the problem at hand. In
reduced coordinates it reads
\begin{equation}
\left[- \frac{\nabla^2}{m} + \lambda_0 \, \delta({\bf x}) \right] \psi({\bf
x}) = - \epsilon_a \psi({\bf x}),
\end{equation}
where the reduced mass is $m/2$ and the delta-function potential, with
$\lambda_0 < 0$, represents the attractive contact interaction ${\cal
L}_{\rm i}$ in (\ref{bcs:BCS}). We stress that this is a two-particle
problem in vacuum; it is not the famous Cooper problem of two interacting
fermions on top of a filled Fermi sea. The equation is most easily solved
by Fourier transforming it. This yields the bound-state equation
\begin{equation}
\psi({\bf k}) = - \frac{\lambda_0}{{\bf k}^2/m + \epsilon_a} \psi(0),
\end{equation}
or
\begin{equation}
- \frac{1}{\lambda_0} = \int_{\bf k} \frac{1}{{\bf k}^2/m + \epsilon_a} .
\end{equation}
This equation allows us to replace the coupling constant with the binding energy
$\epsilon_a$. When substituted in the gap equation (\ref{bcs:gape}),
the latter becomes
\begin{equation} \label{bcs:reggap}
\int_{\bf k} \frac{1}{{\bf k}^2/m + \epsilon_a} = \frac{1}{2}
\int_{\bf k} \frac{1}{E({\bf k})}.
\end{equation}
By inspection, it is easily seen that this equation has a solution given
by \cite{Leggett}
\begin{equation} \label{comp:self}
\bar{\Delta} \rightarrow 0, \;\;\;\;\; \mu_0 \rightarrow - \tfrac{1}{2}
\epsilon_a,
\end{equation}
where it should be noted that the chemical potential is negative here:
for these values, $E({\bf k}) = \xi({\bf k}) = {\bf k}^2/2m + \epsilon_a/2$,
so that both sides of (\ref{bcs:reggap}) become identical. This
is the strong-coupling limit. To appreciate the physical significance of
the specific value found for the chemical potential in this limit, we note
that the spectrum $E_{\rm b}({\bf q})$ of the two-fermion bound state
measured relative to the pair chemical potential $2\mu_0$ reads
\begin{equation}
E_{\rm b}({\bf q}) = - \epsilon_a + \frac{{\bf q}^2}{4m} -2 \mu_0.
\end{equation}
The negative value for $\mu_0$ found in (\ref{comp:self}) is precisely the
condition for a Bose-Einstein condensation of the composite bosons in the
${\bf q} = 0$ state.
To investigate this limit further, we consider the effective action
(\ref{bcs:312}) and expand $\Delta(x)$ around a constant value $\bar{\Delta}$
satisfying the gap equation (\ref{bcs:gape}),
\begin{equation}
\Delta(x) = \bar{\Delta} + \tilde{\Delta}(x).
\end{equation}
We obtain in this way,
\begin{equation}
S_{\rm eff} = i \, {\rm Tr} \sum_{l =1}^\infty \frac{1}{l} \left[ G_0(p)
\left( \begin{array}{cc} 0 & \tilde{\Delta} \\
\tilde{\Delta}^* & 0 \end{array} \right) \right]^l,
\end{equation}
where $G_0$ is given in (\ref{bcs:prop}). We are interested in terms
quadratic in $\tilde{\Delta}$. Employing the derivative expansion
outlined in Sec.\ \ref{sec:der}, we find
\begin{eqnarray} \label{comp:Seff}
S_{\rm eff}^{(2)}(q) \!\!\!\! &=& \!\!\!\! \tfrac{1}{2}i \, {\rm Tr} \,
\frac{1}{p_0^2 - E^2({\bf p})}
\frac{1}{(p_0 + q_0)^2 - E^2({\bf p} - {\bf q})} \\ &&
\;\;\;\; \times
\Bigl\{ \bar{\Delta}^2 \, \tilde{\Delta}^* \tilde{\Delta}^*
+ [p_0 + \xi({\bf p})] [p_0 + q_0 - \xi({\bf p} - {\bf q})] \tilde{\Delta}
\tilde{\Delta}^* \nonumber \\ && \;\;\;\;\;\;\;\;\; + \bar{\Delta}^*\mbox{}^2
\tilde{\Delta} \tilde{\Delta}
+ [p_0 - \xi({\bf p})] [p_0 + q_0 + \xi({\bf p} - {\bf q})] \tilde{\Delta}^*
\tilde{\Delta} \Bigr\}, \nonumber
\end{eqnarray}
where $q_\mu = i\tilde{\partial}_\mu$. It is to be recalled here that the
derivative $p_\mu$ operates on everything to its right, while
$\tilde{\partial}_\mu$ operates only on the first object to its right. Let
us for a moment ignore the derivatives in this expression. After carrying
out the integral over the loop energy $p_0$ and using the gap equation
(\ref{bcs:gape}), we then obtain
\begin{equation} \label{comp:Lag1}
{\cal L}^{(2)}(0) = -\frac{1}{8} \int_{\bf k} \frac{1}{E^3({\bf k})}
\left(\bar{\Delta}^2 \, \tilde{\Delta}^*\mbox{}^2 + \bar{\Delta}^*\mbox{}^2
\tilde{\Delta}^2 + 2 |\bar{\Delta}|^2 |\tilde{\Delta}|^2 \right).
\end{equation}
In the composite boson limit $\bar{\Delta} \rightarrow 0$, so that the
spectrum (\ref{bcs:spec}) of the elementary fermionic excitations can be
approximated by
\begin{equation}
E({\bf k}) \approx \epsilon({\bf k}) + \tfrac{1}{2} \epsilon_a.
\end{equation}
The remaining integrals in (\ref{comp:Lag1}) then become elementary,
\begin{equation}
\int_{\bf k} \frac{1}{E^3({\bf k})} = \frac{4 \Gamma(3-d/2)}{(4 \pi)^{d/2}}
m^{d/2} \epsilon_a^{d/2-3}.
\end{equation}
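This closed form can be verified numerically. A Python sketch for $d=3$ in units with $m = \epsilon_a = 1$ (an illustrative check, not part of the derivation):

```python
import math
from scipy.integrate import quad

# Numerical check of the elementary integral in d = 3:
#   int d^3k/(2pi)^3  1/E^3(k)   with  E(k) ~ k^2/2m + epsilon_a/2
# against  4*Gamma(3 - d/2)/(4pi)^{d/2} * m^{d/2} * epsilon_a^{d/2-3}.
m_f, eps_a, d = 1.0, 1.0, 3

def integrand(k):
    E = k**2 / (2.0 * m_f) + eps_a / 2.0
    return k**2 / (2.0 * math.pi**2 * E**3)     # angular integral done

numeric, _ = quad(integrand, 0.0, math.inf)
exact = 4.0 * math.gamma(3 - d / 2) / (4 * math.pi)**(d / 2) \
        * m_f**(d / 2) * eps_a**(d / 2 - 3)

print(abs(numeric / exact - 1.0) < 1e-5)   # -> True
```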
We next consider the terms involving derivatives in (\ref{comp:Seff}).
Following Ref.\ \cite{Haussmann} we set $\bar{\Delta}$ to zero here. The
integral over the loop energy is easily carried out, with the result
\begin{eqnarray}
{\cal L}^{(2)}(q) &=& - \frac{1}{2} \int_{\bf k}
\frac{1}{q_0 - {\bf k}^2/m +
2 \mu_0 - {\bf q}^2/4m} \tilde{\Delta} \tilde{\Delta}^* \nonumber \\ &&
- \frac{1}{2} \int_{\bf k} \frac{1}{-q_0 - {\bf k}^2/m +
2 \mu_0 - {\bf q}^2/4m} \tilde{\Delta}^* \tilde{\Delta}.
\end{eqnarray}
The integral over the loop momentum ${\bf k}$ gives, in the strong-coupling
limit and using dimensional regularization,
\begin{equation} \label{comp:Lag2}
\int_{\bf k} \frac{1}{q_0 - {\bf k}^2/m -\epsilon_a - {\bf q}^2/4m} =
- \frac{\Gamma(1-d/2)}{(4 \pi)^{d/2}} m^{d/2} (-q_0 + \epsilon_a + {\bf
q}^2/4m)^{d/2-1},
\end{equation}
or expanded in derivatives
\begin{eqnarray}
\lefteqn{\int_{\bf k} \frac{1}{q_0 - {\bf k}^2/m - \epsilon_a - {\bf
q}^2/4m} =} \\ &&
- \frac{ \Gamma(1-d/2)}{(4 \pi)^{d/2}} m^{d/2} \epsilon_a^{d/2-1} -
\frac{ \Gamma(2-d/2)}{(4 \pi)^{d/2}} m^{d/2}
\epsilon_a^{d/2-2} \left(q_0 - \frac{{\bf q}^2}{4m} \right) + \cdots.
\nonumber
\end{eqnarray}
The first term on the right-hand side yields as contribution to the
effective theory
\begin{equation} \label{bcs:con}
{\cal L}^{(2)}_\lambda = \frac{\Gamma(1-d/2)}{(4 \pi)^{d/2}} m^{d/2}
\epsilon_a^{d/2-1} |\tilde{\Delta}|^2.
\end{equation}
To this we have to add the contribution $|\tilde{\Delta}|^2/\lambda_0$
coming from the tree potential, i.e., the last term in the partition
function (\ref{bcs:37}). But this combination is none other than the one
needed to define the renormalized coupling constant via (\ref{bcs:ren}),
which in the strong-coupling limit reads explicitly
\begin{equation}
\frac{1}{\lambda} = \frac{1}{\lambda_0} +
\frac{\Gamma(1-d/2)}{(4 \pi)^{d/2}} m^{d/2} \epsilon_a^{d/2-1}.
\end{equation}
In other words, the contribution (\ref{bcs:con}) can be combined with the
tree contribution to yield the term $|\tilde{\Delta}|^2/\lambda$. Expanding
the right-hand side of (\ref{comp:Lag2}) in powers of the derivative $q_\mu$
using the value (\ref{comp:self}) for the chemical potential, and pasting
the pieces together, we obtain for the terms quadratic in $\tilde{\Delta}$
\cite{Haussmann},
\begin{equation}
{\cal L}^{(2)} = \frac{1}{2} \frac{\Gamma(2-d/2)}{(4 \pi)^{d/2}} m^{d/2}
\epsilon_a^{d/2-2}\, \tilde{\Psi}^\dagger \,
M_0(q) \, \tilde{\Psi}, \;\;\;\;\; \tilde{\Psi} = \left(\begin{array}{l}
\tilde{\Delta} \\ \tilde{\Delta}^* \end{array} \right),
\end{equation}
where $M_0(q)$ is the $2 \times 2$ matrix,
\begin{eqnarray} \label{comp:M}
\lefteqn{M_0(q) =} \\ && \!\!\!\!\!\!\!\!
\left( \begin{array}{cc}
q_0 - {\bf q}^2/4m - (2-d/2) |\bar{\Delta}|^2/ \epsilon_a &
\!\!\!\! - (2-d/2) \bar{\Delta}^2/ \epsilon_a \\
- (2-d/2) \bar{\Delta}^*\mbox{}^2/ \epsilon_a
& \!\!\!\! -q_0 - {\bf q}^2/4m - (2-d/2) |\bar{\Delta}|^2/ \epsilon_a
\end{array} \right). \nonumber
\end{eqnarray}
This Lagrangian is precisely of the form found in (\ref{eff:L0})
describing an interacting Bose gas. On comparing with Eq.\ (\ref{eff:M}),
we conclude that the composite bosons have---as expected---a mass
$m_{\rm b}=2m$ twice the fermion mass $m$, and a small chemical
potential
\begin{equation}
\mu_{0,{\rm b}} = (2-d/2) \frac{|\bar{\Delta}|^2}{\epsilon_a}.
\end{equation}
From (\ref{comp:M}) one easily extracts the Bogoliubov spectrum and the
velocity $c_0$ of the sound mode it describes,
\begin{equation}
c_0^2 = \frac{\mu_{0,{\rm b}}}{m_{\rm b}} = (1-d/4)
\frac{|\bar{\Delta}|^2}{m \epsilon_a}.
\end{equation}
Also the number density $\bar{n}_{0,{\rm b}}$ of condensed composite bosons,
\begin{equation}
\bar{n}_{0,{\rm b}} = \frac{\Gamma(2-d/2)}{(4 \pi)^{d/2}} m^{d/2}
\epsilon_a^{d/2-2} |\bar{\Delta}|^2
\end{equation}
as well as the weak repulsive interaction $\lambda_{0,{\rm b}}$ between the
composite bosons,
\begin{equation} \label{comp:lambda}
\lambda_{0,{\rm b}} = (4 \pi)^{d/2} \frac{1-d/4}{\Gamma(2-d/2)}
\frac{\epsilon_a^{1-d/2}}{m^{d/2}}
\end{equation}
follow immediately. In this way we have explicitly demonstrated that
the BCS theory in the composite boson limit maps onto the Bogoliubov
theory.
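As a consistency check (a short verification we add here), these expressions satisfy the mean-field Bogoliubov relations between condensate density, coupling constant, chemical potential, and sound velocity:
\begin{equation}
2 \lambda_{0,{\rm b}} \bar{n}_{0,{\rm b}} = \left(2 - \frac{d}{2}\right)
\frac{|\bar{\Delta}|^2}{\epsilon_a} = \mu_{0,{\rm b}}, \;\;\;\;\;
\frac{2 \lambda_{0,{\rm b}} \bar{n}_{0,{\rm b}}}{m_{\rm b}} =
\left(1 - \frac{d}{4}\right) \frac{|\bar{\Delta}|^2}{m \epsilon_a} = c_0^2,
\end{equation}
in accordance with the tree-level minimum (\ref{eff:min}) and the sound velocity of the Bogoliubov theory.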
In concluding this section, we remark that in $d=2$ various integrals we
encountered become elementary for arbitrary values of $\bar{\Delta}$. For
example, the gap equation (\ref{bcs:reggap}) reads explicitly in $d=2$
\begin{equation}
\epsilon_a = \sqrt{\mu_0^2 + |\bar{\Delta}|^2} - \mu_0,
\end{equation}
while the particle number equation (\ref{bcs:ne}) becomes
\begin{equation}
\bar{n} = \frac{m}{2 \pi} \left(\sqrt{\mu_0^2 + |\bar{\Delta}|^2} + \mu_0 \right).
\end{equation}
Since in two dimensions,
\begin{equation}
\bar{n} = \frac{k_{\rm F}^2}{2 \pi} = \frac{m}{\pi} \epsilon_{\rm F},
\end{equation}
with $k_{\rm F}$ and $\epsilon_{\rm F} = k_{\rm F}^2/2m$ the Fermi momentum
and energy, the two equations can be combined to yield \cite{RDS}
\begin{equation}
\frac{\epsilon_a}{\epsilon_{\rm F}} = 2 \frac{\sqrt{\mu_0^2 +
|\bar{\Delta}|^2} - \mu_0}{\sqrt{\mu_0^2 + |\bar{\Delta}|^2} + \mu_0}.
\end{equation}
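These two-dimensional relations are elementary enough to verify numerically. The following minimal sketch (the function name and parameter values are illustrative; units with $\hbar = 1$) evaluates the gap and particle-number equations and checks the combined relation for $\epsilon_a/\epsilon_{\rm F}$:

```python
import math

def two_d_bcs(mu0, delta, m=1.0):
    """Evaluate the d = 2 gap and particle-number equations for a given
    chemical potential mu0 and gap modulus |Delta| (units with hbar = 1)."""
    e = math.sqrt(mu0**2 + delta**2)
    eps_a = e - mu0                         # gap equation
    n_bar = m / (2 * math.pi) * (e + mu0)   # particle-number equation
    eps_F = math.pi * n_bar / m             # from n = m eps_F / pi in d = 2
    return eps_a, n_bar, eps_F

# illustrative values of the chemical potential and gap
mu0, delta = 0.8, 0.3
eps_a, n_bar, eps_F = two_d_bcs(mu0, delta)

# combined relation: eps_a/eps_F = 2 (sqrt(mu0^2+|D|^2) - mu0)/(sqrt(mu0^2+|D|^2) + mu0)
e = math.sqrt(mu0**2 + delta**2)
assert abs(eps_a / eps_F - 2 * (e - mu0) / (e + mu0)) < 1e-12
```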
The composite boson limit we have been discussing in this section is
easily retrieved from these more general equations. Also note that in
this limit, $\bar{n} = 2 \bar{n}_{0,{\rm b}}$, while the renormalization
of the coupling constant takes the same form as for an interacting Bose
gas
\begin{equation}
\frac{1}{\lambda_0} = \frac{1}{\kappa^\epsilon} \left( \frac{1}{\hat{\lambda}} -
\frac{m}{4 \pi \epsilon} \right),
\end{equation}
cf.\ (\ref{eff:lambdar}).
\section{Dual Theory}
\label{sec:2sc}
We now turn to the dual description of a superconducting film at finite
temperature. We thereto minimally couple the model of Sec.\
\ref{sec:KT} to a magnetic field described by the magnetic vector
potential ${\bf A}$. For the time being we ignore vortices by setting
the vortex gauge field $\bbox{\varphi}^{\rm P}$ to zero. The partition
function of the system then reads
\begin{equation} \label{2sc:znovor}
Z = \int \mbox{D}\varphi \int \mbox{D} {\bf A} \, \Xi ({\bf A})
\, \exp\left( -\beta \int_{\bf x}{\cal H} \right),
\end{equation}
where $\Xi({\bf A})$ is a gauge-fixing factor for the gauge field ${\bf A}$,
and ${\cal H}$ is the Hamiltonian
\begin{equation} \label{2sc:H}
{\cal H} = \tfrac{1}{2} \rho_{\rm s} {\bf v}_{\rm s}^2 + \tfrac{1}{2}
(\nabla \times {\bf A})^2
\end{equation}
with
\begin{equation} \label{2sc:vs}
{\bf v}_{\rm s} = \frac{1}{m} (\nabla \varphi - 2e {\bf A}).
\end{equation}
The double charge $2e$ is the charge of the Cooper pairs, which form at
the bulk transition temperature. The functional integral over
$\varphi$ in (\ref{2sc:znovor}) is easily carried out with the result
\begin{equation} \label{2sc:schwlike}
Z = \int \mbox{D} {\bf A} \, \Xi({\bf A}) \, {\rm exp}\left\{-\frac{\beta}{2}
\int_{\bf x} \left[(\nabla \times {\bf A})^2 + m_A^2 A_i \left( \delta_{i j}
- \frac{\partial _i
\partial_j}{\nabla^2} \right) A_j \right] \right\},
\end{equation}
where the last term, with $m^2_A = 4 e^2 \rho_{\rm s}/m^2$, is a
gauge-invariant, albeit nonlocal mass term for the gauge field
generated by the Higgs mechanism. The number of degrees of freedom does
not change in the process. This can be seen by noting that a gapless
gauge field in two dimensions represents no physical degrees of freedom.
(In Minkowski spacetime, this is easily understood by recognizing that
in $1+1$ dimensions there is no transverse direction.) Before the Higgs
mechanism takes place, the system therefore contains only a single
physical degree of freedom described by $\varphi$. This equals the
number of degrees of freedom contained in (\ref{2sc:schwlike}).
We next introduce an auxiliary field $\tilde{h}$ to linearize the first term
in (\ref{2sc:schwlike}),
\begin{equation} \label{2sc:efield}
\exp \left[-\frac{\beta}{2} \int_{\bf x} (\nabla \times {\bf A})^2
\right] = \int \mbox{D} \tilde{h} \, {\rm exp}\left[-\frac{1}{2 \beta} \int_{\bf x}
\tilde{h}^2 + i \int_{\bf x} \tilde{h} (\nabla
\times {\bf A}) \right],
\end{equation}
and integrate out the gauge-field fluctuations [with a gauge-fixing term
$(1/2\alpha)(\nabla \cdot {\bf A})^2$]. The result is a manifestly
gauge-invariant expression for the partition function in terms of a massive
scalar field $\tilde{h}$, representing the single degree of freedom
contained in the theory:
\begin{equation} \label{2sc:massivescalar}
Z = \int \mbox{D} \tilde{h} \, {\rm exp}\left\{-\frac{1}{2 \beta} \int_{\bf x}
\left[ \frac{1}{m_A^2} (\nabla \tilde{h})^2 + \tilde{h}^2 \right] \right\}.
\end{equation}
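The Gaussian integration leading to this result amounts to a partial integration and completing the square. In two dimensions $\nabla \times {\bf A} = \epsilon_{ij} \partial_i A_j$, so the source term can be written as
\begin{equation}
i \int_{\bf x} \tilde{h} \, \nabla \times {\bf A} = - i \int_{\bf x}
(\epsilon_{ij} \partial_i \tilde{h}) A_j,
\end{equation}
with a purely transverse source $\epsilon_{ij} \partial_i \tilde{h}$. Integrating out the transverse part of ${\bf A}$, whose quadratic term carries the mass $m_A^2$, then produces the gradient term $(\nabla \tilde{h})^2/2 \beta m_A^2$, since $(\epsilon_{ij} \partial_i \tilde{h})^2 = (\nabla \tilde{h})^2$.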
To understand the physical significance of this field, we note from
(\ref{2sc:efield}) that it satisfies the field equation
\begin{equation} \label{2sc:id}
\tilde{h} = i \beta \nabla \times {\bf A}.
\end{equation}
That is, the fluctuating field $\tilde{h}$ represents the local magnetic
induction, which is a scalar in two space dimensions. Equation
(\ref{2sc:massivescalar}) shows that the magnetic field has a finite
penetration depth $\lambda_{\rm L} = 1/m_A$. In contrast to the original
description where the functional integral runs over the gauge potential, the
integration variable in (\ref{2sc:massivescalar}) is the physical field.
We next include vortices. The penetration depth $\lambda_{\rm L}$ provides
the system with an infrared cutoff so that a single magnetic vortex in the
charged theory has a finite energy. Vortices can therefore be thermally
activated. This is different from the superfluid phase of the neutral
model, where the absence of an infrared cutoff permits only tightly bound
vortex-antivortex pairs to exist. We accordingly expect the
superconducting phase to contain a plasma of vortices, each carrying one
magnetic flux quantum $\pm \pi/e$. The partition function now reads
\begin{equation} \label{2sc:vincluded}
Z = \sum_{N_{+},N_{-}=0}^{\infty} \frac{z^{N_{+}+N_{-}}}{N_{+}!\, N_{-}!}
\prod_{\alpha} \int_{{\bf x}^\alpha} \, \int \mbox{D} \varphi
\int \mbox{D} {\bf A} \, \Xi({\bf A}) \, \exp \left(- \beta \int_{\bf
x} {\cal H} \right)
\end{equation}
where $z$ is the fugacity, i.e., the Boltzmann factor associated with the
vortex core energy. The velocity appearing in the Hamiltonian (\ref{2sc:H})
now includes the vortex gauge field
\begin{equation}
{\bf v}_{\rm s} = \frac{1}{m} (\nabla \varphi - 2e {\bf A} -
\bbox{\varphi}^{\rm P}).
\end{equation}
This field can be shifted from the first to the second term in the
Hamiltonian (\ref{2sc:H}) by applying the transformation ${\bf A}
\rightarrow {\bf A} - \bbox{\varphi}^{\rm P}/2e$. This results in the shift
\begin{equation}
\nabla \times {\bf A} \rightarrow \nabla \times {\bf A} - B^{\rm P},
\end{equation}
with the plastic field
\begin{equation} \label{2sc:BP}
B^{\rm P} = -\Phi_0 \sum_{\alpha} w_{\alpha} \, \delta({\bf x} - {\bf
x}^{\alpha })
\end{equation}
representing the magnetic flux density. Here, $\Phi_0 = \pi/e$ is the
elementary flux quantum. Repeating the steps of the previous paragraph
we now obtain instead of (\ref{2sc:massivescalar})
\begin{equation} \label{2sc:vortexsum}
Z = \sum_{N_\pm=0}^\infty \frac{z^{N_{+}+N_{-}}}{N_{+}!\, N_{-}! }
\prod_{\alpha} \int_{{\bf x}^\alpha} \int \mbox{D} \tilde{h} \, {\rm
exp}\left\{-\frac{1}{2\beta} \int_{\bf x} \left[\frac{1}{m_A^2} (\nabla
\tilde{h})^2 + \tilde{h}^2 \right] + i \int_{\bf x} B^{\rm P}
\tilde{h}\right\},
\end{equation}
where $\tilde{h}$ represents the physical local magnetic induction $h$:
\begin{equation}
\tilde{h} = i \beta (\nabla \times {\bf A} - B^{\rm P}) = i \beta h.
\end{equation}
The field equation for $\tilde{h}$ obtained from (\ref{2sc:vortexsum})
yields for the magnetic induction:
\begin{equation} \label{2sc:fam}
- \nabla^2 h + m_A^2 h = m_A^2 B^{\rm P},
\end{equation}
which is the familiar equation in the presence of magnetic vortices.
The last term in (\ref{2sc:vortexsum}) shows that the charge $g$ with which
a magnetic vortex couples to the fluctuating $\tilde{h}$-field is the
product of an elementary flux quantum (contained in the definition of
$B^{\rm P}$) and the inverse penetration depth $m_A = 1/\lambda_{\rm L}$,
\begin{equation} \label{2sc:g}
g = \Phi_0 m_A.
\end{equation}
For small fugacities the summation indices $N_{+}$ and $N_{-}$ can be
restricted to the values $0,1$ and we arrive at the partition function
of the massive sine-Gordon model \cite{Schaposnik}
\begin{equation} \label{2sc:sineGordon}
Z = \int \mbox{D} \tilde{h} \, {\rm exp} \left( - \int_{\bf x}
\left\{\frac{1}{2 \beta} \left[\frac{1}{m_A^2} (\nabla \tilde{h})^2 +
\tilde{h}^2\right]- 2z \cos \left( \Phi_0 \tilde{h} \right) \right\}
\right).
\end{equation}
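The cosine arises because the vortex sum factorizes. With $B^{\rm P}$ given by (\ref{2sc:BP}), the coupling term reads $i \int_{\bf x} B^{\rm P} \tilde{h} = -i \Phi_0 \sum_\alpha w_\alpha \tilde{h}({\bf x}^\alpha)$, so that each vortex species exponentiates:
\begin{equation}
\sum_{N_\pm=0}^\infty \frac{z^{N_+ + N_-}}{N_+! \, N_-!}
\left( \int_{\bf x} {\rm e}^{-i \Phi_0 \tilde{h}} \right)^{N_+}
\left( \int_{\bf x} {\rm e}^{i \Phi_0 \tilde{h}} \right)^{N_-}
= \exp \left[ 2 z \int_{\bf x} \cos \left( \Phi_0 \tilde{h} \right) \right].
\end{equation}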
This is the dual formulation of a two-dimensional superconductor. The
magnetic vortices of unit winding number $w_\alpha = \pm 1$ turned the
otherwise free theory (\ref{2sc:massivescalar}) into an interacting one.
The final form (\ref{2sc:sineGordon}) demonstrates the rationales for going
over to a dual theory. First, it is a formulation directly in terms of a
physical field representing the local magnetic induction. There is no
redundancy in this description and therefore no gauge invariance. Second,
the magnetic vortices are accounted for in a nonsingular fashion. This is
different from the original formulation of the two-dimensional
superconductor where the local magnetic induction is the curl of an
unphysical gauge potential ${\bf A}$, and where the magnetic vortices appear
as singular objects.
Up to this point we have discussed a genuine two-dimensional superconductor.
As a model to describe superconducting films this is, however, not adequate.
The reason is that the magnetic interaction between the vortices takes place
mostly not through the film but through free space surrounding the film
where the photon is gapless. This situation is markedly different from a
superfluid film. The interaction between the vortices there is mediated by
the Kosterlitz-Thouless mode which is confined to the film. A genuine
two-dimensional theory therefore gives a satisfactory description of a
superfluid film.
To account for the fact that the magnetic induction is not confined to the
film and can roam in outer space, the field equation (\ref{2sc:fam}) is
modified in the following way \cite{Pearl,deGennes}
\begin{equation} \label{2sc:mod}
- \nabla^2 h({\bf x}_\perp,x_3) + \frac{1}{\lambda_\perp} \delta_d(x_3) h({\bf
x}_\perp,x_3) = \frac{1}{\lambda_\perp} \delta_d(x_3) B^{\rm P}({\bf x}).
\end{equation}
Here, $1/\lambda_\perp = d m_A^2 $, with $d$ denoting the thickness of the
superconducting film, is an inverse length scale, ${\bf x}_\perp$ denotes
the coordinates in the plane, $h$ the component of the induction field
perpendicular to the film, and $\delta_d(x_3)$ is a smeared delta function
of thickness $d$ along the $x_3$-axis
\begin{equation}
\delta_d(x_3) \left\{ \begin{array}{cc} = 0 & {\rm for} \;\;\;\; |x_3| > d/2
\\ \neq 0 & {\rm for} \;\;\;\; |x_3| \leq d/2 \end{array} \right. .
\end{equation}
The reason for including the smeared delta function on the right-hand side
of (\ref{2sc:mod}) is that the vortices are confined to the film. The delta
function in the second term on the left-hand side is included because this
term is generated by screening currents, which are also confined to the film.
To be definite, we consider a single magnetic vortex located at the origin.
The induction field found from (\ref{2sc:mod}) reads
\begin{equation}
h({\bf x}_\perp,0) = \frac{\Phi_0}{2 \pi} \int_0^\infty \mbox{d} q \frac{q}{1+ 2
\lambda_\perp q} J_0(q |{\bf x}_\perp|),
\end{equation}
with $J_0$ the zeroth-order Bessel function of the first kind. At small
distances from the vortex core ($\lambda_\perp q \gg 1$)
\begin{equation} \label{2sc:vincin}
h({\bf x}_\perp,0) \sim \frac{\Phi_0}{4 \pi \lambda_\perp |{\bf x}_\perp|},
\end{equation}
while far away ($\lambda_\perp q \ll 1$)
\begin{equation}
h({\bf x}_\perp,0) \sim \frac{\Phi_0 \lambda_\perp}{\pi |{\bf x}_\perp|^3}.
\end{equation}
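Both asymptotic forms follow directly from the integral representation. At small distances the denominator is dominated by $2 \lambda_\perp q$, so that, with $\int_0^\infty \mbox{d} q \, J_0(q r) = 1/r$,
\begin{equation}
h({\bf x}_\perp,0) \approx \frac{\Phi_0}{4 \pi \lambda_\perp} \int_0^\infty
\mbox{d} q \, J_0(q |{\bf x}_\perp|) = \frac{\Phi_0}{4 \pi \lambda_\perp |{\bf
x}_\perp|},
\end{equation}
while at large distances the expansion $q/(1 + 2 \lambda_\perp q) = q - 2 \lambda_\perp q^2 + \cdots$, together with the distributional identities $\int_0^\infty \mbox{d} q \, q \, J_0(q r) = 0$ (for $r \neq 0$) and $\int_0^\infty \mbox{d} q \, q^2 J_0(q r) = -1/r^3$, reproduces the $1/|{\bf x}_\perp|^3$ tail.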
This power-law tail shows that the field does not decay exponentially, as
it would in a genuine two-dimensional system. The reason for
the long range is that most of the magnetic interaction takes place in
free space outside the film where the photon is gapless. If, as is
often the case, the length $\lambda_\perp =1/d m_A^2$ is much larger
than the sample size, it can be effectively set to infinity. In this
limit, the effect of the magnetic interaction diminishes, as can be seen
from (\ref{2sc:vincin}), and the vortices behave as in a superfluid
film. One therefore expects a superconducting film to also undergo a
Kosterlitz-Thouless transition at a temperature $T_{\rm KT}$
characterized by an unbinding of vortex-antivortex pairs. The first
experiment to study this possibility was carried out in Ref.\
\cite{BMO}. Because the transition temperature $T_{\rm KT}$ is well
below the bulk temperature $T_{\rm c}$ where the Cooper pairs form, the
energy gap of the fermions remains finite at the critical point
\cite{CFGWY}. This prediction has been corroborated by experiments
performed by Hebard and Paalanen on superconducting films \cite{HPsu1}.
For temperatures $T_{\rm KT} \leq T \leq T_{\rm c}$, there is a plasma
of magnetic vortices which disorder the superconducting state. At
$T_{\rm KT}$ vortices and antivortices bind into pairs and
algebraic long-range order sets in.
\chapter{Superfluidity \label{chap:super}}
A central role in these Lectures is played by an interacting Bose gas.
In this chapter we wish to study some of its salient features, notably
its ability to become superfluid below a critical temperature. We shall
derive the zero-temperature effective theory of the superfluid state,
and discuss the effect of the inclusion of impurities and of a $1/|{\bf
x}|$-Coulomb potential. Finally, vortices both at the absolute zero of
temperature and at finite temperature are studied.
\section{Bogoliubov Theory}
The system of an interacting Bose gas is defined by the Lagrangian
\cite{GrPi}
\begin{equation} \label{eff:Lagr}
{\cal L} = \phi^* \bigl[i \partial_0 - \epsilon(-i \nabla) + \mu_0\bigr]
\phi - \lambda_0 |\phi|^4,
\end{equation}
where the complex scalar field $\phi$ describes the atoms of mass $m$,
$\epsilon(-i \nabla) = - \nabla^2/2m$ is the kinetic energy operator, and
$\mu_0$ is the chemical potential. The last term with positive coupling
constant, $\lambda_0 > 0$, represents a repulsive contact interaction. The
(zero-temperature) grand-canonical partition function $Z$ is obtained by
integrating over all field configurations weighted with an exponential
factor determined by the action $S = \int_x {\cal L}$:
\begin{equation}
Z = \int \mbox{D} \phi^* \mbox{D} \phi \, {\rm e}^{i S}.
\end{equation}
This is the quantum analog of Eq.\ (\ref{Zfunct})---the
functional-integral representation of a classical partition function.
The theory (\ref{eff:Lagr}) possesses a global U(1) symmetry under which
\begin{equation}
\phi(x) \rightarrow {\rm e}^{i \alpha} \phi(x),
\end{equation}
with $\alpha$ a constant transformation parameter. At zero temperature,
this symmetry is spontaneously broken by a nontrivial ground state, and
the system is in its superfluid phase. Most of the startling phenomena
of a superfluid follow from this symmetry breakdown. The nontrivial
ground state is easily identified by considering the shape of the potential
\begin{equation} \label{eff:V}
{\cal V} = - \mu_0 |\phi|^2 + \lambda_0 |\phi|^4,
\end{equation}
depicted in Fig.\ \ref{fig:potential}. It is seen to have a minimum away
from the origin $\phi = 0$.
\begin{figure}
\begin{center}
\epsfxsize=8.cm
\mbox{\epsfbox{potential.eps}}
\end{center}
\caption{Graphical representation of the potential
(\protect\ref{eff:V}). \label{fig:potential}}
\end{figure}
To account for this, we shift $\phi$ by a (complex) constant $\bar{\phi}$ and
write
\begin{equation} \label{eff:newfields}
\phi(x) = {\rm e}^{i \varphi(x)} \, [\bar{\phi} + \tilde{\phi}(x)].
\end{equation}
The phase field $\varphi(x)$ represents the Goldstone mode accompanying the
spontaneous breakdown of the global U(1) symmetry. At zero temperature, the
constant value
\begin{equation} \label{eff:min}
|\bar{\phi}|^2 = \frac{1}{2} \frac{\mu_0}{\lambda_0 }
\end{equation}
minimizes the potential energy. It physically represents the number
density of particles contained in the condensate, since the total
particle number density is given by
\begin{equation}
n(x) = |\phi(x)|^2 .
\end{equation}
Because $\bar{\phi}$ is a constant, the condensate is a uniform,
zero-momentum state. That is, the particles residing in the ground state
are in the ${\bf k}=0$ mode. We will be working in the Bogoliubov
approximation which amounts to including only the quadratic terms in
$\tilde{\phi}$ and ignoring the higher-order ones. These terms may be cast
in the matrix form
\begin{equation} \label{eff:L0}
{\cal L}^{(2)} = \tfrac{1}{2} \tilde{\Phi}^{\dagger} M_0(p,x)
\tilde{\Phi}, \;\;\;\;\;\; \tilde{\Phi} = \left(\begin{array}{l}
\tilde{\phi} \\
\tilde{\phi}^* \end{array} \right),
\end{equation}
with
\begin{eqnarray} \label{eff:M}
\lefteqn{M_0(p,x) =} \\ \nonumber && \!\!\!\!\!\!\!
\left( \begin{array}{cc}
p_0 - \epsilon({\bf p}) + \mu_0 - U(x) - 4 \lambda_0 |\bar{\phi}|^2 &
\!\!\!\! - 2 \lambda_0 \bar{\phi}^2 \\
- 2 \lambda_0 \bar{\phi}^*\mbox{}^2 & \!\!\!\! -p_0 - \epsilon ({\bf p})
+ \mu_0 - U(x) - 4 \lambda_0 |\bar{\phi}|^2
\end{array} \right),
\end{eqnarray}
where $U$ stands for the combination
\begin{equation} \label{eff:U}
U(x) = \partial_0 \varphi(x) + \frac{1}{2m} [\nabla \varphi(x)]^2.
\end{equation}
In writing (\ref{eff:M}) we have omitted a term $\nabla^2 \varphi$
containing two derivatives which is irrelevant in the regime of low
momentum in which we shall be interested. We also omitted a term of the
form $\nabla \varphi \cdot {\bf j}$, where ${\bf j}$ is the Noether
current associated with the global U(1) symmetry,
\begin{equation}
{\bf j} = \frac{1}{2 i m} \phi^*
\stackrel{\leftrightarrow}{\nabla} \phi.
\end{equation}
This term, which after a partial integration becomes $- \varphi \nabla
\cdot {\bf j}$, is irrelevant too at low energy and small momentum
because in a first approximation the particle number density is
constant, so that the classical current satisfies the condition
\begin{equation}
\nabla \cdot {\bf j} =0.
\end{equation}
The spectrum $E({\bf k})$ obtained from the matrix $M_0$ with the
field $U$ set to zero is the famous single-particle Bogoliubov
spectrum \cite{Bogoliubov},
\begin{eqnarray} \label{eff:bogo}
E({\bf k}) &=& \sqrt{ \epsilon ^2({\bf k}) + 2 \mu_0 \epsilon({\bf k}) }
\nonumber \\ &=& \sqrt{ \epsilon ^2({\bf k}) + 4 \lambda_0 |\bar{\phi}|^2
\epsilon({\bf k}) }.
\end{eqnarray}
The most notable feature of this spectrum is that it is gapless,
behaving for small momentum as
\begin{equation} \label{eff:micror}
E({\bf k}) \sim u_0 \, |{\bf k}|,
\end{equation}
with $u_0 = \sqrt{\mu_0/m}$ a velocity which is sometimes referred to as the
microscopic sound velocity. It was first shown by Beliaev \cite{Beliaev} that
the gaplessness of the single-particle spectrum persists at the one-loop
order. This was subsequently proven to hold to all orders in
perturbation theory by Hugenholtz and Pines \cite{HP}. For large
momentum, the Bogoliubov spectrum takes a form
\begin{equation} \label{eff:med}
E({\bf k}) \sim \epsilon({\bf k}) + 2 \lambda_0 |\bar{\phi}|^2
\end{equation}
typical for a nonrelativistic particle with mass $m$ moving in a medium. To
highlight the condensate we have chosen here the second form in
(\ref{eff:bogo}) where $\mu_0$ is replaced with $2 \lambda_0 |\bar{\phi}|^2$.
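The two limiting forms of the Bogoliubov spectrum are easily checked numerically; a minimal sketch with illustrative parameters (units with $\hbar = 1$):

```python
import math

def bogoliubov(k, mu0=1.0, m=1.0):
    """Bogoliubov spectrum E(k) = sqrt(eps(k)^2 + 2 mu0 eps(k)),
    with eps(k) = k^2 / 2m (units with hbar = 1)."""
    eps = k**2 / (2 * m)
    return math.sqrt(eps**2 + 2 * mu0 * eps)

mu0, m = 1.0, 1.0
u0 = math.sqrt(mu0 / m)  # microscopic sound velocity

# small momentum: E(k) ~ u0 |k|  (gapless, linear)
k = 1e-3
assert abs(bogoliubov(k, mu0, m) / (u0 * k) - 1) < 1e-6

# large momentum: E(k) ~ eps(k) + mu0, with mu0 = 2 lambda0 |phibar|^2
k = 1e3
eps = k**2 / (2 * m)
assert abs(bogoliubov(k, mu0, m) - (eps + mu0)) < 1e-3
```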
\section{Effective Theory} \label{sec:ET}
Since gapless modes in general require a justification for their existence,
we expect the gaplessness of the single-particle spectrum to be a result of
Goldstone's theorem. This is corroborated by the relativistic version of
the theory. There, one finds two spectra, one corresponding to a massive
Higgs particle which in the nonrelativistic limit becomes too heavy and
decouples from the theory, and one corresponding to the Goldstone mode of
the spontaneously broken global U(1) symmetry \cite{BBD}. The latter
reduces in the nonrelativistic limit to the Bogoliubov spectrum. Also, when
the theory is coupled to an electromagnetic field, one finds that the
single-particle spectrum acquires an energy gap. This is what one expects
to happen with the spectrum of a Goldstone mode when the Higgs mechanism is
operating. The equivalence of the single-particle excitation and the
collective density fluctuation has been proven to all orders in perturbation theory
by Gavoret and Nozi\`eres \cite{GN}.
Let us derive the effective theory governing the Goldstone mode at low
energy and small momentum by integrating out the fluctuating field
$\tilde{\Phi}$ \cite{effbos}. The effective theory is graphically
represented by Fig.\ \ref{fig:effective}.
\begin{figure}
\begin{center}
\epsfxsize=8.cm
\mbox{\epsfbox{effective.eps}}
\end{center}
\caption{Graphical representation of the effective theory
(\protect\ref{eff:Leff}). The symbols are explained in the
text. \label{fig:effective}}
\end{figure}
A line with a shaded bubble inserted stands for $i$ times the {\it full}
Green function $G$ and the black bubble denotes $i$ times the {\it full}
interaction $\Gamma$ of the $\tilde{\Phi}$-field with the field $U$
which is denoted by a wiggly line. Both $G$ and $\Gamma$ are $2 \times 2$
matrices. The full interaction is obtained from the inverse Green function
by differentiation with respect to the chemical potential,
\begin{equation} \label{bcs:defga}
\Gamma = - \frac{\partial G^{-1}}{\partial \mu}.
\end{equation}
This follows because $U$, as defined in (\ref{eff:U}), appears in the theory
only in the combination $\mu_0 - U$. To lowest order, the inverse
propagator is given by the matrix $M_0$ in (\ref{eff:M}) with $U(x)$ set to
zero. It follows that the vertex of the interaction between the
$\tilde{\Phi}$ and $U$-fields is minus the unit matrix. Because in terms of
the full Green function $G$, the particle number density reads
\begin{equation}
\bar{n} = \frac{i}{2} \, {\rm tr} \int_k G (k),
\end{equation}
we conclude that the first diagram in Fig.\ \ref{fig:effective} stands
for $-\bar{n} U$. The bar over $n$ is to indicate that the particle
number density obtained in this way is a constant, representing the
density of the uniform system with $U(x)$ set to zero. The second
diagram without the wiggly lines denotes $i$ times the (0 0)-component
of the {\it full} polarization tensor, $\Pi_{0 0}$, at zero energy
transfer and low momentum ${\bf q}$,
\begin{equation} \label{eff:pi}
i \lim_{{\bf q} \rightarrow 0} \Pi_{0 0}(0,{\bf q}) = -\frac{1}{2}
\lim_{{\bf q} \rightarrow 0} {\rm tr} \int_k G \, \Gamma \, G \, (k_0,{\bf
k}+ {\bf q}).
\end{equation}
The factor $\tfrac{1}{2}$ is a symmetry factor which arises because the two
Bose lines are identical. We proceed by invoking an argument due to Gavoret
and Nozi\`eres \cite{GN} to relate the left-hand side of (\ref{eff:pi}) to
the sound velocity. By virtue of relation (\ref{bcs:defga}) between the
full Green function $G$ and the full interaction $\Gamma$, the (0
0)-component of the polarization tensor can be cast in the form
\begin{eqnarray} \label{bcs:cruc}
\lim_{{\bf q} \rightarrow 0} \Pi_{0 0} (0,{\bf q}) &=& - \frac{i}{2}
\lim_{{\bf q} \rightarrow 0} {\rm tr} \int_k G \, \frac{\partial
G^{-1}}{\partial \mu} \, G (k_0,{\bf k}+ {\bf q}) \nonumber \\ &=&
\frac{i}{2} \frac{\partial }{\partial \mu} \lim_{{\bf q} \rightarrow 0} {\rm
tr} \int_k G (k_0,{\bf k}+ {\bf q}) \nonumber \\ &=& \frac{\partial
\bar{n}}{\partial \mu} = - \frac{1}{V} \frac{\partial^2 \Omega}{\partial \mu^2},
\end{eqnarray}
where $\Omega$ is the thermodynamic potential and $V$ the volume of the
system. The right-hand side of (\ref{bcs:cruc}) is $\bar{n}^2 \kappa$, with
$\kappa$ the compressibility. Because it is related to the macroscopic
sound velocity $c$ via
\begin{equation}
\kappa = \frac{1}{m \bar{n} c^2},
\end{equation}
we conclude that the (0 0)-component of the full polarization tensor
satisfies the so-called compressibility sum rule of statistical
physics \cite{GN}
\begin{equation} \label{bec:rel}
\lim_{{\bf q} \rightarrow 0} \Pi_{0 0} (0,{\bf q}) = \bar{n}^2 \kappa =
\frac{\bar{n}}{m c^2}.
\end{equation}
Putting the pieces together, we infer that the diagrams in Fig.\
\ref{fig:effective} stand for the effective theory
\begin{equation} \label{eff:Leff}
{\cal L}_{{\rm eff}} = -\bar{n}\left[\partial_{0}\varphi +
\frac{1}{2m}( {\bf \nabla} \varphi)^{2} \right] + \frac{\bar{n}}{2m
c^{2}}\left[\partial_{0}\varphi + \frac{1}{2m}( {\bf
\nabla}\varphi)^{2}\right]^{2},
\end{equation}
where we recall that ${\bar n}$ is the particle number density of the
fluid at rest. The theory describes a nonrelativistic sound wave, with
the dimensionless phase field $\varphi$ representing the Goldstone
mode of the spontaneously broken global U(1) symmetry. It has the
gapless dispersion relation $E^2({\bf k}) = c^2 {\bf k}^2$. The
effective theory gives a complete description of the superfluid valid at
low energies and small momenta. The same effective theory appears in
the context of (neutral) superconductors \cite{effBCS} (see next
chapter) and also in that of classical hydrodynamics \cite{hydro}.
The chemical potential $\mu$ is represented in the effective theory
(\ref{eff:Leff}) by \cite{PWA}
\begin{equation} \label{jo-pwa}
\mu(x) = - \partial_0 \varphi(x),
\end{equation}
so that
\begin{equation}
\frac{\partial {\cal L}_{\rm eff}}{\partial \mu} = - \frac{\partial {\cal
L}_{\rm eff}}{\partial \partial_0 \varphi} = n(x),
\end{equation}
as required. It also follows from this equation that the particle
number density $n(x)$ is canonically conjugate to $-\varphi(x)$.
The most remarkable aspect of the effective theory (\ref{eff:Leff}) is that
it is nonlinear. The nonlinearity is necessary to provide a
Galilei-invariant description of a gapless mode, as required in a
nonrelativistic context. Under a Galilei boost,
\begin{equation} \label{boost}
t \rightarrow t' = t, \;\; {\bf x} \rightarrow {\bf x}' = {\bf x} -
{\bf u} t; \;\;\;\;
\partial_0 \rightarrow \partial_0' = \partial_0 + {\bf u} \cdot
\nabla, \;\; \nabla \rightarrow \nabla' = \nabla,
\end{equation}
with ${\bf u}$ a constant velocity, the Goldstone field $\varphi(x)$
transforms as
\begin{equation}
\frac{1}{m} \varphi(x) \rightarrow \frac{1}{m} \varphi'(x') =
\frac{1}{m} \varphi(x) - {\bf u} \cdot {\bf x} + \tfrac{1}{2} {\bf u}^2
t.
\end{equation}
As a result, the superfluid velocity ${\bf v}_{\rm s} = \nabla
\varphi/m$ and the chemical potential (per unit mass) $\mu/m = -
\partial_0 \varphi/m$ transform under a Galilei boost in the correct
way,
\begin{equation}
{\bf v}_{\rm s}(x) \rightarrow {\bf v}'_{\rm s}(x') = {\bf v}_{\rm s}(x)
- {\bf u}, \;\;\; \mu(x)/m \rightarrow \mu'(x')/m = \mu (x)/m - {\bf u}
\cdot {\bf v}_{\rm s}(x) + \tfrac{1}{2} {\bf u}^2.
\end{equation}
It is readily checked that the field $U(x)$ defined in (\ref{eff:U}) and
therefore the effective theory (\ref{eff:Leff}) is invariant under Galilei
boosts.
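Explicitly, the transformation rules above give
\begin{equation}
\partial_0' \varphi'(x') = \partial_0 \varphi(x) + {\bf u} \cdot \nabla
\varphi(x) - \tfrac{1}{2} m {\bf u}^2, \;\;\;\;\;
\frac{[\nabla' \varphi'(x')]^2}{2m} = \frac{[\nabla \varphi(x)]^2}{2m} -
{\bf u} \cdot \nabla \varphi(x) + \tfrac{1}{2} m {\bf u}^2,
\end{equation}
so that the boost-dependent terms cancel in the sum and $U'(x') = U(x)$.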
Since the Goldstone field in (\ref{eff:Leff}) is always accompanied by a
derivative, we see that the nonlinear terms carry additional factors of
$|{\bf k}|/mc$, with $|{\bf k}|$ the wave number. They can therefore be
ignored provided the wave number is smaller than the inverse coherence
length $\xi^{-1} = mc$,
\begin{equation} \label{zwd}
|{\bf k}| < 1/\xi.
\end{equation}
For example, in the case of $^4$He the coherence length, or Compton
wavelength, is about 10 nm. In this system, the bound (\ref{zwd}), below
which the nonlinear terms can be neglected, coincides with the region where
the spectrum is linear and the description solely in terms of a sound mode
is applicable.
The alert reader might be worrying about an apparent mismatch in the number
of degrees of freedom in the normal and the superfluid phase. Whereas the
normal phase is described by a complex $\phi$-field, the superfluid phase is
described by a real scalar field $\varphi$. The resolution of this paradox
lies in the spectrum of the modes \cite{Leutwyler}. In the normal phase,
the spectrum $E({\bf k}) = {\bf k}^2/2m$ is linear in $E$, so that only
positive energies appear in the Fourier decomposition, and one needs---as is
well known from standard quantum mechanics---a complex field to describe a
single particle. In the superfluid phase, where the spectrum, $E^2({\bf k})
= c^2 {\bf k}^2$, is quadratic in $E$, the counting goes differently. The
Fourier decomposition now contains positive as well as negative energies,
and a single real field suffices to describe this mode. In other words, although
the number of fields is different, the number of degrees of freedom is the
same in both phases.
The particle number density and current that follow from (\ref{eff:Leff})
read
\begin{eqnarray}
n(x) &=& \bar{n} -\frac{\bar{n}}{m c^{2}} \left\{ \partial_{0}
\varphi(x) + \frac{1}{2 m} [ {\bf \nabla} \varphi(x)]^{2}\right\}
\label{roh1} \\ {\bf j}(x) &=& n(x) {\bf v}_{\rm s}(x). \label{roh2}
\end{eqnarray}
Physically, (\ref{roh1}) reflects Bernoulli's principle, which states that
in regions of rapid flow the density, and therefore the pressure, is low.
The diagrams of Fig.~\ref{fig:effective} can be evaluated in a loop
expansion to obtain explicit expressions for the particle number density
$\bar{n}$ and the sound velocity $c$ to any given order \cite{effbos}. In
doing so, one encounters---apart from ultraviolet divergences which will be
dealt with shortly---also infrared divergences because the Bogoliubov
spectrum is gapless. When however all one-loop contributions are added
together, these divergences are seen to cancel \cite{effbos}. One finds for
$d=2$ to the one-loop order
\begin{equation} \label{eff:nc}
\bar{n} = \frac{1}{2} \frac{\mu}{\lambda}, \;\;\;
c^2 = 2 \frac{\lambda \bar{n}}{m},
\end{equation}
where $\mu$ and $\lambda$ are the renormalized parameters. Following
Ref.\
\cite{NP}, we adopted a dimensional regularization scheme, in which after
the integrals over the loop energies have been carried out, the remaining
integrals over the loop momenta are analytically continued to arbitrary
space dimensions $d$. As renormalization prescription we employed the
modified minimal subtraction scheme. This leads to the following relation
between the bare ($\lambda_0$) and renormalized coupling constant [see
Eq. (\ref{Vd=2}) below]
\begin{equation} \label{eff:lambdar}
\frac{1}{\lambda_0} = \frac{1}{\kappa^\epsilon} \left(\frac{1}{\hat{\lambda}} -
\frac{m}{\pi \epsilon}\right),
\end{equation}
where $\epsilon = 2-d$, and $\kappa$ is an arbitrary renormalization
group scale parameter introduced to give the renormalized coupling
constant $\hat{\lambda}$ the same engineering dimension as in $d=2$.
The chemical potential is not renormalized to this order.
Incidentally, from the vantage point of renormalization, the mass $m$ is an
irrelevant parameter in nonrelativistic theories and can be scaled away
(see, e.g., Ref.\ \cite{NP}).
The form of the effective theory (\ref{eff:Leff}) can also be derived from
general symmetry arguments \cite{GWW}. More specifically, it follows from
making the presence of a gapless Goldstone mode compatible with Galilei
invariance which demands that the mass current and the momentum density are
equal. The latter observation leads to the conclusion that the U(1)
Goldstone field $\varphi$ can only appear in the combination (\ref{eff:U}).
To obtain the required linear spectrum for the Goldstone mode it is
necessary then to have the form (\ref{eff:Leff}). Given the form of the
effective theory, the particle number density and sound velocity can then
more easily be obtained directly from the thermodynamic potential $\Omega$
via
\begin{equation} \label{bec:thermo}
\bar{n} = - \frac{1}{V} \frac{\partial \Omega }{\partial \mu}; \;\;\;\;\;\;
\frac{1}{c^2} = - \frac{1}{V} \frac{m}{\bar{n}} \frac{\partial^2 \Omega
}{\partial \mu^2},
\end{equation}
where $V$ is the volume of the system. In this approach, one only has to
calculate the thermodynamic potential which at zero temperature and in the
Bogoliubov approximation in which we are working is given by the sum ${\cal
V}$ of the classical potential ${\cal V}_0$ and the effective potential
${\cal V}_{\rm eff}$ corresponding to the theory (\ref{eff:Lagr}):
\begin{equation} \label{eff:Omega}
\Omega = \int_{\bf x} ({\cal V}_0 + {\cal V}_{\rm eff}),
\end{equation}
where ${\cal V}_0$ is given by (\ref{eff:V}) with $\phi$ replaced by
$\bar{\phi}$. The effective potential for the uniform system is obtained as
follows. In the Bogoliubov approximation, in which terms higher than second
order in the fields are ignored, the integration over $\tilde{\Phi}$ is Gaussian.
Carrying out this integral, we obtain for the zero-temperature partition
function
\begin{eqnarray} \label{bec:Z}
Z &=& {\rm e}^{-i \int_x {\cal V}_0} \int \mbox{D} \phi^* \mbox{D} \phi \exp
\left(i \int_x {\cal L}^{(2)} \right)
\nonumber \\ &=& {\rm e}^{- i \int_x {\cal V}_0} \, {\rm Det}^{-1/2}
(M_0),
\end{eqnarray}
where $M_0$ stands for the matrix introduced in (\ref{eff:M}). Setting
\begin{equation} \label{Zeff}
Z = \exp\left[i \left(-\int_x{\cal V}_0 + S_{\rm eff}\right)\right],
\end{equation}
we conclude from (\ref{bec:Z}) that the effective action in the Bogoliubov
approximation is given to the one-loop order by
\begin{equation} \label{bec:Seff}
S_{\rm eff} = \tfrac{1}{2} i {\rm Tr} \ln[M_0(p,x)],
\end{equation}
where we again used the identity Det($A$) = exp[Tr ln($A$)].
The trace Tr appearing here stands not only for the trace over discrete
indices, but also for the integral $\int_x$ over spacetime as well as the one
$\int_k$ over energy and momentum. The latter integral reflects the fact
that the effective action calculated here is a one-loop result with $k_\mu$
the loop energy and momentum. To disentangle the integrals one has to carry
out similar steps as the ones outlined in Sec.\ \ref{sec:der} and repeatedly
apply the identity
\begin{equation}
f(x) p_\mu g(x) = (p_\mu - i \tilde{\partial}_\mu) f(x) g(x),
\end{equation}
where $f(x)$ and $g(x)$ are arbitrary functions of spacetime and the
derivative $\tilde{\partial}_\mu = (\partial_0,-\nabla)$ acts only on the
next object to the right. The method outlined there can easily be
transcribed to the present case where the time dimension is included.
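The identity ${\rm Det}(A) = \exp[{\rm Tr}\ln(A)]$ invoked above can be illustrated numerically in a finite-dimensional setting. The following Python sketch (purely illustrative; the positive-definite test matrix is an arbitrary choice, not something taken from the text) uses numpy and scipy:

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(0)

# An arbitrary positive-definite matrix, so that ln(A) is well defined
B = rng.standard_normal((3, 3))
A = B @ B.T + 3 * np.eye(3)

lhs = np.linalg.det(A)
rhs = np.exp(np.trace(logm(A)))

print(lhs, rhs)   # the two numbers agree
```

For positive-definite matrices the matrix logarithm is real, so no branch ambiguities arise; the same identity underlies the one-loop expression (\ref{bec:Seff}).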
If the field $U(x)$ in $M_0$ is set to zero, things simplify because $M_0$
now depends on $p_\mu$ only. The effective action then becomes $S_{\rm eff}
= - \int_x {\cal V}_{\rm eff}$ with
\begin{equation} \label{eff:Veff}
{\cal V}_{\rm eff} = -\frac{i}{2} {\rm tr} \int_k \ln[M_0(k)]
\end{equation}
the effective potential. The easiest way to evaluate the integral over
the loop variable $k$ is to first differentiate the expression with
respect to the chemical potential $\mu_0$:
\begin{equation}
\frac{\partial}{\partial \mu_0} {\rm tr} \, \int_k \ln[M_0(k)] = -2 \,
\int_k \frac{\epsilon({\bf k})}{k_0^2 - E^2({\bf k}) + i \eta },
\end{equation}
with $E({\bf k})$ the Bogoliubov spectrum (\ref{eff:bogo}). The
integral over $k_0$ can be carried out with the help of a contour
integration, yielding
\begin{equation}
\int_k \frac{\epsilon({\bf k})}{k_0^2 - E^2({\bf k}) + i \eta } =
- \frac{i}{2} \, \int_{\bf k} \frac{\epsilon({\bf k})}{E({\bf k})}.
\end{equation}
This in turn is easily integrated with respect to $\mu_0$. Putting the
pieces together, we obtain
\begin{equation} \label{Veff}
{\cal V} = - \frac{\mu_0^2}{4 \lambda_0} + \frac{1}{2} \int_{\bf
k} E({\bf k}).
\end{equation}
The integral over the loop momentum in arbitrary space dimension $d$
yields
\begin{equation} \label{regularized}
{\cal V} = - \frac{\mu_0^2}{4 \lambda_0} - L_d m^{d/2} \mu_0^{d/2 + 1},
\;\;\; L_d = \frac{\Gamma(1-d/2) \Gamma(d/2 + 1/2)}{2 \pi^{d/2 + 1/2}
\Gamma(d/2+2)}
\end{equation}
where we employed the integral representation of the Gamma function
\begin{equation} \label{gamma}
\frac{1}{a^z} = \frac{1}{\Gamma(z)} \int_0^\infty \frac{\mbox{d} \tau}{\tau}
\tau^z {\rm e}^{-a \tau}
\end{equation}
together with dimensional regularization to suppress irrelevant ultraviolet
divergences.
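The contour integration above can be checked via the residue theorem: the $+i\eta$ prescription places the pole at $k_0 = +E$ below the real axis, so closing the contour clockwise in the lower half-plane picks up exactly this pole. A sympy sketch (symbol names illustrative):

```python
import sympy as sp

k0, E, eps = sp.symbols('k_0 E epsilon', positive=True)

f = eps / (k0**2 - E**2)

# The +i*eta prescription shifts the pole at k0 = +E below the real axis;
# closing the contour clockwise in the lower half-plane picks up this pole.
res = sp.residue(f, k0, E)                        # epsilon/(2*E)
integral = -2 * sp.pi * sp.I * res / (2 * sp.pi)  # clockwise => extra minus sign

print(integral)   # -I*epsilon/(2*E)
```

The factor $1/2\pi$ accounts for the measure of the $k_0$ integration, reproducing $-\tfrac{i}{2}\,\epsilon({\bf k})/E({\bf k})$ per mode.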
For comparison, let us also evaluate the integral in (\ref{Veff}) over the
loop momentum in three dimensions by introducing a momentum cutoff
$\Lambda$
\begin{eqnarray} \label{bec:Vnon}
\lefteqn{{\cal V}_{\rm eff} = \frac{1}{2} \int_{\bf k} E({\bf k}) =
\frac{1}{4 \pi^2} \int_0^\Lambda \mbox{d} k \, k^2
E(k) = } \nonumber \\ && \frac{1}{4 \pi^2} \left(\frac{1}{10}
\frac{\Lambda^5}{m} +\frac{1}{3} \mu_0 \Lambda^3 - m \mu_0^2 \Lambda +
\frac{32}{15} m^{3/2} \mu_0^{5/2} \right) + {\cal O}
\left(\frac{1}{\Lambda}\right).
\end{eqnarray}
Setting $d=3$ in (\ref{regularized}), we obtain only the finite part:
dimensional regularization suppresses all terms diverging with a strictly
positive power of the momentum cutoff. As we remarked in Sec.\ \ref{sec:der}, these
contributions, which come from the ultraviolet region, cannot be physically
very relevant because the simple model (\ref{eff:Lagr}) breaks down here.
On account of the uncertainty principle, stating that large momenta
correspond to small distances, these terms are always local and can be
absorbed by redefining the parameters appearing in the Lagrangian
\cite{Donoghue}. Since $\mu_0 = 2 \lambda_0 |\bar{\phi}|^2$, we see that
the first diverging term in (\ref{bec:Vnon}) is an irrelevant constant,
while the two remaining diverging terms can be absorbed by introducing the
renormalized parameters
\begin{eqnarray}
\mu &=& \mu_0 - \frac{1}{6\pi^2} \lambda_0 \Lambda^3 \label{bec:renmu} \\
\lambda &=& \lambda_0 - \frac{1}{\pi^2} m \lambda_0^2
\Lambda. \label{bec:renla}
\end{eqnarray}
Because the diverging terms are---at least to this order---of a form already
present in the original Lagrangian, the theory is called ``renormalizable''.
The renormalized parameters are the physical ones that are to be identified
with those measured in experiment. In this way, we see that the
contributions to the loop integral stemming from the ultraviolet region are
of no importance. What remains is the finite part
\begin{equation} \label{bec:finite}
{\cal V}_{\rm eff} = \frac{8}{15 \pi^2} m^{3/2} \mu_0^{5/2},
\end{equation}
which, as we have seen, is obtained directly without
renormalization when using dimensional regularization. In this scheme,
divergences proportional to powers of the cutoff never show up. Only
logarithmic divergences appear as $1/\epsilon$ poles, where $\epsilon$
is the deviation from the upper critical dimension ($d=2$ in the present
case). These logarithmic divergences $\ln(\Lambda/E)$, with $E$ an
energy scale, are relevant also in the infrared because for fixed cutoff
$\ln(\Lambda/E) \rightarrow -\infty$ when $E$ is taken to zero.
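That dimensional regularization directly retains this finite part can be verified by evaluating the coefficient $L_d$ of (\ref{regularized}) at $d=3$. A short sympy computation (symbols illustrative):

```python
import sympy as sp

d = sp.symbols('d')
m, mu0 = sp.symbols('m mu_0', positive=True)

L_d = (sp.gamma(1 - d/2) * sp.gamma(d/2 + sp.Rational(1, 2))
       / (2 * sp.pi**(d/2 + sp.Rational(1, 2)) * sp.gamma(d/2 + 2)))

L3 = sp.simplify(L_d.subs(d, 3))
print(L3)                                   # -8/(15*pi**2)

# One-loop term of (regularized) at d=3 reproduces (bec:finite)
Veff = -L3 * m**sp.Rational(3, 2) * mu0**sp.Rational(5, 2)
print(Veff)                                 # 8*m**(3/2)*mu_0**(5/2)/(15*pi**2)
```

The one-loop term of (\ref{regularized}) at $d=3$ thus coincides with (\ref{bec:finite}) term by term.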
In so-called ``nonrenormalizable'' theories, the ultraviolet-diverging terms
are still local but not of a form present in the original Lagrangian.
Whereas in former days such theories were rejected because of their supposed
lack of predictive power, the modern view is that there are no fundamental
theories and that there is no basic difference between renormalizable and
nonrenormalizable theories \cite{CaoSc}. Even a renormalizable theory like
(\ref{eff:Lagr}) should be extended to include all higher-order terms such
as a $|\phi|^6$-term which are allowed by symmetry. These additional terms
render the theory ``nonrenormalizable''. This does not however change the
predictive power of the theory. The point is that when describing the
physics at an energy scale $E$ far below the cutoff, the higher-order terms
are suppressed by powers of $E/\Lambda$, as follows from dimensional
analysis. Therefore, far below the cutoff, the nonrenormalizable terms are
negligible.
That $d=2$ is the upper critical dimension of the problem at hand can be
seen by noting that $L_d$ in (\ref{regularized}) diverges when $d$ tends to
2. Special care has to be taken for this case. For $d \neq 2$, we obtain
with the help of (\ref{bec:thermo})
\cite{Weichman}
\begin{equation} \label{bec:n}
\bar{n} = \frac{\mu_0}{2 \lambda_0} \left[1 + (d + 2) L_d m^{d/2} \lambda_0
\mu_0^{d/2-1}\right]
\end{equation}
and
\begin{equation}
c^2 = \frac{\mu_0}{m} \left[1 - (d-2) (d/2+1) L_d
m^{d/2}\lambda_0 \mu_0^{d/2-1} \right],
\end{equation}
where to arrive at the last equation an expansion in the coupling constant
$\lambda_0$ is made. Up to this point, we have considered the chemical
potential to be the independent parameter, thereby assuming the presence of
a reservoir that can freely exchange particles with the system under study.
The system can thus have any number of particles, only the average number is
fixed by external conditions. From the experimental point of view it is,
however, often more realistic to consider the particle number fixed. If
this is the case, the particle number density $\bar{n}$ should be considered
as independent variable and the chemical potential should be expressed in
terms of it. This can be achieved by inverting relation (\ref{bec:n}):
\begin{equation}
\mu_0 = 2 \lambda_0 \bar{n} \left[1 - 2 (d-2) (d/2+1) L_d
m^{d/2} \lambda_0 (2 \lambda_0 \bar{n})^{d/2-1} \right].
\end{equation}
The sound velocity expressed in terms of the particle number density reads
\begin{equation} \label{bec:c}
c^2 = \frac{2 \lambda_0 \bar{n}}{m} \left[1 - d (d/2+1)
L_d m^{d/2} \lambda_0 (2 \lambda_0 \bar{n})^{d/2-1} \right].
\end{equation}
These formulas reproduce the known results in $d=3$ \cite{FW} and $d=1$
\cite{Lieb}.
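The route from (\ref{regularized}) to (\ref{bec:n}) is a one-line differentiation and can be reproduced symbolically; the following sympy sketch (symbol names illustrative) checks that $-\partial {\cal V}/\partial \mu_0$ matches the quoted coefficient:

```python
import sympy as sp

d = sp.symbols('d')
mu0, lam0, m = sp.symbols('mu_0 lambda_0 m', positive=True)

L_d = (sp.gamma(1 - d/2) * sp.gamma(d/2 + sp.Rational(1, 2))
       / (2 * sp.pi**(d/2 + sp.Rational(1, 2)) * sp.gamma(d/2 + 2)))

# One-loop potential (regularized)
V = -mu0**2 / (4 * lam0) - L_d * m**(d/2) * mu0**(d/2 + 1)

# nbar = -dV/dmu0, compared with the coefficient quoted in (bec:n)
nbar = -sp.diff(V, mu0)
target = mu0 / (2 * lam0) * (1 + (d + 2) * L_d * m**(d/2) * lam0 * mu0**(d/2 - 1))

print(sp.simplify(sp.expand(nbar - target)))   # 0
```

The factor $(d+2)$ in (\ref{bec:n}) is simply $2(d/2+1)$, the exponent brought down by the differentiation, times the $2\lambda_0/\mu_0$ extracted in front.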
To investigate the case $d=2$, we expand the potential (\ref{regularized})
around $d=2$:
\begin{equation} \label{Vd=2}
{\cal V} = - \frac{\mu_0^2}{4 \lambda_0} - \frac{1}{4
\pi \epsilon} \frac{m \mu_0^2}{\kappa^\epsilon} + {\cal O}(\epsilon^0),
\end{equation}
with $\epsilon = 2-d$. This expression is seen to diverge in the limit $d
\rightarrow 2$. The theory can be rendered finite by
introducing a renormalized coupling constant via (\ref{eff:lambdar}).
We also see that the chemical potential is not renormalized to this order.
The beta function $\beta(\hat{\lambda})$ follows as
\cite{Uzunov}
\begin{equation}
\beta(\hat{\lambda}) = \kappa \left. \frac{\partial \hat{\lambda}}{\partial
\kappa} \right|_{\lambda_0} = -\epsilon \hat{\lambda} + \frac{m}{\pi}
\hat{\lambda}^2.
\end{equation}
In the upper critical dimension, this yields only one fixed point, viz.\
the infrared-stable (IR) fixed point $\hat{\lambda}^* = 0$. Below
$d=2$, this point is shifted to $\hat{\lambda}^* = \epsilon \pi/m$. It
is now easily checked that Eqs.\ (\ref{bec:n}) and (\ref{bec:c})
also reproduce the two-dimensional results (\ref{eff:nc}).
In the one-loop approximation there is no field renormalization; this is the
reason why in (\ref{eff:Lagr}) we gave only the bare parameters $\mu_0$ and
$\lambda_0$ an index 0, and not $\phi$.
We proceed by calculating the fraction of particles residing in the
condensate. In deriving the Bogoliubov spectrum (\ref{eff:bogo}), we
set $|\bar{\phi}|^2 = \mu_0/2 \lambda_0$ thereby fixing the number
density of particles contained in the condensate,
\begin{equation} \label{bec:n0}
\bar{n}_0 = |\bar{\phi}|^2,
\end{equation}
in terms of the chemical potential. For our present consideration we
have to keep $\bar{\phi}$ as independent variable. The spectrum of the
elementary excitation expressed in terms of $\bar{\phi}$ is
\begin{equation} \label{bec:bogog}
E({\bf k}) = \sqrt{\bigl[ \epsilon({\bf k}) - \mu_0 + 4 \lambda_0
|\bar{\phi}|^2 \bigr]^2 - 4 \lambda_0^2 |\bar{\phi}|^4 } \, .
\end{equation}
It reduces to the Bogoliubov spectrum when the mean-field value
(\ref{eff:min}) for $\bar{\phi}$ is inserted. Equation (\ref{eff:Veff})
for the effective potential is still valid, and so is
(\ref{eff:Omega}). We thus obtain for the particle number density
\begin{equation}
\bar{n} = \left. |\bar{\phi}|^2 - \frac{1}{2} \frac{\partial}{\partial \mu_0}
\int_{\bf k} E ({\bf k}) \right|_{|\bar{\phi}|^2 = \mu_0/2 \lambda_0},
\end{equation}
where the mean-field value for $\bar{\phi}$ is to be substituted after the
differentiation with respect to the chemical potential has been carried out.
We find
\begin{equation}
\bar{n} = |\bar{\phi}|^2 - 2^{d/2-2} \frac{d^2-4}{d-1} L_d m^{d/2}
\lambda_0^{d/2} |\bar{\phi}|^d
\end{equation}
or for the so-called depletion of the condensate \cite{TN}
\begin{equation} \label{depl}
\frac{\bar{n}}{\bar{n}_0} -1 \approx - 2^{d/2-2} \frac{d^2-4}{d-1} L_d m^{d/2}
\lambda^{d/2} n^{d/2-1},
\end{equation}
where in the last term we replaced the bare coupling constant with the
(one-loop) renormalized one. This is consistent to this
order since this term is already a one-loop result. Equation
(\ref{depl}) shows that even at zero temperature not all the particles
reside in the condensate. Due to the interparticle repulsion, particles
are removed from the zero-momentum ground state and put in states of
finite momentum. It has been estimated that in bulk superfluid
$^4$He---a strongly interacting system---only about 8\% of the particles
condense in the zero-momentum state \cite{PeOn}. For $d=2$, the
right-hand side of Eq.\ (\ref{depl}) reduces to
\begin{equation}
\frac{\bar{n}}{\bar{n}_0} -1 \approx \frac{m \lambda}{2 \pi},
\end{equation}
which is seen to be independent of the particle number density.
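The $d \rightarrow 2$ limit of (\ref{depl}) involves the cancellation of the zero of $d^2-4$ against the pole of $L_d$; a sympy sketch (symbols illustrative) confirms the quoted result:

```python
import sympy as sp

d = sp.symbols('d')
m, lam, n = sp.symbols('m lambda n', positive=True)

L_d = (sp.gamma(1 - d/2) * sp.gamma(d/2 + sp.Rational(1, 2))
       / (2 * sp.pi**(d/2 + sp.Rational(1, 2)) * sp.gamma(d/2 + 2)))

# Right-hand side of (depl): the zero of (d**2 - 4) at d = 2 cancels
# the pole of gamma(1 - d/2) contained in L_d
depl = (-2**(d/2 - 2) * (d**2 - 4) / (d - 1)
        * L_d * m**(d/2) * lam**(d/2) * n**(d/2 - 1))

print(sp.limit(depl, d, 2))   # lambda*m/(2*pi)
```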
Despite the fact that not all the particles reside in the condensate,
they all participate in the superfluid motion at zero temperature
\cite{NoPi}. Apparently, the condensate drags the normal fluid along
with it. To show this, let us assume that the entire system moves with
a velocity ${\bf u}$ relative to the laboratory system. As is known
from standard hydrodynamics, the time derivative in the frame following the
motion of the fluid is $\partial_0 + {\bf u} \cdot \nabla$ [see Eq.\
(\ref{boost})]. If we insert this in the Lagrangian
(\ref{eff:Lagr}) of the interacting Bose gas, it becomes
\begin{equation} \label{bec:Lagu}
{\cal L} = \phi^* [i \partial_0 - \epsilon(-i \nabla) + \mu_0 - {\bf u}
\cdot (-i \nabla)] \phi - \lambda_0 |\phi|^4,
\end{equation}
where the extra term features the total momentum $\int_{\bf x}
\phi^* (-i \nabla) \phi$ of the system. The velocity $-{\bf u}$
multiplying this is on the same footing as the chemical potential $\mu_0$
multiplying the particle number $\int_{\bf x} |\phi|^2$. Whereas
$\mu_0$ is associated with particle number conservation, ${\bf u}$ is
related to the conservation of momentum.
In the two-fluid picture, the condensate can move with a velocity
${\bf v}_{\rm s}$ different from that of the rest of the system. To
bring this out we introduce
new fields, cf.\ (\ref{eff:newfields})
\begin{equation}
\phi (x) \rightarrow \phi'(x) = {\rm e}^{im {\bf v}_{\rm s} \cdot {\bf x}}
\phi (x)
\end{equation}
in terms of which the Lagrangian becomes \cite{Brown}
\begin{equation} \label{bec:Lagus}
{\cal L} = \phi^* \bigl[i\partial_0 - \epsilon(-i \nabla) + \mu_0 -
\tfrac{1}{2} m {\bf v}_{\rm s} \cdot ({\bf v}_{\rm s} - 2 {\bf u}) -
({\bf u} - {\bf v}_{\rm s}) \cdot (-i\nabla) \bigr] \phi - \lambda_0
|\phi|^4,
\end{equation}
where we dropped the primes on $\phi$ again. Both velocities appear in
this expression. Apart from the change ${\bf u} \rightarrow {\bf u} -
{\bf v}_{\rm s}$ in the second last term, the field transformation
resulted in a change of the chemical potential
\begin{equation} \label{bec:mureplacement}
\mu_0 \rightarrow \mu_{\rm eff} :=
\mu_0 - \tfrac{1}{2} m {\bf v}_{\rm s} \cdot ({\bf v}_{\rm s} - 2 {\bf u})
\end{equation}
where $\mu_{\rm eff}$ may be considered as an effective chemical potential.
The equations for the Bogoliubov spectrum and the thermodynamic
potential are readily written down for the present case once these two
changes are kept in mind. In particular, the effective potential is
given by (\ref{Veff}) with the replacement Eq.\
(\ref{bec:mureplacement}). The momentum density, or equivalently, the
mass current ${\bf g}$ of the system is obtained in this approximation
by differentiating the effective potential with respect to $-{\bf u}$.
We find, using the equation
\begin{equation}
\frac{\partial \mu_{\rm eff}}{\partial {\bf u}} = m {\bf v}_{\rm s}
\end{equation}
that it is given by
\begin{equation} \label{bec:j}
{\bf g} = \rho_{\rm s} {\bf v}_{\rm s} ,
\end{equation}
with $\rho_{\rm s} = m \bar{n}$ the superfluid mass density. This
equation, comprising the total particle number density $\bar{n}$, shows
that at zero temperature indeed all the particles are involved in the
superflow, despite the fact that only a fraction of them resides in the
condensate \cite{NoPi}. The superfluid mass density $\rho_{\rm s}$,
obtained by evaluating the response of the system to an externally
imposed velocity field ${\bf u}$, should not be confused with the number
density $\bar{n}_0$ of particles contained in the condensate introduced in
Eq.\ (\ref{bec:n0}).
Let us close this section by pointing out a quick trail to arrive at the
effective theory (\ref{eff:Leff}) starting from the microscopic model
(\ref{eff:Lagr}). To this end we set
\begin{equation}
\phi(x) = {\rm e}^{i \varphi(x)} \, [\sqrt{\bar{n}} + \tilde{\phi}(x)],
\end{equation}
and expand the Lagrangian (\ref{eff:Lagr}) up to quadratic terms in
$\tilde{\phi}$. This leads to
\begin{equation}
{\cal L}^{(2)} = - {\cal V}_0 - \bar{n} U - \sqrt{\bar{n}} U
(\tilde{\phi} +
\tilde{\phi}^*) - \lambda_0 \bar{n} (\tilde{\phi} + \tilde{\phi}^*)^2,
\end{equation}
where we used the mean-field equation $\mu_0 = 2 \lambda_0 \bar{n}$. We next
integrate out the tilde fields---which is tantamount to substituting the
field equation for these fields back into the Lagrangian---to obtain
\begin{equation} \label{eff:quick}
{\cal L}_{\rm eff} = - \bar{n} U(x) + \frac{1}{4} U(x) \frac{1}{\lambda_0}
U(x),
\end{equation}
apart from the irrelevant constant term ${\cal V}_0$. This form of the
effective theory is equivalent to the one found before in (\ref{eff:Leff}).
We have cast the last term in a form that can be easily generalized to
systems with long-ranged interactions. A case of particular interest to us
is the Coulomb potential
\begin{equation}
V({\bf x}) = \frac{e_0^2}{|{\bf x}|},
\end{equation}
whose Fourier transform in $d$ space dimensions reads
\begin{equation}
V({\bf k}) = 2^{d-1} \pi^{(d-1)/2} \Gamma\left[\tfrac{1}{2}(d-1)\right]
\frac{e_0^2}{|{\bf k}|^{d-1}}.
\end{equation}
The simple contact interaction $L_{\rm i} = - \lambda_0 \int_{\bf x}
|\phi(x)|^4$ in (\ref{eff:Lagr}) gets now replaced by
\begin{equation}
L_{\rm i} = - \frac{1}{2} \int_{{\bf x}, {\bf y}} |\phi(t,{\bf x})|^2
V({\bf x} - {\bf y}) |\phi(t,{\bf y})|^2.
\end{equation}
The rationale for using the three-dimensional Coulomb potential even when
considering charges confined to move in a lower dimensional space is that
the electromagnetic interaction remains three-dimensional. The effective
theory (\ref{eff:quick}) now becomes in the Fourier representation
\begin{equation} \label{effCoul}
{\cal L}_{\rm eff} = - \bar{n} U(k) + \frac{1}{2} U(k_0,{\bf k})
\frac{1}{V({\bf k})} U(k_0,-{\bf k})
\end{equation}
and leads to the dispersion relation
\begin{equation}
E^2({\bf k}) = 2^{d-1} \pi^{(d-1)/2} \Gamma\left[\tfrac{1}{2}(d-1)\right]
\frac{\bar{n} e_0^2}{m} |{\bf k}|^{3-d}.
\end{equation}
For $d=3$, this yields the famous plasma mode with an energy gap given by
the plasma frequency $\omega_{\rm p}^2 = 4 \pi \bar{n} e_0^2/m$.
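The dispersion follows from the quadratic part of (\ref{effCoul}), which gives $E^2({\bf k}) = (\bar{n}/m) V({\bf k}) {\bf k}^2$. A sympy sketch (symbols illustrative) checks the $d=3$ and $d=2$ special cases:

```python
import sympy as sp

d = sp.symbols('d')
e0, k, n, m = sp.symbols('e_0 k n m', positive=True)

# Fourier transform of the Coulomb potential in d space dimensions
V = 2**(d - 1) * sp.pi**((d - 1) / 2) * sp.gamma((d - 1) / 2) * e0**2 / k**(d - 1)

# Dispersion following from (effCoul): E^2 = (n/m) V(k) k^2
E2 = n / m * V * k**2

print(sp.simplify(V.subs(d, 3)))    # 4*pi*e_0**2/k**2
print(sp.simplify(E2.subs(d, 3)))   # 4*pi*e_0**2*n/m: the plasma frequency squared
print(sp.simplify(E2.subs(d, 2)))   # 2*pi*e_0**2*k*n/m: the two-dimensional plasmon
```

At $d=2$ the gap closes and the mode disperses as $E \propto \sqrt{|{\bf k}|}$, the familiar two-dimensional plasmon.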
To appreciate under which circumstances the Coulomb interaction becomes
important, we note that for electronic systems $1/|{\bf x}| \sim k_{\rm
F}$ for dimensional reasons and the fermion number density $\bar{n} \sim
k_{\rm F}^d$, where $k_{\rm F}$ is the Fermi momentum. The ratio of the
Coulomb interaction energy $\epsilon_{\rm C}$ to the Fermi energy
$\epsilon_{\rm F} = k_{\rm F}^2/2m$ is therefore proportional to
$\bar{n}^{-1/d}$. This means that the lower the electron number
density, the more important the Coulomb interaction becomes.
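This scaling argument can be made explicit; dropping all prefactors of order unity (the sketch below is purely dimensional, with $k_{\rm F} \sim \bar{n}^{1/d}$ as the only input):

```python
import sympy as sp

d, n = sp.symbols('d n', positive=True)

kF = n**(sp.S(1) / d)   # Fermi momentum: k_F ~ n^(1/d)
eps_C = kF              # Coulomb energy: e_0^2/|x| ~ e_0^2 k_F (prefactor dropped)
eps_F = kF**2           # Fermi energy: k_F^2/2m (prefactor dropped)

ratio = sp.simplify(eps_C / eps_F)
print(ratio)            # n**(-1/d)
```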
\section{Quenched Impurities}
In most of the quantum systems we will be considering, impurities play
an important role. The main effect of impurities is typically to
localize states. Localization counteracts the tendency of the system to
become superfluid. We shall therefore now include impurities in the
interacting Bose gas to see whether this leads to localization and
whether the system still has a superfluid phase. It is expected that on
increasing the strength of the disorder for a given repulsive
interparticle interaction, the superfluid undergoes a zero-temperature
phase transition to an insulating phase of localized states. The
location and nature of this transition will be the subject of
Ch. \ref{chap:qpt}.
We shall assume that the impurities are fixed and that their
distribution is not affected by the host system. Impurities of this type
are called quenched, to be distinguished from so-called annealed
impurities, whose distribution changes with and depends on the host
system. To account for impurities, we add to the theory
(\ref{eff:Lagr}) the term
\begin{equation} \label{Dirt:dis}
{\cal L}_{\Delta} = \psi({\bf x}) \, |\phi(x)|^2,
\end{equation}
with $\psi({\bf x})$ a random field whose distribution is assumed to be
Gaussian \cite{Ma}
\begin{equation} \label{random}
P(\psi) = \exp \left[-\frac{1}{\Delta_0} \int_{\bf x} \, \psi^2({\bf x})
\right],
\end{equation}
and characterized by the disorder strength $\Delta_0$. The engineering
dimension of the random field is the same as that of the chemical
potential which is one, $[\psi]=1$, while that of the parameter
$\Delta_0$ is $[\Delta_0] = 2-d$ so that the exponent in (\ref{random})
is dimensionless. Since $\psi({\bf x})$ depends only on the $d$ spatial
dimensions, the impurities it describes should be considered as grains
randomly distributed in space. The quantity
\begin{equation} \label{Dirt:Z}
Z[\psi] = \int \mbox{D} \phi^* \mbox{D} \phi \, \exp\left(i \int_x \, {\cal L}
\right),
\end{equation}
where now ${\cal L}$ stands for the Lagrangian (\ref{eff:Lagr}) with the
term (\ref{Dirt:dis}) added, is the zero-temperature partition function
for a given impurity configuration $\psi$. In the case of quenched
impurities, the average of an observable $O(\phi^*,\phi)$ is obtained as
follows
\begin{equation}
\langle O(\phi^*,\phi) \rangle = \int \mbox{D} \psi P(\psi) \langle
O(\phi^*,\phi) \rangle_\psi,
\end{equation}
where $\langle O(\phi^*,\phi) \rangle_\psi$ indicates the
grand-canonical average for a given impurity configuration. In other
words, first the ensemble average is taken, and only after that the
averaging over the random field is carried out.
In terms of the shifted field, the added term reads
\begin{equation}
{\cal L}_{\Delta} = \psi({\bf x}) (|\bar{\phi}|^2 + |\tilde{\phi}|^2 +
\bar{\phi} \tilde{\phi}^* + \bar{\phi}^* \tilde{\phi} ).
\end{equation}
The first two terms lead to an irrelevant change in the chemical
potential, so that we only have to consider the last two terms, which we can
cast in the form
\begin{equation}
{\cal L}_{\Delta} = \psi({\bf x}) \, \bar{\Phi}^\dagger \tilde{\Phi},
\;\;\;\;\;\;\;
\bar{\Phi} = \left(\begin{array}{l} \bar{\phi} \\ \bar{\phi}^*
\end{array} \right).
\end{equation}
The integral over $\tilde{\Phi}$ is Gaussian in the Bogoliubov
approximation and is easily performed to yield an additional term to the
effective action
\begin{equation}
S_{\Delta} = -\frac{1}{2} \int_{x,y} \psi({\bf x}) \bar{\Phi}^\dagger \, G_0(x-y)
\bar{\Phi} \psi({\bf y}),
\end{equation}
where the propagator $G_0$ is the inverse of the matrix $M_0$ introduced
in (\ref{eff:M}) with the field $U(x)$ set to zero. Let us
first Fourier transform the fields,
\begin{eqnarray}
G_0(x-y) &=& \int_k {\rm e}^{-i k \cdot (x-y)} \, G_0(k) \\
\psi({\bf x}) &=& \int_{\bf k} {\rm e}^{i {\bf k} \cdot {\bf x}} \psi({\bf k}).
\end{eqnarray}
The contribution to the effective action then appears in the form
\begin{equation} \label{S_d}
S_{\Delta} = -\frac{1}{2} \int_{\bf k} |\psi({\bf k})|^2
\bar{\Phi}^\dagger G_0(0,{\bf k}) \bar{\Phi}.
\end{equation}
Since the random field is Gaussian distributed [see (\ref{random})], the
average over this field representing quenched impurities yields,
\begin{equation}
\langle |\psi({\bf k})|^2 \rangle = \tfrac{1}{2} V \Delta_0.
\end{equation}
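The finite-dimensional analog of this Gaussian average is readily checked: for a single variable distributed as $P(\psi) \propto \exp(-\psi^2/\Delta_0)$, the second moment is $\Delta_0/2$. A sympy sketch:

```python
import sympy as sp

psi = sp.symbols('psi', real=True)
Delta = sp.symbols('Delta_0', positive=True)

w = sp.exp(-psi**2 / Delta)
norm = sp.integrate(w, (psi, -sp.oo, sp.oo))                    # sqrt(pi*Delta_0)
mom2 = sp.integrate(psi**2 * w, (psi, -sp.oo, sp.oo)) / norm    # second moment

print(sp.simplify(mom2))   # Delta_0/2
```

In the functional case, the volume factor $V$ arises from the delta function $\delta({\bf k} - {\bf k}')$ evaluated at coinciding momenta.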
The remaining integral over the loop momentum in (\ref{S_d}) is readily
carried out to yield
\begin{equation} \label{L_D}
\langle {\cal L}_\Delta \rangle = \frac{1}{2} \Gamma(1-d/2)
\left(\frac{m}{2 \pi} \right)^{d/2} |\bar{\phi}|^2 (6 \lambda_0
|\bar{\phi}|^2 - \mu_0)^{d/2-1} \Delta_0.
\end{equation}
This contribution is seen to diverge in the limit $d \rightarrow 2$:
\begin{equation} \label{L2_D}
\langle {\cal L}_\Delta \rangle = \frac{1}{4 \pi} \frac{m
\mu_0}{\lambda_0 \kappa^\epsilon} \frac{\Delta_0}{\epsilon},
\end{equation}
where we substituted the mean-field value $\mu_0 = 2 \lambda_0
|\bar{\phi}|^2$. Recall that $\kappa$ is an arbitrary scale parameter
introduced for dimensional reasons; the engineering dimension of the
right-hand side in (\ref{L2_D}) has the correct value $3 - \epsilon$ this
way. The result (\ref{L2_D}) is a first indication of the importance of
impurities in $d=2$, showing that in order to render the random theory finite
a modified renormalized coupling constant $\hat{\lambda}$ has to be
introduced via, cf.\ (\ref{eff:lambdar}),
\begin{equation}
\frac{1}{\lambda_0} = \frac{1}{\kappa^\epsilon}
\left[ \frac{1}{\hat{\lambda}} -
\frac{m}{\pi\epsilon} \left(1 - \frac{\hat{\Delta}}{\mu \hat{\lambda}}
\right) \right],
\end{equation}
which depends on the disorder strength. The renormalized parameter
$\hat{\Delta}$ is defined in the same way as $\hat{\lambda}$.
In the previous section we saw that due to the interparticle repulsion,
not all the particles reside in the condensate. We expect that the
random field causes an additional depletion of the condensate. To
obtain this, we differentiate (\ref{L_D}) with respect to the chemical
potential. This gives \cite{pla}
\begin{equation} \label{depDelta}
\bar{n}_\Delta = \frac{\partial \langle {\cal L}_\Delta \rangle}{\partial \mu} =
\frac{2^{d/2-5}\Gamma(2-d/2)}{\pi^{d/2}} m^{d/2} \lambda^{d/2-2}
\bar{n}_0^{d/2-1} \Delta,
\end{equation}
where $\bar{n}_0$ denotes the density of particles residing in the
condensate. We have here again replaced the bare parameters with the
(one-loop) renormalized ones. This is consistent to this order since
(\ref{depDelta}) is already a one-loop result.
The divergence in the limit $\lambda \rightarrow 0$ for
$d <4$ signals the collapse of the system when the interparticle
repulsion is removed. Note that in $d=2$, the depletion is independent
of the condensate density $\bar{n}_0$ \cite{GPS}:
\begin{equation}
\bar{n}_\Delta = \frac{1}{16 \pi} \frac{m}{\lambda} \Delta .
\end{equation}
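That the $d=2$ depletion is indeed independent of $\bar{n}_0$ follows from substituting $d=2$ directly in (\ref{depDelta}); a sympy check (symbol names illustrative):

```python
import sympy as sp

d = sp.symbols('d')
m, lam, n0, Delta = sp.symbols('m lambda n_0 Delta', positive=True)

# Right-hand side of (depDelta)
n_Delta = (2**(d/2 - 5) * sp.gamma(2 - d/2) / sp.pi**(d/2)
           * m**(d/2) * lam**(d/2 - 2) * n0**(d/2 - 1) * Delta)

print(sp.simplify(n_Delta.subs(d, 2)))   # Delta*m/(16*pi*lambda)
```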
The total particle number density $\bar{n}$ is given by
\begin{equation}
\bar{n} = \bar{n}_0 \left(1 + \frac{m \lambda}{2 \pi} \right) +
\frac{1}{16 \pi} \frac{m}{\lambda} \Delta.
\end{equation}
We next calculate the mass current ${\bf g}$ to determine the superfluid
mass density, i.e., the mass density flowing with the superfluid
velocity ${\bf v}_{\rm s}$. As we have seen in the preceding section,
in the absence of impurities and at zero temperature all the particles
participate in the superflow and move on the average with the velocity
${\bf v}_{\rm s}$. We expect this to no longer hold in the presence of
impurities. To determine the change in the superfluid mass density due
to impurities, we replace $\mu_0$ with $\mu_{\rm eff}$ as defined in
(\ref{bec:mureplacement}) and $i\partial_0$ with $i\partial_0 - ({\bf u}
- {\bf v}_{\rm s}) \cdot (-i \nabla)$ in the contribution (\ref{S_d}) to
the effective action, and differentiate it with respect to $-{\bf
u}$---the externally imposed velocity. We find to linear order in the
difference ${\bf u}- {\bf v}_{\rm s}$:
\begin{equation}
{\bf g} = \rho_{\rm s} {\bf v}_{\rm s} + \rho_{\rm n} {\bf u},
\end{equation}
with the superfluid and normal mass density \cite{pla}
\begin{equation}
\rho_{\rm s} = m\left(\bar{n} - \frac{4}{d} \bar{n}_\Delta \right), \;\;\;\;
\rho_{\rm n} = \frac{4}{d} m \bar{n}_\Delta.
\end{equation}
We see that the normal density is a factor $4/d$ larger than the mass
density $m\bar{n}_\Delta$ knocked out of the condensate by the impurities.
(For $d=3$ this gives the factor $\tfrac{4}{3}$ first found in Ref.\
\cite{HM}.) Apparently, for $d < 4$ part of the zero-momentum states belongs
not to the condensate, but to the normal fluid. Being trapped by the
impurities, this fraction of the zero-momentum states is localized. This
shows that the phenomenon of localization can be accounted for in the
Bogoliubov theory of superfluidity by including a random field.
\section{Vortices}
We shall now include vortices in the system. A vortex in two space
dimensions may be pictured as a point-like object at scales large
compared to its core size. It is characterized by the winding number
$w$ of the map
\begin{equation}
\varphi({\bf x}) : {\rm S}^1_{\bf x} \rightarrow {\rm S}^1
\end{equation}
of a circle S$^1_{\bf x}$ around the vortex into the internal circle
S$^1$ parameterized by the Goldstone field $\varphi$. In the
microscopic theory (\ref{eff:Lagr}), the asymptotic solution of a static
vortex with winding number $w$ located at the origin is well known
\cite{Fetter}
\begin{equation} \label{qm:sol}
\phi({\bf x}) = \sqrt{\frac{\mu_0}{2 \lambda_0}} \left(1 - \xi_0^2
\frac{w^2}{4 {\bf x}^2}\right) {\rm e}^{i w \theta} + {\cal
O}\left(\frac{1}{{\bf x}^4} \right),
\end{equation}
where $\theta$ is the azimuthal angle and $\xi_0 =
1/\sqrt{m\mu_0}=1/mc_0$ is the coherence length. The density profile
$n({\bf x})$ in the presence of this vortex follows from taking
$|\phi({\bf x})|^2$.
To incorporate vortices in the effective theory we employ the powerful
principle of defect gauge symmetry developed by Kleinert
\cite{GFCM,KleinertPl,KleinertCam}. In this approach, one introduces a
so-called vortex gauge field $\varphi_\mu^{\rm P} = (\varphi_0^{\rm P},
\bbox{\varphi}^{\rm P})$ in the effective theory (\ref{eff:Leff}) via
minimally coupling to the Goldstone field:
\begin{equation} \label{hydro:minimal}
\tilde{\partial}_\mu \varphi \rightarrow \tilde{\partial}_\mu \varphi +
\varphi_\mu^{\rm P},
\end{equation}
with $\tilde{\partial}_\mu = (\partial_0,-\nabla)$. If there are $N$
vortices with winding number $w_\alpha$ ($\alpha=1, \cdots, N$)
centered at ${\bf X}^1(t), \cdots , {\bf X}^{N}(t)$, the plastic field
satisfies the relation
\begin{equation} \label{qm:pla}
\nabla \times \bbox{\varphi}^{\rm P}(x) = - 2 \pi \sum_\alpha w_\alpha
\delta[{\bf x} - {\bf X}^\alpha(t)],
\end{equation}
so that we obtain for the superfluid velocity field
\begin{equation} \label{qm:vort}
\nabla \times {\bf v}_{\rm s} = \sum_\alpha \gamma_\alpha
\delta[{\bf x} - {\bf X}^\alpha(t)],
\end{equation}
as required. Here, $\gamma_\alpha = (2 \pi/m) w_\alpha$ is the
circulation of the $\alpha$th vortex which is quantized in units of $2
\pi/m$. A summation over the indices labeling the vortices will always be
made explicit. The combination $\tilde{\partial}_\mu \varphi +
\varphi_\mu^{\rm P}$ is invariant under the local gauge transformation
\begin{equation}
\varphi(x) \rightarrow \varphi(x) + \alpha(x); \;\;\;\;\;
\varphi^{\rm P}_\mu \rightarrow \varphi^{\rm P}_\mu - \tilde{\partial}_\mu
\alpha(x),
\end{equation}
with $\varphi^{\rm P}_\mu$ playing the role of a gauge field.
In the gauge $\varphi^{\rm P}_0=0$, Eq.\ (\ref{qm:pla}) can be solved to yield
\begin{equation} \label{eff:pla}
\varphi^{\rm P}_i(x) = 2 \pi \epsilon_{ij} \sum_\alpha w_\alpha
\delta_j[x,L_\alpha(t)]
\end{equation}
where $\epsilon_{ij}$ is the antisymmetric Levi-Civita symbol in two
dimensions, with $\epsilon_{12}=1$, and $\bbox{\delta} [x,L_\alpha(t)]$ is a
delta function on the line $L_\alpha(t)$ starting at the center ${\bf
X}^\alpha(t)$ of the $\alpha$th vortex and running to spatial infinity along
an arbitrary path:
\begin{equation}
\delta_i [x,L_\alpha(t)] = \int_{L_\alpha(t)} \mbox{d} y_i \, \delta({\bf x} -
{\bf y}).
\end{equation}
Let us for the moment concentrate on static vortices. The field equation
obtained from the effective theory (\ref{eff:Leff}) with $\nabla \varphi$
replaced by the covariant derivative $\nabla \varphi - \bbox{\varphi}^{\rm
P}$ and $\partial_0 \varphi$ set to zero simply reads
\begin{equation}
\nabla \cdot {\bf v}_{\rm s} = 0, \;\;\;\; {\rm or} \;\;\;\; \nabla \cdot (\nabla
\varphi - \bbox{\varphi}^{\rm P}) = 0,
\end{equation}
when the fourth-order term is neglected. It can be easily solved to
yield
\begin{equation} \label{qm:solution}
\varphi ({\bf x}) = - \int_{\bf y} G({\bf x} - {\bf y}) \nabla \cdot
\bbox{\varphi}^{\rm P}({\bf y}),
\end{equation}
where $G({\bf x})$ is the Green function of the Laplace operator
\begin{equation}
G({\bf x}) = \int_{\bf k} \frac{ {\rm e}^{i {\bf k}
\cdot {\bf x}}}{{\bf k}^2} = - \frac{1}{2 \pi} \ln( |{\bf x}|).
\end{equation}
For the velocity field we obtain in this way the well-known expression
\cite{Lamb}
\begin{equation} \label{qm:vortices}
v_i({\bf x}) = \frac{1}{2 \pi} \epsilon_{ij} \sum_{
\alpha=1}^{N} \gamma_\alpha \frac{x_j- X^\alpha_j}{|
{\bf x}-{\bf X}^\alpha | ^{2}} ,
\end{equation}
which is valid for ${\bf x}$ sufficiently far away from the vortex cores.
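As a numerical aside (not part of the derivation; units with $m=1$ so that circulations are multiples of $2\pi$, and all positions are illustrative), the sketch below evaluates the velocity field (\ref{qm:vortices}) and checks that the flow of a single vortex is purely azimuthal with speed $\gamma/2\pi|{\bf x}|$:

```python
import math

def superfluid_velocity(x, vortices):
    """Evaluate v_i(x) = (1/2 pi) eps_ij sum_a gamma_a (x_j - X_j^a)/|x - X^a|^2.

    x        -- observation point (x1, x2)
    vortices -- list of (circulation gamma_a, (X1^a, X2^a)) pairs
    Only valid sufficiently far from the vortex cores.
    """
    v1 = v2 = 0.0
    for gamma, (X1, X2) in vortices:
        d1, d2 = x[0] - X1, x[1] - X2
        r2 = d1 * d1 + d2 * d2
        v1 += gamma * d2 / (2.0 * math.pi * r2)   # eps_12 = +1
        v2 -= gamma * d1 / (2.0 * math.pi * r2)   # eps_21 = -1
    return (v1, v2)
```

For a single vortex of circulation $\gamma$ the speed at distance $r$ from the core is $\gamma/2\pi r$, and the velocity is orthogonal to the radius vector.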
Let us now specialize to the case of a single static vortex at the origin.
On substituting the corresponding solution in (\ref{roh1}), we find for the
density profile in the presence of a static vortex asymptotically
\begin{equation}
n({\bf x}) = \bar{n} \left(1 - \xi_0^2 \frac{w^2}{2{\bf x}^2} \right).
\end{equation}
This is the same formula as the one obtained from the solution
(\ref{qm:sol}) of the microscopic theory. This exemplifies that with
the aid of the defect gauge symmetry principle, vortices are correctly
accounted for in the effective theory.
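Indeed, the agreement is a one-line check: squaring the asymptotic solution (\ref{qm:sol}) and keeping terms through order $1/{\bf x}^2$ gives, with $\bar{n} = \mu_0/2\lambda_0$,
\begin{equation}
|\phi({\bf x})|^2 = \frac{\mu_0}{2 \lambda_0} \left(1 - \xi_0^2
\frac{w^2}{4 {\bf x}^2}\right)^2 + {\cal O}\left(\frac{1}{{\bf x}^4}\right)
= \bar{n} \left(1 - \xi_0^2 \frac{w^2}{2 {\bf x}^2}\right)
+ {\cal O}\left(\frac{1}{{\bf x}^4}\right),
\end{equation}
the cross term doubling the coefficient $1/4$ to $1/2$.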
Let us proceed to investigate the dynamics of vortices in this formalism and
derive the action which governs it. We consider only the first part of
the effective theory (\ref{eff:Leff}). In ignoring the higher-order terms,
we approximate the superfluid by an incompressible fluid for which the
particle number density is constant, $n(x) = \bar{n}$, see Eq.\
(\ref{roh1}). We again work in the gauge $\varphi^{\rm P}_0=0$ and replace
$\nabla \varphi$ by the covariant derivative $\nabla \varphi -
\bbox{\varphi}^{\rm P}$, with the plastic field given by (\ref{qm:pla}). The
solution of the resulting field equation for $\varphi$ is again of the form
(\ref{qm:solution}), but now it is time-dependent because the plastic field
is. Substituting this in the action $S_{\rm eff} = \int_x {\cal
L}_{\rm eff}$, we find after some straightforward calculus
\begin{equation} \label{qm:action}
S_{\rm eff} = m \bar{n} \int_t \left[\frac{1}{2} \sum_\alpha \gamma_\alpha
{\bf X}^\alpha \times \dot{\bf X}^\alpha + \frac{1}{2\pi} \sum_{\alpha <
\beta} \gamma_\alpha \gamma_\beta \ln(|{\bf X}^\alpha - {\bf X}^\beta|/a)
\right].
\end{equation}
The constant $a$ has the dimension of a length and is included in the
argument of the logarithm for dimensional reasons. Physically, it
represents the core size of a vortex. The first term in
(\ref{qm:action}) leads to a twisted canonical structure which is
reminiscent of that found in the so-called Landau problem of a charged
particle confined to move in a plane perpendicular to an applied
magnetic field $H$.
To display the canonical structure, let us rewrite the first term of the
Lagrangian corresponding to (\ref{qm:action}) as
\begin{equation}
L_1 = m \bar{n} \sum_\alpha \gamma_\alpha X^\alpha_1 \dot{X}^\alpha_2,
\end{equation}
where we ignored a total derivative. It follows that the canonical
conjugate to the second component $X_2^\alpha$ of the center coordinate ${\bf
X}^\alpha$ is essentially its first component \cite{YM}
\begin{equation}
\frac{\partial L_1}{\partial \dot{X}_2^\alpha} = m \bar{n} \gamma_\alpha
X^\alpha_1.
\end{equation}
It implies that phase space coincides with real space and gives rise to the
commutation relation
\begin{equation}
[X_2^\alpha, X_1^\beta ] = \frac{i}{w_\alpha} \ell^2 \delta^{\alpha
\beta},
\end{equation}
where
\begin{equation} \label{qm:ell}
\ell = 1/\sqrt{2 \pi \bar{n}}
\end{equation}
is a characteristic length whose definition is such that $2 \pi \ell^2$ is
the average area occupied by a particle of the superfluid film. The
commutation relation leads to an uncertainty in the location of the vortex
centers given by
\begin{equation}
\Delta X_1^\alpha \Delta X_2^\alpha \geq \frac{\ell^2}{2 |w_\alpha|} ,
\end{equation}
which is inversely proportional to the particle number density.
From elementary quantum mechanics we know that to each unit cell (of area
$h$) in phase space there corresponds one quantum state. That is, the
number of states in an area $S$ of phase space is given by
\begin{equation}
\mbox{\# states in} \; S = \frac{1}{h} \int_S \mbox{d} p \, \mbox{d} q,
\end{equation}
where $p$ and $q$ are a pair of canonically conjugate variables. For the
case at hand, this implies that the available number of states in an area
$S_\alpha$ of {\it real} space is
\begin{equation}
\mbox{\# states in} \; S_\alpha = |w_\alpha| \, \bar{n}
S_\alpha ,
\end{equation}
or, equivalently, that the number of states per unit area available to the
$\alpha$th vortex is $|w_\alpha| \, \bar{n}$.
This phenomenon that phase space coincides with real space is known to
also arise in the Landau problem. There, it leads to the well-known
degeneracy $|e_\alpha| H/h$ of each Landau level, where $e_\alpha =
v_\alpha e_0$ is the electric charge of the particle, with $e_0 (>0)$
the unit of charge. In terms of the magnetic flux quantum $\Phi_0 =
h/e_0$, the Landau degeneracy can be rewritten as $|v_\alpha| H/\Phi_0 =
|v_\alpha| \bar{n}_\otimes$, with $\bar{n}_\otimes$ the flux number
density. In other words, whereas the degeneracy in the case of vortices
in a superfluid film is given by the particle number density, here it is
given by the flux number density. Using this analogy, we see that the
characteristic length (\ref{qm:ell}) translates into $\ell_H = 1/\sqrt{2
\pi \bar{n}_\otimes}$ which is precisely the magnetic length of the Landau
problem.
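The characteristic scales of this section are simple to tabulate; the sketch below (illustrative units with $\hbar = 1$; the numerical inputs are arbitrary) collects the length (\ref{qm:ell}), the position uncertainty, and the density of available states:

```python
import math

def vortex_length(nbar):
    """Characteristic length l = 1/sqrt(2 pi nbar), Eq. (qm:ell);
    2 pi l^2 is the average area per particle of the superfluid film."""
    return 1.0 / math.sqrt(2.0 * math.pi * nbar)

def position_uncertainty(nbar, w):
    """Lower bound l^2 / (2 |w|) on Delta X1 * Delta X2 for a vortex
    of winding number w."""
    return vortex_length(nbar) ** 2 / (2.0 * abs(w))

def states_per_area(nbar, w):
    """Number of states per unit area available to the vortex: |w| nbar."""
    return abs(w) * nbar
```

Replacing $\bar{n}$ by the flux number density $\bar{n}_\otimes$ in `vortex_length` reproduces the magnetic length $\ell_H$ of the Landau problem.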
The first term in the action (\ref{qm:action}) is also responsible for
the so-called geometrical phase \cite{Berry} acquired by the
wavefunction of a vortex when it traverses a closed path. Let us first
discuss the case of a charged particle moving adiabatically around a
closed path $\Gamma_\alpha$. Its wavefunction picks up an extra
Aharonov-Bohm phase factor given by the Wilson loop:
\begin{equation} \label{qm:Berry}
W(\Gamma_\alpha) = \exp[i \gamma(\Gamma_\alpha)] = \exp\left(\frac{i
e_\alpha}{\hbar}
\oint_{\Gamma_\alpha} \mbox{d} {\bf x} \cdot {\bf A}\right) = \exp \left[2
\pi i v_\alpha \frac{H S(\Gamma_\alpha)}{\Phi_0}\right]
\end{equation}
where ${\bf A}$ is the vector potential describing the external magnetic
field and $H S(\Gamma_\alpha)$ is the magnetic flux through the area
$S(\Gamma_\alpha)$ spanned by the loop $\Gamma_\alpha$. The geometrical
phase $\gamma(\Gamma_\alpha)$ in (\ref{qm:Berry}) is seen to be ($2 \pi
v_\alpha$ times) the number of flux quanta enclosed by the path
$\Gamma_\alpha$.
On account of the above analogy, it follows that the geometrical phase
picked up by the wavefunction of a vortex when it is moved adiabatically
around a closed path in the superfluid film is ($2 \pi w_\alpha$ times) the
number of superfluid particles enclosed by the path \cite{HW}.
The second term in the action (\ref{qm:action}) represents the long-ranged
interaction between two vortices mediated by the exchange of Goldstone
quanta. The action yields the well-known equations of motion for point
vortices in an incompressible two-dimensional superfluid \cite{Lamb,Lund}:
\begin{equation}
\dot{X}_i^\beta(t) = \frac{\epsilon_{ij}}{2 \pi} \sum_{\alpha \neq \beta}
\gamma_\alpha \frac{X^\beta_j(t) - X^\alpha_j(t)}{| {\bf X}^\beta(t)-{\bf
X}^\alpha(t) | ^{2}} .
\end{equation}
Note that $\dot{X}_i^\beta(t) = v_i\left[{\bf X}^\beta(t)\right]$, where
${\bf v}(x)$ is the superfluid velocity (\ref{qm:vortices}) with the
time-dependence of the centers of the vortices included. This nicely
illustrates a result due to Helmholtz for ideal fluids, stating that
a vortex moves with the fluid, i.e., at the local velocity produced by the
other vortices in the system. Experimental support for this conclusion has
been reported in Ref.\ \cite{YP}.
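As a hedged numerical sketch of these equations of motion (explicit Euler integration in units with $m=1$; the circulations, positions, and step size are illustrative choices, not taken from the text), two like-signed vortices should orbit their common midpoint at fixed separation, each advected by the other:

```python
import math

def advect(vortices, dt):
    """One explicit Euler step of the point-vortex equations of motion:
    each vortex moves with the local velocity produced by the others
    (Helmholtz). vortices: list of (gamma, (X1, X2))."""
    new = []
    for b, (gb, Xb) in enumerate(vortices):
        v1 = v2 = 0.0
        for a, (ga, Xa) in enumerate(vortices):
            if a == b:
                continue
            d1, d2 = Xb[0] - Xa[0], Xb[1] - Xa[1]
            r2 = d1 * d1 + d2 * d2
            v1 += ga * d2 / (2.0 * math.pi * r2)
            v2 -= ga * d1 / (2.0 * math.pi * r2)
        new.append((gb, (Xb[0] + dt * v1, Xb[1] + dt * v2)))
    return new

# Two like-signed unit vortices (circulation 2*pi each), a distance 1 apart.
pair = [(2.0 * math.pi, (0.5, 0.0)), (2.0 * math.pi, (-0.5, 0.0))]
for _ in range(1000):
    pair = advect(pair, 1.0e-3)
separation = math.dist(pair[0][1], pair[1][1])
```

Up to the small secular drift of the Euler scheme, the separation stays at its initial value while the pair rotates about the conserved midpoint.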
\section{Kosterlitz-Thouless Phase Transition \label{sec:KT}}
Although we are interested mainly in quantum phase transitions in these
Lectures, there is one classical phase transition special to two
dimensions which turns out to be relevant for our discussion later
on---the so-called Kosterlitz-Thouless phase transition. It is well
known that a superfluid film undergoes such a phase transition at a
temperature well below the bulk transition temperature. The superfluid
low-temperature state is characterized by tightly bound
vortex-antivortex pairs which at the Kosterlitz-Thouless temperature
unbind and thereby disorder the superfluid state. The disordered state,
at temperatures still below the bulk transition temperature, consists of
a plasma of unbound vortices.
Since the phase transition is an equilibrium transition, we can ignore
any time dependence. The important fluctuations here, at temperatures
below the bulk transition temperature, are phase fluctuations so that we
can consider the London limit, where the phase of the $\phi(x)$-field is
allowed to vary in spacetime while the modulus is kept fixed, and take as
Hamiltonian
\begin{equation} \label{kt:HHe}
{\cal H} = \tfrac{1}{2} \rho_{\rm s} {\bf v}^2_{\rm s},
\end{equation}
where $\rho_{\rm s}$ is the superfluid mass density which we assume to be
constant and ${\bf v}_{\rm s}$ is the superfluid velocity
\begin{equation} \label{kt:vs}
{\bf v}_{\rm s} = \frac{1}{m} (\nabla \varphi - \bbox{\varphi}^{\rm P}),
\end{equation}
with the vortex gauge field $\bbox{\varphi}^{\rm P}$ included to account
for possible vortices in the system. We shall restrict ourselves to
vortices of unit winding number, so that $w_\alpha = \pm 1$ for a vortex
and antivortex, respectively.
The canonical partition function describing the equilibrium configuration of
$N_+$ vortices and $N_-$ antivortices in a superfluid film is given by
\begin{equation} \label{kt:Zorig}
Z_N = \frac{1}{N_+! N_-!} \prod_\alpha \int_{{\bf
X}^\alpha} \int \mbox{D} \varphi \, \exp\left(-\beta \int_{\bf x} {\cal
H}\right),
\end{equation}
with ${\cal H}$ the Hamiltonian (\ref{kt:HHe}) and $N = N_+ + N_-$ the
total number of vortices and antivortices. The factors $N_+!$ and
$N_-!$ arise because the vortices and antivortices are
indistinguishable, and $\prod_\alpha \int_{{\bf X}^\alpha}$ denotes the
integration over the positions of the vortices. The functional integral
over $\varphi$ is Gaussian and therefore easily carried out, with the
result
\begin{equation} \label{kt:Zx}
Z_N = \frac{1}{N_+! N_-!} \prod_\alpha \int_{{\bf X}^\alpha}
\exp\left[\pi \frac{\beta \rho_{\rm s}}{m^2} \sum_{\alpha, \beta}
w_\alpha w_\beta \ln\left(|{\bf X}^\alpha - {\bf
X}^\beta|/a\right) \right].
\end{equation}
Apart from an irrelevant normalization factor, Eq.~(\ref{kt:Zx}) is the
canonical partition function of a two-dimensional Coulomb gas with
charges $q_\alpha = q w_\alpha = \pm q$, where
\begin{equation}
q = \sqrt{2 \pi \rho_{\rm s} }/m.
\end{equation}
Let us rewrite the sum in the exponent appearing in (\ref{kt:Zx}) as
\begin{eqnarray}
\lefteqn{\sum_{\alpha, \beta} q_\alpha
q_\beta \ln\left(|{\bf X}^\alpha - {\bf X}^\beta|/a\right) =}
\nonumber \\ && \sum_{\alpha, \beta} q_\alpha q_\beta \left[
\ln\left(|{\bf X}^\alpha - {\bf X}^\beta|/a\right) - \ln(0) \right] +
\ln(0) \left(\sum_\alpha q_\alpha \right)^2,
\end{eqnarray}
where we isolated the self-interaction in the last term at the
right-hand side. Since $\ln(0) = -\infty$, the charges must add up to
zero so as to obtain a nonzero partition function. From now on we will
therefore assume overall charge neutrality, $\sum_\alpha q_\alpha = 0$,
so that $N_+ = N_- = N/2$, where $N$ must be an even integer. To
regularize the remaining divergence, we replace $\ln(0)$ with an
undetermined, negative constant $-c$. The exponent of (\ref{kt:Zx})
thus becomes
\begin{equation} \label{kt:reg}
\frac{\beta}{2} \sum_{\alpha, \beta} q_\alpha q_\beta \ln\left(|{\bf
X}^\alpha - {\bf X}^\beta|/a \right) = \frac{\beta}{2} \sum_{\alpha \neq
\beta} q_\alpha q_\beta \ln\left(|{\bf X}^\alpha - {\bf X}^\beta|/a
\right) - \beta \epsilon_{\rm c} N,
\end{equation}
where $\epsilon_{\rm c}= c q^2/2$ physically represents the core energy,
i.e., the energy required to create a single vortex. In deriving this we
used the identity $\sum_{\alpha \neq \beta} q_\alpha q_\beta = -
\sum_\alpha q_\alpha^2 = -N q^2$ which follows from charge neutrality.
Having dealt with the self-interaction, we limit the integrations
$\prod_\alpha \int_{{\bf X}^\alpha}$ in (\ref{kt:Zx}) over the location of
the vortices to those regions where they are more than a distance $a$ apart,
$|{\bf X}^\alpha - {\bf X}^\beta| >a$. The grand-canonical partition
function of the system can now be cast in the form
\begin{equation} \label{kt:coul}
Z = \sum_{N=0}^\infty \frac{z^{N}}{[(N/2)!]^2}
\prod_{\alpha} \int_{{\bf X}^\alpha} \exp\left[ \frac{\beta}{2}\sum_{\alpha
\neq \beta} q_\alpha q_\beta \ln\left(|{\bf X}^\alpha - {\bf X}^\beta|/a\right)
\right],
\end{equation}
where $z = \exp(-\beta \epsilon_{\rm c})$ is the fugacity. The system is
known to undergo a phase transition at the Kosterlitz-Thouless
temperature
\cite{Berezinskii,KT73}
\begin{equation} \label{jump}
T_{\rm KT} = \frac{1}{4} q^2 = \frac{\pi}{2} \frac{\rho_{\rm
s}}{m^2},
\end{equation}
triggered by the unbinding of vortex-antivortex pairs. It follows from this
equation that the two-dimensional superfluid mass density $\rho_{\rm
s}(T)$, which varies from sample to sample, terminates on a line with
universal slope as the temperature approaches the Kosterlitz-Thouless
temperature from below \cite{NeKo}.
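In units $\hbar = k_{\rm B} = 1$ these relations are one-liners; the sketch below (with arbitrary illustrative values of $\rho_{\rm s}$ and $m$) encodes the equivalent Coulomb-gas charge and the universal ratio $\rho_{\rm s}(T_{\rm KT})/T_{\rm KT} = 2 m^2/\pi$ implied by Eq.\ (\ref{jump}):

```python
import math

def coulomb_charge(rho_s, m):
    """Equivalent Coulomb-gas charge q = sqrt(2 pi rho_s)/m
    (units hbar = k_B = 1)."""
    return math.sqrt(2.0 * math.pi * rho_s) / m

def kt_temperature(rho_s, m):
    """Kosterlitz-Thouless temperature T_KT = q^2/4 = (pi/2) rho_s/m^2."""
    return 0.25 * coulomb_charge(rho_s, m) ** 2

def universal_ratio(rho_s, m):
    """rho_s(T_KT)/T_KT = 2 m^2/pi, the same for every sample."""
    return rho_s / kt_temperature(rho_s, m)
```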
\section{Dual Theory}
Let us proceed to represent the partition function (\ref{kt:coul}) by a
field theory---a so-called dual theory. The idea behind such a dual
transformation is to obtain a formulation in which the vortices are not
described as singular objects as is the case in the original
formulation, but by ordinary fields. To derive it we note that
$\ln(|{\bf x}|)$ is the inverse of the Laplace operator $\nabla^2$,
\begin{equation}
\frac{1}{2 \pi} \nabla^2 \ln(|{\bf x}|) = \delta({\bf x}).
\end{equation}
This allows us to represent the exponential function in (\ref{kt:coul})
as a functional integral over an auxiliary field $\phi$:
\begin{equation} \label{kt:aux}
\exp\left[ \frac{\beta}{2} \sum_{\alpha \neq \beta} q_\alpha q_\beta
\ln\left(|{\bf X}^\alpha - {\bf X}^\beta|/a\right) \right] =
\int \mbox{D} \phi \exp\left\{ - \int_{\bf x} \left[ \frac{1}{4 \pi \beta} (\nabla
\phi)^2 + i \rho_q \phi \right] \right\},
\end{equation}
where $\rho_q({\bf x}) = \sum_\alpha q_\alpha \delta({\bf x} - {\bf
X}^\alpha)$ is the charge density. In this way, the partition function
becomes
\begin{equation} \label{kt:phi}
Z = \sum_{N=0}^\infty \frac{z^{N}}{[(N/2)!]^2}
\prod_{\alpha=1}^N \int_{{\bf X}^\alpha}
\int \mbox{D} \phi \exp\left\{ - \int_{\bf x} \left[ \frac{1}{4 \pi \beta} (\nabla
\phi)^2 + i \rho_q \phi \right] \right\}.
\end{equation}
In a mean-field treatment, the functional integral over the auxiliary field
introduced in (\ref{kt:aux}) is approximated by the saddle point determined
by the field equation
\begin{equation} \label{kt:feq}
i T \nabla^2 \phi = - 2 \pi \rho_q.
\end{equation}
When we introduce the scalar variable $\Phi := i T \phi$, this equation
becomes formally Gauss' law, with $\Phi$ the electrostatic scalar
potential. The auxiliary field introduced in (\ref{kt:aux}) may
therefore be thought of as representing the scalar potential of the
equivalent two-dimensional Coulomb gas \cite{GFCM}.
On account of charge neutrality, we have the identity
\begin{equation}
\left[ \int_{\bf x} \left( {\rm e}^{iq \phi({\bf x})} + {\rm e}^{-iq \phi({\bf
x})} \right) \right]^N = \frac{N!}{[(N/2)!]^2} \prod_{\alpha=1}^{N}
\int_{{\bf X}^\alpha} {\rm e}^{-i \sum_{\alpha} q_\alpha \phi({\bf
X}^\alpha)},
\end{equation}
where we recall that $N$ is an even number. The factor $N!/[(N/2)!]^2$
is the number of charge-neutral terms contained in the binomial
expansion of the left-hand side. The partition function (\ref{kt:phi})
may thus be written as \cite{GFCM}
\begin{eqnarray} \label{kt:sG}
Z &=& \sum_{N=0}^\infty \frac{(2z)^{N}}{N!}
\int \mbox{D} \phi \exp\left[ - \int_{\bf x} \frac{1}{4 \pi \beta} (\nabla
\phi)^2 \right] \left[\cos\left(\int_{\bf x} q \phi \right)
\right]^N \nonumber \\ &=&
\int \mbox{D} \phi \exp\left\{ - \int_{\bf x} \left[ \frac{1}{4 \pi \beta}
(\nabla \phi)^2 - 2z \cos(q \phi) \right] \right\},
\end{eqnarray}
where in the final form we recognize the sine-Gordon model. This is the
dual theory we were seeking. Contrary to the original formulation
(\ref{kt:Zorig}), which contains the vortices as singular objects, the dual
formulation has no singularities. To see how the vortices and the
Kosterlitz-Thouless phase transition are represented in the dual theory we
note that the field equation of the auxiliary field now reads
\begin{equation} \label{kt:gauss}
i T \nabla^2 \phi = 2 \pi z q \left({\rm e}^{iq \phi} - {\rm e}^{-iq
\phi} \right).
\end{equation}
On comparison with the previous field equation (\ref{kt:feq}), it follows
that the right-hand side represents the charge density of the Coulomb gas.
In terms of the scalar potential $\Phi$, Eq.~(\ref{kt:gauss}) becomes the
Poisson-Boltzmann equation
\begin{equation} \label{kt:PB}
\nabla^2 \Phi = - 2 \pi q \left(z \, {\rm e}^{- \beta q \Phi} - z
\, {\rm e}^{\beta q \Phi} \right),
\end{equation}
describing, at least for temperatures above the Kosterlitz-Thouless
temperature, a plasma of positive and negative charges with
density $n_\pm$,
\begin{equation} \label{kt:spatiald}
n_\pm = z \, {\rm e}^{\mp \beta q \Phi},
\end{equation}
respectively. The fugacity $z$ is the density at zero scalar potential.
(It is to be recalled that we suppress factors of $a$ denoting the core
size of the vortices.) Equation (\ref{kt:PB}) is a self-consistent
equation for the scalar potential $\Phi$ giving the spatial distribution
of the charges via (\ref{kt:spatiald}). It follows from this argument
that the interaction term $2z \cos(q \phi)$ of the sine-Gordon model
represents a plasma of vortices.
The renormalization group applied to the sine-Gordon model reveals that at
the Kosterlitz-Thouless temperature $T_{\rm KT} = \tfrac{1}{4}q^2$
there is a phase transition between a low-temperature phase of tightly bound
neutral pairs and a high-temperature plasma phase of unbound vortices
\cite{Schenker}. In the low-temperature phase, the (renormalized) fugacity
scales to zero in the large-scale limit so that the interaction term,
representing the plasma of unbound vortices, is suppressed. The
long-distance behavior of the low-temperature phase is therefore well
described by the free theory $(\nabla \phi)^2/4 \pi \beta$, representing
a gapless mode---the so-called Kosterlitz-Thouless mode. This is the
superfluid state. The expectation value of a single vortex vanishes
because in this gapless state its energy diverges in the infrared.
An important characteristic of a charged plasma is that it has no gapless
excitations, the photon being transmuted into a massive plasmon. To see
this we assume that $q \Phi \ll T$, so that $\sinh(\beta q \Phi)
\approx \beta q \Phi$. In this approximation, the Poisson-Boltzmann equation
(\ref{kt:PB}) can be linearized to give
\begin{equation} \label{kt:mpoi}
(\nabla^2 - m_{\rm D}^2) \Phi = 0, \;\;\; m_{\rm D}^2 = 4 \pi
\beta z q^2.
\end{equation}
This shows us that, in contradistinction to the low-temperature phase,
in the high-temperature phase, the scalar potential describes a massive
mode---the plasmon. In other words, the Kosterlitz-Thouless mode
acquires an energy gap $m_{\rm D}$. Since it provides the
high-temperature phase with an infrared cutoff, isolated vortices have a
finite energy now and accordingly a finite probability to be created.
This Debye mechanism of mass generation for the photon should be
distinguished from the Higgs mechanism which operates in superconductors
(see below) and which also generates a photon mass.
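A minimal numerical sketch of the linearized result (\ref{kt:mpoi}), in illustrative units (all inputs arbitrary):

```python
import math

def plasmon_mass(T, z, q):
    """Debye (plasmon) mass from Eq. (kt:mpoi): m_D = sqrt(4 pi z q^2 / T)."""
    return math.sqrt(4.0 * math.pi * z * q * q / T)

def debye_length(T, z, q):
    """Debye screening length, the inverse of the plasmon mass; it grows
    as the fugacity z (the density of unbound vortices) decreases."""
    return 1.0 / plasmon_mass(T, z, q)
```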
Another property of a charged plasma is that it screens charges. This
so-called Debye screening may be illustrated by adding an external
charge to the system. The linearized Poisson-Boltzmann equation
(\ref{kt:mpoi}) then becomes
\begin{equation} \label{kt:pois}
(\nabla^2 - m_{\rm D}^2) \Phi({\bf x}) = - 2 \pi q_0 \delta ({\bf x}),
\end{equation}
with $q_0$ the external charge which we have placed at the origin. The
solution of this equation is given by $\Phi ({\bf x}) = q_0
K_0(m_{\rm D}|{\bf x}|)$ with $K_0$ a modified Bessel function. The mass
term in (\ref{kt:pois}) is ($2 \pi$ times) the charge density induced by the
external charge, i.e.,
\begin{equation}
\rho_{\rm ind}({\bf x}) = - \frac{1}{2 \pi} q_0 m_{\rm D}^2
K_0(m_{\rm D}|{\bf x}|).
\end{equation}
By integrating this density over the entire system, we see that the total
induced charge $\int_{\bf x} \rho_{\rm ind} = -q_0$ completely screens the
external charge---at least in the linear approximation we are using here.
The inverse of the plasmon mass is the so-called Debye screening length.
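The complete-screening statement $\int_{\bf x} \rho_{\rm ind} = -q_0$ can be verified numerically. The sketch below (pure Python; the quadrature grids and tolerances are ad hoc choices) builds $K_0$ from its integral representation $K_0(x) = \int_0^\infty {\rm e}^{-x \cosh t}\, {\rm d}t$ and integrates the induced charge density over the plane:

```python
import math

def K0(x, tmax=12.0, n=1200):
    """Modified Bessel function K_0(x) = int_0^inf exp(-x cosh t) dt,
    by the trapezoidal rule (adequate for x not too small)."""
    dt = tmax / n
    s = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(tmax)))
    for k in range(1, n):
        s += math.exp(-x * math.cosh(k * dt))
    return s * dt

def total_induced_charge(q0, m_D, rmax=25.0, n=800):
    """Integrate rho_ind = -(q0 m_D^2 / 2 pi) K_0(m_D r) over the plane;
    complete screening means the exact answer is -q0."""
    dr = rmax / n
    s = 0.0
    for k in range(n):
        r = (k + 0.5) * dr        # midpoint rule skirts the log singularity
        s += r * K0(m_D * r) * dr
    return -q0 * m_D * m_D * s
```

The result reproduces $\int_0^\infty u\, K_0(u)\, {\rm d}u = 1$ up to quadrature error, so the induced charge cancels the external one.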
To see that the sine-Gordon model gives a dual description of a
superfluid film we cast the field equation (\ref{kt:feq}) in the form
\begin{equation}
i T \nabla^2 \phi = - m q \nabla \times {\bf v}_{\rm s},
\end{equation}
where we employed Eq.\ (\ref{qm:vort}). On integrating this
equation, we obtain up to an irrelevant integration constant
\begin{equation}
i T \partial_i \phi = - q \epsilon_{i j} (\partial_j \varphi -
\varphi_j^{\rm P}).
\end{equation}
This relation, involving the antisymmetric Levi-Civita symbol, is a typical
one between dual variables. It also nicely illustrates that although the
dual variable $\phi$ is a regular field, it nevertheless contains the
information about the vortices which in the original formulation are
described via the singular vortex gauge field $\bbox{\varphi}^{\rm P}$.
Given this observation it is straightforward to calculate the
current-current correlation function $\langle g_i ({\bf k}) g_j(-{\bf
k}) \rangle$, with
\begin{equation}
{\bf g} = \rho_{\rm s} {\bf v}_{\rm s}
\end{equation}
the mass current. We find
\begin{equation}
\langle g_i ({\bf k}) g_j(-{\bf k}) \rangle = - \frac{\rho_{\rm s}}{2
\pi \beta^2} \epsilon_{ik} \epsilon_{jl} k_k k_l \langle \phi({\bf k})
\phi(-{\bf k}) \rangle,
\end{equation}
where the average is to be taken with respect to the partition function
\begin{equation}
Z_0 = \int \mbox{D} \phi \exp\left[ - \frac{1}{4 \pi \beta} \int_{\bf x} (\nabla
\phi)^2 \right],
\end{equation}
which is obtained from (\ref{kt:sG}) by setting the interaction term to
zero. We obtain in this way the standard expression for a superfluid
\begin{equation} \label{kt:jj}
\langle g_i ({\bf k}) g_j(-{\bf k}) \rangle = - \frac{\rho_{\rm s}}{\beta}
\frac{1}{{\bf k}^2} \left( \delta_{ij} {\bf k}^2 - k_i k_j \right).
\end{equation}
The $1/{\bf k}^2$ reflects the gaplessness of the $\phi$-field in the
low-temperature phase, while the combination $\delta_{ij} {\bf k}^2 - k_i
k_j$ arises because the current is divergence free, $\nabla \cdot {\bf
g}({\bf x}) = 0$, or ${\bf k} \cdot {\bf g}({\bf k}) = 0$.
\section{Introduction}
The importance of understanding the relationships between
rotational velocity, evolutionary status and stellar temperature in
causing the Be phenomenon has long been understood (Slettebak \cite{s82}).
Recently Zorec \& Briot
(\cite{zb97}) presented evidence based on a careful evaluation of
the statistics of B and Be stars in the Bright Star Catalogue (Hoffleit \&
Jaschek \cite{bsc}) that after correction for various selection
effects there were no apparent differences in the
spectral type distribution and frequency of Be stars with respect
to luminosity class. In addition they showed the shape of
the $v \sin i$ distribution with spectral type was not luminosity
dependent, implying little or uniform angular momentum loss
from such objects over their lifetimes. Here we extend that work by
using the sample of Steele et al. (\cite{s98}) to
quantify
the evolution of angular momentum between the dwarf and giant
stages of Be stars.
We show that either conservation of angular momentum or an accumulated
loss
of up to (but no more than) 15\% of the stellar angular momentum
is allowed during the main sequence + giant lifetime
of Be stars.
\section{Distribution functions}
\subsection{Description of the sample}
In Steele et al. (\cite{s98}) we presented optical spectra of a
sample of 58 Be stars. The sample contains objects from
O9 to B8.5 and of luminosity classes III (giants) to V (dwarfs),
as well as three
shell stars (which we neglect for the purposes of this paper as
they have uncertain luminosity classes). A spectral type and value
of $v \sin i$ was derived for each object in the sample.
The sample is termed a ``representative'' sample, in that it
was selected in an attempt to contain several objects that were
typical of each spectral
and luminosity class in the above range. It therefore does {\em not}
reflect the spectral and luminosity class space distribution of Be stars,
but only the average properties of each subclass in temperature and
luminosity.
The distributions of $v \sin i$ within each temperature and
luminosity class were carefully investigated and the conclusion
drawn that there were no significant selection effects biasing the
average properties of the objects.
However it was apparent that for all
spectral sub-types the giants had significantly lower values
of $v \sin i$ than the dwarfs.
\def\epsfsize#1#2{0.8#1}
\begin{figure}
\setlength{\unitlength}{1.0in}
\centering
\begin{picture}(3.0,6.1)(0,0)
\put(-0.0,-0.0){\epsfbox[0 0 2 2]{1234.f1}}
\end{picture}
\caption{$v \sin i$ distribution for the three luminosity classes (solid areas)
compared with the all luminosity class distribution (hollow). A KS test shows
that the probability of the giant and dwarf distributions being
drawn from the same population is $<10^{-6}$}
\end{figure}
\subsection{$v \sin i$ and $\omega \sin i$ distributions}
In Fig. 1 we plot the binned distribution of $v \sin i$ values for the sample
for luminosity classes III, IV and V. The data have been binned into
bins of width 80 km/s, chosen to be considerably larger than the mean error
on any one $v \sin i$ measurement, which is $\sim 20$ km/s. It is immediately
apparent that the distributions are different, with the giants having
considerably lower $v \sin i$ than the dwarfs. A simple explanation of this
would be that the critical velocity for giants is lower than that
for dwarfs, so that the $v \sin i$ may be lower but still give a
sufficiently high $\omega \sin i (=v \sin i / v_{\rm crit})$ to
cause a disk to form. To investigate this we plot in Fig. 2 the
$\omega \sin i$ distributions for our sample. We calculated $v_{\rm crit}$
according to the prescription given by Porter (\cite{p96}):
\begin{equation}
v_{\rm crit} = \sqrt{0.67 \times GM/R}
\end{equation}
where $R$ is the polar radius. Values of $M$ and $R$ were obtained from
Schmidt-Kaler (\cite{sk82}), with interpolation between luminosity classes and
spectral sub-types where necessary.
From Fig. 2 it is apparent that this simple explanation, namely that
the Be phenomenon merely requires a certain fraction of the critical
velocity, is insufficient to account for the discrepancy between the
giants and dwarfs.
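As a worked example of Eq.\ (1) in cgs units (the mass and polar radius below are round illustrative numbers, not the Schmidt-Kaler values interpolated in the paper):

```python
import math

G = 6.674e-8       # gravitational constant, cgs
M_SUN = 1.989e33   # solar mass, g
R_SUN = 6.957e10   # solar radius, cm

def v_crit(mass_msun, polar_radius_rsun):
    """Critical velocity v_crit = sqrt(0.67 G M / R) (Porter 1996),
    with R the polar radius; returned in km/s."""
    v_cms = math.sqrt(0.67 * G * mass_msun * M_SUN
                      / (polar_radius_rsun * R_SUN))
    return v_cms / 1.0e5

def omega_sini(vsini_kms, mass_msun, polar_radius_rsun):
    """Fractional rotation rate omega sin i = v sin i / v_crit."""
    return vsini_kms / v_crit(mass_msun, polar_radius_rsun)
```

For a hypothetical early-B dwarf with $M = 10\,M_\odot$ and polar radius $5\,R_\odot$ this gives $v_{\rm crit} \approx 500$ km/s, so $v \sin i = 250$ km/s corresponds to $\omega \sin i \approx 0.5$.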
\def\epsfsize#1#2{0.8#1}
\begin{figure}
\setlength{\unitlength}{1.0in}
\centering
\begin{picture}(3.0,6.1)(0,0)
\put(-0.0,-0.0){\epsfbox[0 0 2 2]{1234.f2}}
\end{picture}
\caption{$\omega \sin i$ distribution for
the three luminosity classes (solid areas)
compared with the all luminosity class distribution (hollow). A KS test shows
that the probability of the giant and dwarf distributions being
drawn from the same population is $<10^{-4}$}
\end{figure}
\subsection{Angular momentum distribution}
We now consider the rotational
velocity changes that result from angular momentum
conservation during the evolution from dwarfs to giants.
Assuming that the mass of a given star is fixed during this evolution
and that angular momentum is conserved, then velocity $v$ will simply
be inversely proportional to radius $R$. The quantity we therefore
consider is $v \sin i \times R/R_{g}$ where $R/R_{g}$ is the fractional
radius for luminosity class compared to the corresponding
giant radius.
From Schmidt-Kaler (\cite{sk82})
it is apparent that for dwarfs in
the range O9 to B9
for a constant mass ({\em not} spectral type)
the relationship $R/R_{g} = 1/1.8$
holds to within $\sim 5$ per cent. Similarly for the subgiants we adopt
$R/R_g=1/1.4$. The ratio is of course unity for the giants.
In Fig. 3 we
plot the distributions of
$v \sin i \times R/R_{g}$ for all three luminosity classes.
The similarity of the three distributions is striking.
In order to confirm their similarity we carried out a
Kolmogorov-Smirnov (KS) test
between the unbinned values of $v \sin i \times R/R_{g}$ for the
giants and the dwarfs. As noted in the captions of Figs. 1 and
2 the test was also carried out
on the $v \sin i$ and $\omega \sin i$ datasets to demonstrate that
they were significantly different.
For $v \sin i \times R/R_{g}$ the probability that the giant and dwarf
distributions are drawn from the same population is 0.83, confirming our
opinion of the similarity of the samples, and demonstrating that
the similarity was not an effect of our binning the data. It is therefore
apparent that conservation of angular momentum over the Be lifetime of
the object is entirely consistent with the observed angular momentum
distributions.
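The comparison can be sketched in a few lines of pure Python (the data fed to it would be the measured $v \sin i$ values; the helper below computes only the KS statistic, not the associated null-hypothesis probability, and the luminosity-class radius ratios are those adopted above):

```python
# Fractional radii R/R_g adopted for each luminosity class.
R_OVER_RG = {'V': 1.0 / 1.8, 'IV': 1.0 / 1.4, 'III': 1.0}

def relative_angular_momentum(vsini, lum_class):
    """v sin i * R/R_g, the proxy for specific angular momentum."""
    return vsini * R_OVER_RG[lum_class]

def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the two empirical cumulative distribution functions."""
    pts = sorted(set(xs) | set(ys))
    d = 0.0
    for p in pts:
        fx = sum(1 for x in xs if x <= p) / len(xs)
        fy = sum(1 for y in ys if y <= p) / len(ys)
        d = max(d, abs(fx - fy))
    return d
```

Scaling the dwarf $v \sin i$ values by $1/1.8$ before applying the test is what brings the dwarf and giant distributions into agreement.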
\def\epsfsize#1#2{0.8#1}
\begin{figure}
\setlength{\unitlength}{1.0in}
\centering
\begin{picture}(3.0,6.1)(0,0)
\put(-0.0,-0.0){\epsfbox[0 0 2 2]{1234.f3}}
\end{picture}
\caption{$v \sin i \times R/R_g$ (a measure of relative
angular momentum) distribution
for the three luminosity classes (solid areas)
compared with the all luminosity class distribution (hollow). A KS test shows
that the probability of the giant and dwarf distributions being
drawn from the same population is $0.83$}
\end{figure}
\section{Angular momentum evolution}
In Sect. 2.3 we demonstrated that angular momentum conservation was
consistent with the observed values of $v \sin i \times R/R_g$.
However, it may be that a certain fraction of the angular momentum is
lost from the stars while the two distributions still remain consistent.
To investigate this we simulated the effect of changing the system angular
momentum of the giants by factors of between 0.01 and 2.0 in increments of
0.01 and redoing the KS test. The resulting distribution of probabilities
is shown in Fig. 4.
\def\epsfsize#1#2{0.8#1}
\begin{figure}
\setlength{\unitlength}{1.0in}
\centering
\begin{picture}(3.0,3.3)(0,0)
\put(-0.0,-0.5){\epsfbox[0 0 2 2]{1234.f4}}
\end{picture}
\caption{Variation in KS test null hypothesis probability between
giant and dwarf angular momentum distributions
versus amount of stellar angular momentum conserved.}
\end{figure}
From Fig. 4 it is apparent that a probability of greater than $\sim 5$\%
of the two distributions being consistent is obtained for fractional
changes of angular momentum during the main sequence + giant
lifetime of the star of between $\sim 0.85$ and $\sim 1.3$.
Neglecting the upper value as unphysical
we therefore conclude that any method of losing
angular momentum that purports to explain the Be phenomenon must
cause a loss of less than $\sim 15$ per cent of the stellar angular
momentum over the main sequence + giant lifetimes of the star.
From the analysis presented by Porter (\cite{p98})
of the spin down of Be stars due to
angular momentum transfer to the disk (i.e. a decretion disk -
e.g. Lee et al. 1991)
this implies that
(assuming the Be phenomenon is present for most of the main sequence
life of the star)
the disks around Be stars are in his terminology ``weak'' to ``medium''. This
means that for a typical disk opening angle of 15$^\circ$ and a density
of 2$\times10^{-11}$ g cm$^{-2}$ (Waters \cite{w86}),
the initial outflow velocity
must be less than 0.01 km/s. For a decretion disk this implies
the viscosity parameter $\alpha < 0.01$ (Porter \cite{p98}).
An alternative
explanation is a much ``stronger'' disk that is only present for short
periods during the life of the star. For example if the disk were only
present for 10\% of the main-sequence lifetime, then we
derive $\alpha \sim 0.1$.
\section{Conclusions}
By using the distribution of $v \sin i$ values for giants and dwarfs in the
Be star sample of Steele et al. (\cite{s98})
we have shown that any angular momentum
loss in the system that would spin down the Be stars must cause
the loss of no more than 15\% of the stellar angular momentum. This
implies that either the Be phenomenon is only a short phase in the
life of such objects, or that any decretion disk in the system must
have a low outflow velocity ($<0.01$ km/s) and hence a low viscosity
($\alpha <0.01)$.
\begin{acknowledgements}
Thanks to Dr. John Porter for both his advice and his
careful reading of the first draft of this paper.
\end{acknowledgements}
\section{Introduction}
While brown dwarfs do not supply the missing mass, their properties continue to be
of considerable importance for our understanding of star formation and
stellar evolution. The discovery of Gl 229B (Nakajima et al, 1995) and of
a variety of sub-stellar mass objects in the Pleiades (Rebolo et al, 1995)
confirmed the existence of these objects, but, save for Kelu 1 (Ruiz et al, 1997),
isolated objects in the field remained elusive. Identifying these intrinsically faint,
cool objects requires deep, wide-field imaging at red or near-infrared wavelengths,
as emphasised by the initial results from $1-2 \mu m$ DENIS (Delfosse et al, 1997) and
2MASS (Kirkpatrick et al, 1998) surveys. \\
Prior to the availability of large-scale near-infrared photometry, photographic plates
offered the only viable method of surveying tens or hundreds of square degrees. Such media
are limited to wavelengths shortward of 1$\mu m$, but can achieve (single-plate)
detection limits of R$_C \sim$ 21 to 21.5 and I$_C \sim 19$ to 19.5 magnitudes.
Moreover, Hawkins has experimented with digital addition of plate scans, and finds
that the limiting magnitude can be extended significantly ($>2$ magnitudes) if
20-40 plates are available for a given field. Although expensive in telescope
time, this technique makes photography competitive with optical or
near-infrared CCD imaging for a number of specific projects. \\
Hawkins has concentrated analysis on a single field, ESO/SERC field 287 centred at
$\alpha = 21^h 28^m$, $\delta = -45^o$. Recently, Hawkins et al (1998) reported on
initial results from searching a combination of 65 IIIaF and 30 IVN plates for
candidate very low mass stars or brown dwarfs. They announce the discovery of
at least three brown dwarfs with somewhat unusual properties: the optical and
near-infrared colours match those of late-type M-dwarfs, but the absolute magnitudes,
calibrated using CCD-derived trigonometric parallaxes, place the objects $\sim 2.5$
magnitudes below the main-sequence. In contrast, theoretical evolutionary calculations
(e.g. Burrows et al, 1997) predict that cooling brown dwarfs should lie {\sl above}
the stellar main-sequence at these temperatures. Hawkins et al suggest that their
candidates may be either metal-poor or subject to unusual dust formation in the
atmosphere. \\
We present here optical spectroscopy of one of the three brown dwarf candidates, D04.
The following section describes our observations, while the final section summarises
our conclusions.
\begin{figure}
\psfig{figure=fig1.ps,height=10cm,width=8cm
,bbllx=8mm,bblly=57mm,bburx=205mm,bbury=245mm,rheight=12.0cm}
\caption{An i-band finding chart for the 2 arcminute field centred on D04}
\end{figure}
\begin{figure}
\psfig{figure=fig2.ps,height=11cm,width=8cm
,bbllx=8mm,bblly=57mm,bburx=205mm,bbury=245mm,rheight=13.0cm}
\caption{ The far-red optical spectrum of D04}
\end{figure}
\section{Spectroscopic Observations}
Our observations of D04 were obtained on August 11, 1998 as part of a service
allocation with the Low-Resolution Imaging Spectrograph (Oke et al, 1995)
on the Keck II telescope. D04 (R $\sim 22$, I$\sim 19.3$) was not directly visible on
the acquisition TV. However, we used the imaging capability of LRIS to
obtain a 60-second I-band frame which, combined with Hawkins et al's position and
the Digital Sky Survey scan, allowed unambiguous identification of the target
(figure 1). We then offset the telescope from a brighter star, placing D04 on
the 1-arcsecond slit for spectroscopic observation. \\
Spectroscopy was undertaken using a 400 l/mm grating, blazed at 8500\AA,
with a central wavelength of 8000 \AA. This provides coverage from $\lambda \sim 6200 \AA$\
to $\sim 9800 \AA$\ at a dispersion of 1.85 \AA\ pix$^{-1}$ and a resolution of
4.5 pixels. Wavelength calibration is provided by neon-argon arclamp exposures, taken
immediately after the stellar integrations. The data were bias-subtracted and
flatfielded, and the spectra extracted using standard IRAF software. \\
We obtained a single 1800-second exposure of D04, which we flux-calibrated
using an observation of the white dwarf standard G24-9 (Filippenko \& Greenstein, 1983).
Seeing was $\sim1$ arcsecond, even at an altitude of 25$^o$. However, the slit was
not aligned with the parallactic angle, and the overall shape of the D04 spectrum is
likely to be affected by differential chromatic refraction ($\sim0.6$ arcseconds between
6500 and 9500\AA\ at sec(z)=2.3). Despite the
faint apparent magnitude and the low altitude of the target, the extracted
spectrum has a signal-to-noise of at least 15 for $\lambda > 7300 \AA$.
\section {Discussion and conclusions}
Figure 2 plots the flux-calibrated spectrum of D04. The object is clearly of
spectral type M, with TiO bandheads at $\lambda\lambda 7666, 8432$ and 8859 \AA,
as well as strong absorption (EW $\sim 11.6$\AA) due to the sodium doublet at $\lambda 8183/8195$\AA.
Hawkins et al
suggested that the object might be metal-poor and/or subject to unusual dust
obscuration. However, the strength of the TiO absorption rules out the
possibility that D04 is a late-type subdwarf, comparable to LHS 1742a (Gizis, 1997).
Similarly, a comparison with the spectral standard M-dwarfs defined by
Kirkpatrick et al (1991, 1993) shows no evidence for unusual absorption, which might
be attributed to excessive dust formation.
The presence of significant VO absorption at $\lambda\lambda 7334$ and 7850 \AA\ indicates
that the spectral type is later than M5, and a visual comparison with the
Kirkpatrick et al standard sequence leads to classification as between M7 and
M8. A spectrum of the well-known M8 dwarf VB10 is plotted in figure 2 for comparison.
Extrapolating the results derived by Leggett et al (1996), D04 is likely to have
T$_{eff} \approx 2600$K. \\
VB8 (Gl 644C) is the best-calibrated M7 standard in the Kirkpatrick et al system.
Allowing for the uncertainties in Hawkins et al's photographic photometry, D04
has an (R-I)$_C$ colour (2.63 mag.) consistent with that of VB8 (2.41 mag. - Leggett, 1992),
while the (I-K) colours of the two stars are nearly identical at 3.7 magnitudes. However,
VB8 has an absolute magnitude at 2.2$\mu m$ of M$_K = 9.76$, while Hawkins
et al deduce M$_K = 12.24$ for D04. Given the similarity in spectral types
and optical/IR colours, as well as the absence of evidence for any chemical
peculiarities in D04, it is reasonable to assume that the two objects have similar
effective temperatures and similar bolometric corrections. In that case,
$$ \Delta L \qquad \propto \quad \Delta R^2 $$
That is, the difference in luminosity of 2.5 magnitudes inferred by Hawkins et al
implies that D04 has a radius which is three times smaller than that of VB8. \\
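The step from the 2.5 magnitude deficit to the factor-of-three radius ratio is quick arithmetic; the following added check (using only the values quoted in the text above) makes it explicit.

```python
import math

delta_m = 2.5                        # magnitude deficit below the main sequence
lum_ratio = 10 ** (delta_m / 2.5)    # luminosity ratio L(VB8)/L(D04) = 10
radius_ratio = math.sqrt(lum_ratio)  # since Delta L ~ Delta R^2 at fixed T_eff
r_vb8 = 0.11                         # Leggett et al radius for GJ 1111/VB8, in R_sun
r_d04 = r_vb8 / radius_ratio         # ~0.035 R_sun, about a third of Jupiter's 0.119
```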
Leggett et al (1996) have combined optical and infrared spectroscopy and
photometry with improved model atmospheres to derive effective temperatures,
luminosities and radii for a small number of M dwarfs. The lowest luminosity (and
lowest temperature) star in their sample is GJ 1111, spectral type M6.5, M$_K \sim 9.46$, for which
they estimate a radius of $0.8 \times 10^8$ metres. This corresponds to 0.11 R$_\odot$, or slightly
less than the radius of Jupiter (0.119 R$_\odot$). Assuming a similar radius
for the slightly later-type VB8, the luminosity deduced by Hawkins et al for D04
leads us to infer a radius of $\sim 0.035 R_\odot$, or one-third that of Jupiter. \\
This result is clearly at odds with predictions based on interior models of low-mass
stars and brown dwarfs. As the mass decreases towards 0.1 M$_\odot$, theoretical models
predict that the radius also decreases to close to 0.12 R$_\odot$ (Burrows \& Liebert,
1993, figure 1). However, electron degeneracy takes over as the main source of
pressure support in lower-mass objects, and, as a result, the radius is
predicted to vary by no more than $\sim 30\%$ as the mass decreases to one
Jupiter mass (Burrows et al, 1997). Moreover, substellar-mass objects ($M < 0.075 M_\odot$)
are predicted to have radii {\sl exceeding} that of Jupiter at effective
temperatures of 2600K. \\
Given that there is no evidence for unusual atmospheric opacities in D04, and
that the deduced radius is in strong contradiction with a basic premise of
stellar structure, an alternative explanation must be found for
the faint absolute magnitudes deduced by Hawkins et al. The simplest is
that the trigonometric parallax derived by Hawkins et al for at least D04
(and possibly D07 and D12) overestimates the true value. Each star was
observed at only three epochs, leading to astrometric solutions which
are poorly constrained against systematic errors (cf. Pinsonneault et al's
(1998) comments on the Hipparcos Pleiades astrometry). Moreover, there is
significant dispersion amongst the individual astrometric measurements at
a given epoch for each of the three faint (I$\sim 19.4$) candidate brown
dwarfs (Hawkins et al, figure 6). Finally, Tinney (priv. comm.) points out that
the differential chromatic refraction corrections are constrained poorly,
raising the possibility of systematic errors in the final astrometric
solution. \\
Further astrometry of these objects is clearly desirable, but for the present,
we favour interpreting the current data in an alternative manner to the solution
espoused by Hawkins et al. We identify D04, D07 and D12
as M7/M8 main-sequence disk dwarfs, lying at distances of $\sim 150$ parsecs,
rather than as highly-unusual brown dwarfs at distances of $\sim 50$ parsecs.
\subsection*{Acknowledgements}
I would like to thank Greg Wirth and Gary Puniwai for assistance with
the observations. The Keck Observatory
is operated by the California Association for Research in Astronomy, and
was made possible by generous grants from the W. M. Keck Foundation.
\section{Introduction}
It is well known that the transport properties of a system are directly related
to the formation of localized states in the system. Localized states
appear due to the presence of impurities or disorder (which break the
translational symmetry) in the system \cite{econ}.
There have been studies on the
formation of localized states due to linear impurities in various systems
\cite{econ}.
On the other hand, only a few have looked into the formation of localized states
due to nonlinear impurities. The discrete nonlinear Schr\"odinger equation,
used to study the formation of stationary localized (SL) states
\cite{mol1,mol2,mol3,hui1,hui2,acev,wein,bik2,bik3,kund,bik1,ghos} is
given by
\begin{equation}
i\frac{dC_n}{dt}=\epsilon_n C_n + V(C_{n+1} + C_{n-1})
- \chi_n |C_n|^{\sigma} C_n,
\end{equation}
where $C_n$ is the probability amplitude of the particle (exciton) to be
at the site $n$, $\epsilon_n$ and $\chi_n$ are the static site energy
and the nonlinear strength at site $n$ respectively and $V$ is the
nearest neighbor hopping element. The nonlinear term $|C_n|^\sigma C_n$
arises due to the interaction of the exciton with the lattice
vibrations \cite{ken1,ken2}. The above eq. (1) has been used to study
the formation of SL states in one dimensional chain as well as in Cayley
tree with single and dimeric nonlinear impurities
\cite{mol1,mol2,mol3,hui1,hui2,bik2,bik3,kund}. For the case of a
perfectly nonlinear chain where $\chi_n=\chi$, it has been shown that
SL states are possible even though the translational symmetry of
the system is preserved \cite{acev,wein,kund,bik1,ghos}.
These results were remarkably different when
compared with that of the corresponding systems with linear impurities.
The equation (1) has been derived with the assumption
that the lattice oscillators in the system are local and oscillate
independently. A natural question to ask is, what happens to the formation of
SL states when the oscillators in the lattice are coupled to their nearest
neighbors. In this case the discrete nonlinear Schr\"odinger
equation takes the form
\begin{equation}
i\frac{dC_n}{dt}=\epsilon_n C_n + V(C_{n+1} + C_{n-1}) - \chi_n (|C_{n+1}|^2
+|C_{n-1}|^2 - 2|C_n|^2)C_n
\end{equation}
where $C_n$, $\epsilon_n$, $V$ and $\chi_n$ carry the same meanings as
in eq. (1). We notice that eq. (2) has more nonlinear terms compared
to eq. (1). To the best of our knowledge, this equation has not been
used to study the formation of SL states even though eq. (2) is
more important in condensed matter physics. Our intention is to look
for the formation of SL states in one dimensional system due to the
presence of nonlinear impurities (as described by eq. (2)) and further
to compare the results with those obtained from eq. (1) and to see which
one has more impact in the formation of SL states.
The organization of the paper is as follows. In sec. II we discuss the
effect on the formation of SL states due to a single impurity ({\em i.e.}
$\chi_n=\chi(\delta_{n,0})$). In sec. III we consider the case of
dimeric impurity ({\em i.e.} $\chi_n = \chi (\delta_{n,0} + \delta_{n,1})$)
and in sec IV we consider the perfectly nonlinear chain. In
sec V we discuss about the stability of the SL states. Finally in sec. VI
we summarize our findings.
\section{Single Nonlinear Impurity}
Consider the system of a one dimensional chain with a nonlinear impurity
at the central site. The time evolution of an exciton in the system is
governed by eq.(2) with $\chi_n = \chi \delta_{n,0}$. The Hamiltonian
which can produce the equation of motion for the exciton in the system
is given by
\begin{equation}
H=\sum_n (C_n^\star C_{n+1} + C_n C_{n+1}^\star) - \frac{\chi}{2} [|C_1|^2
+ |C_{-1}|^2 - 2 |C_0|^2] |C_0|^2.
\end{equation}
As $\sum_n |C_n|^2$ is a constant of motion, we suitably renormalize it so that
$\sum_n |C_n|^2=1$. We call this the normalization condition. Therefore, $|C_n|^2$
can be treated as the probability for the exciton to be at site $n$. Since
we are interested in finding the stationary localized states, we consider
the ansatz
\begin{equation}
C_n=\phi_n exp(-iEt);~~~~~\phi_n=\phi_0 \eta^{|n|}
\end{equation}
where $0 < \eta <1$ and $\eta$ can be asymptotically defined as $\eta =
\frac{|E|-\sqrt{E^2-4}}{2}$. $E$ is the energy of the localized state which
appears outside the host band. Since in a one dimensional system, states
appearing outside the host band are exponentially localized, the ansatz
(in eq. (4)) is justified as can also be readily derived from the Greens
function analysis \cite{bik2,bik3}. Substituting the ansatz in the
normalization condition we get
\begin{equation}
|\phi_0|^2 = \frac{1-\eta^2}{1+\eta^2}.
\end{equation}
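Equation (5) follows from summing the geometric series $\sum_n \eta^{2|n|}$. The snippet below is an added consistency check, not part of the original text: it confirms numerically that the profile $\phi_n = \phi_0 \eta^{|n|}$ with $|\phi_0|^2 = (1-\eta^2)/(1+\eta^2)$ has unit norm.

```python
eta = 0.6                              # any representative value in (0, 1)
phi0_sq = (1 - eta**2) / (1 + eta**2)  # |phi_0|^2 from eq. (5)

# sum |phi_n|^2 over n = -N..N; the tail beyond N is negligible for eta < 1
N = 200
total = sum(phi0_sq * eta ** (2 * abs(n)) for n in range(-N, N + 1))
```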
Direct substitution for $\phi_0$, $\phi_n$ and hence $C_n$ in terms of
$\eta$ in eq. (3), yields an effective Hamiltonian,
\begin{equation}
H_{eff} = \frac{4\eta}{1+\eta^2} + \frac{\chi (1-\eta^2)^3}{(1+\eta^2)^2}.
\end{equation}
The fixed point solutions of the reduced dynamical system described by
$H_{eff}$ will give the values of $\eta$ (which correspond to
the localized state solutions) \cite{wein}. Note that
the effective Hamiltonian is a function of
only one dynamical variable, namely, $\eta$ as $\chi$ is constant.
Thus fixed point solutions are readily obtained from the condition
$\partial{H_{eff}} /\partial{\eta}$ = 0, {\em i.e.},
\begin{equation}
\frac{4}{\chi}=\frac{\eta (1-\eta^2) (10+2\eta^2)}{(1+\eta^2)} =f(\eta).
\end{equation}
Thus the different values of $\eta \in$ [0,1] satisfying the eq. (7) will
give the possible SL states for a given value of $\chi$. It is clear from
the expression for $f(\eta)$ that $f(\eta)\rightarrow 0$ as $\eta\rightarrow
0$ and $\eta \rightarrow 1$.
Therefore it is expected that $f(\eta)$ will have at least one maximum,
which is indeed the case as can be seen from Fig. (1) where $f(\eta)$
is plotted as a function of $\eta$.
Notice that there will be no graphical solution if $\frac{4}{\chi} >
f(\eta_{max})$, one solution if $\frac{4}{\chi}=f(\eta_{max})$ and two
solutions if $\frac{4}{\chi}<f(\eta_{max})$. Thus there is a critical
value of $\chi$, say, $\chi_{cr}$ below which no localized states are
possible and is given by
\begin{equation}
\chi_{cr}=\frac{4 (1+\eta_{max}^2)}{\eta_{max} (1-\eta_{max}^2) (10 + 2\eta_{max}^2)} = 1.2696.
\end{equation}
Thus for $\chi$=1.2696, we get one SL state and two for $\chi >$ 1.2696.
For a system described by eq. (1) (with $\sigma$=2), it has been shown in ref.\cite{mol2,bik2}
that the corresponding critical value for $\chi$ is 2. Also the maximum
number of SL states possible was one. Thus we see that the nonlinearity
arising in eq. (2) reduces the critical strength and produces a larger number
of SL states. Hence, eq. (2) is indeed more effective in the formation
of SL states.
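The critical strength quoted above is easy to confirm numerically. The sketch below is an added illustration, not part of the original paper: since eq. (7) requires $4/\chi \le f(\eta_{max})$ for solutions to exist, $\chi_{cr} = 4/f(\eta_{max}) \approx 1.2696$, which a simple grid search recovers.

```python
def f(eta):
    """Right-hand side of eq. (7): eta*(1-eta^2)*(10+2*eta^2)/(1+eta^2)."""
    return eta * (1 - eta**2) * (10 + 2 * eta**2) / (1 + eta**2)

# locate the maximum of f on (0, 1) by a fine grid search
etas = [k / 100000 for k in range(1, 100000)]
eta_max = max(etas, key=f)
chi_cr = 4.0 / f(eta_max)  # no localized state exists for chi below this value
```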
\section{Dimeric Nonlinear Impurity}
We consider the case where the one dimensional lattice has two nonlinear
impurities at site 0 and 1 respectively, {\em i.e.}, $\chi (\delta_{n,0}
+ \delta_{n,1})$. As in the case of single impurity, it is easily verified
that the Hamiltonian for the system is given by eq. (2) with $\chi$ as
defined above. For stationarity condition we assume that
$C_n = \phi_n exp(-iEt)$. Furthermore, for localized states we assume
the following form for $\phi_n$.
\begin{eqnarray}
\phi_n = [sgn(E) \eta]^{n-1} \phi_1 ; ~~~~~~~~~~~~~~n \ge 1 \nonumber \\
{\rm and} \nonumber \\
\phi_{-|n|} = [sgn(E) \eta]^{|n|} \phi_0 ; ~~~~~~~~~~~~n \le 0
\end{eqnarray}
with $\eta$ as defined earlier. The ansatz is justified as those
states which appear outside the host band are exponentially localized
(which can be derived exactly from the Green's function analysis
\cite{bik2}). Three different possibilities arise.
(i) $\phi_1 = \phi_0$ (symmetric case), (ii) $\phi_1 = -\phi_0$
(antisymmetric case) and (iii) $\phi_1 \ne \phi_0$ (asymmetric case).
It is possible to encompass all the different cases by introducing a
variable $\beta = \frac{\phi_0}{\phi_1}$. The value of $\beta$ is
confined between 1 and $-1$ if $|\phi_0| \le |\phi_1|$. Otherwise we invert
the definition of $\beta$. $\beta = 1$, $-1$, and $\beta \ne \pm 1$ correspond
to the symmetric, antisymmetric and the asymmetric state respectively.
Substituting the ansatz as well as the definition of $\beta$ in the
normalization condition, $\sum_{-\infty}^{\infty}|C_n|^2 = 1$ we get
\begin{equation}
|\phi_0|^2 = \frac{1-\eta^2}{1+\beta^2}
\end{equation}
and the reduced Hamiltonian
\begin{equation}
H_{eff} = 2 \beta \frac{1-\eta^2}{1+\beta^2} + 2 sgn(E) \eta -
\frac{\chi (1-\eta^2)^3}{2 (1+\beta^2)} .
\end{equation}
If $\beta = \pm 1$ we get,
\begin{equation}
H_{eff}^{\pm} = \mp 2\eta + 2 sgn(E) + 3 \chi_{\pm}
\eta \frac{(1-\eta^2)^2}{2}
\end{equation}
Here '+' sign corresponds to the symmetric case and '-' sign corresponds to
the antisymmetric case. The number of fixed point solutions of the reduced
dynamical system described by $H_{eff}$ gives the possible number of SL
states. The fixed point solutions satisfy the equation,
\begin{equation}
\frac{1}{\chi_{\pm}} = 3 \eta (1 \mp \eta) (1 \pm \eta^2).
\end{equation}
From eq.(13) it is clear that there exists two critical values of
$\chi$ namely, 0.7149 and 1.6525. There is no SL state for $\chi < 0.7149$,
one symmetric SL state at $\chi$ = 0.7149, two symmetric SL states for $0.7149
< \chi <1.6525$, two symmetric and one antisymmetric SL state at $\chi$ =
1.6525 and two symmetric and two antisymmetric SL states for $\chi > 1.6525$.
Now let us consider the asymmetric case where $\beta \ne \pm 1$. The
effective Hamiltonian is a function of two dynamical variables, namely,
$\beta$ and $\eta$. Therefore the fixed point solutions will obey the
equations given by
\begin{equation}
\frac{\partial H_{eff}}{\partial\eta} = 0 ~~{\rm and} ~~\frac{\partial
H_{eff}}{\partial{\beta}} = 0.
\end{equation}
After a little algebra we obtain the desired equation,
\begin{equation}
\frac{1}{\chi} = \frac{\beta (9 - 7 \beta^2 - \beta^4 - \beta^6)^2} {2
(1-\beta^2) (3-\beta^2)^4} = f(\beta).
\end{equation}
The function $f(\beta)$ monotonically increases with $\beta$ and it goes
to infinity as $\beta$ goes to 1. From this we can immediately see that
there always exists one SL state no matter how small $\chi$ may be.
Combining all the possible states we find that there is one SL state for
$\chi < 0.7149$, two at $\chi = 0.7149$, three for
$0.7149 < \chi < 1.6525$, four at $\chi = 1.6525$ and five
for $\chi > 1.6525$. Hence the maximum number of SL states is five.
We further note that the critical values of the nonlinear strength are lower
and the number of SL states is larger compared to the results
obtained from eq. (1) (with $\sigma$=2) \cite{bik3}. Thus it is again
confirmed that eq. (2)
is more effective in the formation of SL states compared to eq. (1).
\section{Fully Nonlinear Chain}
We now consider a perfectly nonlinear chain, {\em i.e.}, $\chi_n=\chi$. The
Hamiltonian for this system is given by eq.(2) with $\chi_n = \chi$.
Using the stationarity condition, we can
obtain the Hamiltonian in terms of $\phi_n$. In this case it is not possible
to find the exact
ansatz for the localized states, but there are a few rational choices.
For example, a single site peaked as well as inter-site peaked and
dipped solutions are possible. We will consider these cases
subsequently. Let us first consider the on-site peaked solution. Without
any loss of generality we can assume that the exciton profile is peaked at
the central site. Therefore, using the ansatz $\phi_n = \phi_0 \eta^{|n|}$
and the normalization condition we get the effective Hamiltonian,
\begin{equation}
H_{eff}=\frac{4\eta}{1+\eta^2} + \chi \frac{(1-\eta^2)^3}{(1+\eta^2)^3}.
\end{equation}
From the fixed points equation, $\partial{H_{eff}}/\partial{\eta}$ = 0,
we obtain
\begin{equation}
\frac{1}{\chi} = \frac{3\eta(1-\eta^2)}{(1+\eta^2)^2}.
\end{equation}
After analyzing this equation we find that there is a critical value of $\chi$
= 1.333 below which there is no SL state and
above it there are two states. At the critical value of $\chi$ there is
one state.
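The value $\chi = 1.333$ can be checked in the same way as for the single impurity. The sketch below is an added numerical illustration, not from the paper: maximizing the right-hand side of eq. (17) gives $1/\chi_{cr} = 3/4$ (attained at $\eta = \sqrt{2}-1$), i.e. $\chi_{cr} = 4/3$.

```python
def rhs(eta):
    """Right-hand side of eq. (17): 3*eta*(1-eta^2)/(1+eta^2)^2."""
    return 3 * eta * (1 - eta**2) / (1 + eta**2) ** 2

# grid search for the maximum on (0, 1)
etas = [k / 100000 for k in range(1, 100000)]
eta_star = max(etas, key=rhs)
chi_cr = 1.0 / rhs(eta_star)  # critical strength for the on-site peaked state
```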
For the inter-site peaked and dipped solutions we use the ansatz
of the dimeric nonlinear impurity. Carrying out the calculation
involved, we obtain the effective Hamiltonian of the reduced dynamical
system to be
\begin{equation}
H_{eff} = 2\beta \frac{1-\eta^2}{1+\beta^2} + 2 sgn(E) \eta -
\chi \frac{(1-\eta^2)^2}{(1+\beta^2)^2} [\beta^2 + \frac{\eta^2 + \beta^2
\eta^2 - 1-\beta^4}{1-\eta^4}]
\end{equation}
where $\beta$ is defined earlier.
We first consider the case
$\beta = \pm 1$. Substituting $\beta = \pm 1$ into the Hamiltonian
and from the fixed points equations we obtain
\begin{equation}
\frac{1}{\chi_{\pm}}= \frac{\eta (1-\eta^2)^2 (2+\eta^2)}{2 (sgn(E) \mp \eta)
(1+\eta^2)^2}.
\end{equation}
Here the '+' sign corresponds to the symmetric case and the '-' sign
to the antisymmetric case.
From eq. (19) it is clear that there will be two critical values of $\chi$
namely, $\chi_{cr}^+ = 2.4653$ and $\chi_{cr}^- = 5.9178$. There is no SL state
for $\chi < \chi_{cr}^+$, one SL state for $\chi = \chi_{cr}^+$, two SL
states for $\chi_{cr}^+ < \chi < \chi_{cr}^-$, three SL states at $\chi =
\chi_{cr}^-$ and four SL states for $\chi > \chi_{cr}^-$.
On the other hand for $\beta \ne 1$ we find that
$\beta \in [0,1]$ and $\eta \in [0,1]$ satisfy the
following equations.
\begin{eqnarray}
-4\beta\eta (1+\beta^2) (1+\eta^2)^2 + 2 sgn(E) (1+\beta^2)^2 (1+\eta^2)^2
+ 2 \chi \eta [-3 + \beta^2 -\beta^2 \eta^4] \nonumber \\
- \chi [2 \eta^2 - 4 \beta^2 \eta^2
-2 \beta^4 + \eta^4 - 2 \beta^2 \eta^6] = 0 \nonumber \\
2 (1+\eta^2) (1+\beta^2)^2 - 4 \beta^2 (1+\eta^2) (1+\beta^2) -
\chi [(1+\beta^2) (2 \beta - 2 \beta \eta^4 - 2 \beta \eta^2 - 4 \beta^3)]
\nonumber \\
+4 \chi \beta [\beta^2 - \beta^2 \eta^4 + \eta^2 + \beta^2 \eta^2 -1 -\beta^4]
=0
\end{eqnarray}
As it is not possible to decouple the equations, we have obtained
numerically the possible values of $\beta$ and $\eta$ for various values of
$\chi$. It is found that there always exists one SL state for any nonzero
value of $\chi$.
Now combining all the possibilities, we obtain the following result for
the fully nonlinear chain. There will be only one SL state for $\chi <
2.4653$, two for $\chi = 2.4653$, three for $2.4653 < \chi < 5.9178$,
four for $\chi = 5.9178$ and five for
$\chi > 5.9178$. Hence the maximum number of SL states is five.
We further note that SL states appear even if the system is perfect
(the translational symmetry is preserved). Therefore, we may call these
states {\it self-localized} states.
\section{Stability}
The stability of the SL states can be understood from a simple graphical
analysis. For this purpose, consider the case of single impurity with
$\chi = 1.3$ (for which two SL states appear). The fixed point equation
for the single impurity case with $\chi = 1.3$ is given by
\begin{equation}
G(\eta)= 1 - \frac{\chi}{4} \frac{\eta (1-\eta^2) (10 + 2 \eta^2)}{(1+\eta^2)} = 0.
\end{equation}
The flow diagram of the dynamical system described by the $H_{eff}$
given in eq. (6) is constructed in the following manner. We treat $G(\eta)$
as the velocity and $\eta$ as the coordinate of the dynamical system.
$G(\eta)$ is plotted as a function of $\eta$ in fig. (2). 'A' and 'B' are
the fixed points corresponding to the SL states. If $G(\eta) > 0$, the
flow of the dynamical variable is in right direction else it is in
left. The direction of flow is shown by arrows in
different regions. It is clear from the flow diagram that
'A' is a stable fixed point whereas 'B' is unstable.
Therefore, for the case of single impurity, one state is stable and the
other one is unstable.
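The flow-diagram argument can also be verified numerically. The sketch below is an added illustration, not part of the paper: taking $G(\eta) = 1 - (\chi/4)f(\eta)$ (eq. (7) rearranged, with $\chi = 1.3$), it locates both fixed points by bisection and classifies each by the sign of the "velocity" $G$ on either side.

```python
def f(eta):
    # right-hand side of eq. (7)
    return eta * (1 - eta**2) * (10 + 2 * eta**2) / (1 + eta**2)

def G(eta, chi=1.3):
    # "velocity" of the reduced flow; its zeros are the fixed points
    return 1.0 - (chi / 4.0) * f(eta)

def bisect(fun, a, b, tol=1e-12):
    """Simple bisection; assumes fun(a) and fun(b) have opposite signs."""
    fa = fun(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * fun(m) <= 0:
            b = m
        else:
            a, fa = m, fun(m)
    return 0.5 * (a + b)

eta_A = bisect(G, 0.01, 0.50)  # fixed point 'A' (smaller eta)
eta_B = bisect(G, 0.50, 0.99)  # fixed point 'B' (larger eta)

def stable(eta, h=1e-4):
    # stable if the flow points toward the fixed point from both sides
    return G(eta - h) > 0 > G(eta + h)
```

Running this reproduces the conclusion in the text: 'A' is stable and 'B' is unstable.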
The energy of the SL states as a function of $\chi$ is plotted in fig. (3).
Once again we confine to the single impurity case. It is clear that energy of
one state increases and that of the other decreases. (Note that the points
'A' and 'B' of fig.(3) gets mapped to points '$A^\prime$' and '$B^\prime$'
respectively.) Thus we conclude that the states in the upper branch of
the energy diagram are stable SL states and those of the lower branch are
unstable SL states. In other words, if the energy of SL state increases
with the increase of nonlinear strength, the state is stable otherwise,
unstable.
\section{Conclusion}
The DNLS equation given by eq. (2)
is used to study the formation of stationary localized states
in a one dimensional system with a single and a dimeric
nonlinear impurity. It is found
that the number of SL states is larger than the number of impurities in the
system. The maximum number of SL states due to a single nonlinear impurity is
two and that due to a dimeric nonlinear impurity is five. It is further
found that SL states may appear even in a perfectly nonlinear system.
Thus one may call these SL states as {\it self-localized}
states. It is also interesting to note that eq. (2) is more effective in
the formation of SL states compared to eq. (1). The stability of the SL
states is discussed
and the connection of the stability of a state with its energy variation
as a function of the nonlinear strength is presented. For a clearer
understanding of the effect of nonlinear impurities on the formation of SL
states, one needs to consider the presence of finite nonlinear clusters
in a linear host lattice. Investigation in this direction is in progress and
will be reported elsewhere.
\section{Acknowledgement}
The author acknowledges the help from S. Seshadri during the preparation
of the manuscript and the financial support from the Department of Science
and Technology, India.
\section{Introduction}
In studies of the dynamics of spatially homogeneous cosmological models
it is usual to choose a perfect fluid with linear equation of state to
describe the matter. The book \cite{wainwright97} provides an excellent
guide to the
subject. In view of the fact that this restriction is made so
frequently in the literature, it is natural to pose the question to what
extent the conclusions obtained would change if the matter model were chosen
differently. In \cite{rendall96} it was shown that in the case of
collisionless matter
described by the Vlasov equation significant changes can occur in comparison
with the case of a perfect fluid. More specifically, it was shown that a
solution of Bianchi type I exists whose qualitative behaviour near the
initial singularity is different from that of any spacetime of that Bianchi
type whose matter content is a fluid with a physically reasonable equation of
state, linear or nonlinear. In the following this analysis will be
generalized to show just how different models with collisionless matter
can be from models with perfect fluid having the same symmetry. Differences
are found in models of Bianchi type II (Theorem 4.2), Bianchi type III
(Theorem 5.2) and Kantowski-Sachs models (Theorem 5.1). These concern both
the initial singularity and phases of unlimited expansion. Perhaps the most
striking case is that of the initial singularity in the Bianchi type II
models, where we find persistent oscillatory behaviour near the singularity.
This is quite different from the known behaviour of type II perfect fluid
models.
Our results will also illuminate another matter. In \cite{lukash74} Lukash
and
Starobinski gave a heuristic analysis of a locally rotationally symmetric
(LRS) model of Bianchi type I with collisionless matter consisting of
massless particles. Their conclusion was that in the expanding direction
the model would isotropize so that at large times it would look like a
Friedmann-Robertson-Walker model. On the one hand we are able to prove
rigorously that the heuristic analysis of \cite{lukash74} gives the correct
result.
On the other hand we show that this result depends essentially on the
assumption of a symmetry of Bianchi type I. If this symmetry type is
replaced by Bianchi type II (keeping the LRS assumption and massless
collisionless particles) then the anisotropy tends to a constant non-zero
value at large times.
The cosmological models studied in this paper are LRS spatially homogeneous
spacetimes with matter described by the Vlasov equation for massless
particles. The reason for imposing the LRS condition is that it allows the
Vlasov equation to be solved explicitly so that the Einstein-Vlasov equations
reduce to a system of ordinary differential equations, albeit with
coefficients which are not explicitly known and depend on the chosen
initial data. The reason for choosing the particles to be massless is
that this allows a reduction of the system of ODE similar to that
carried out for perfect fluids with a linear equation of state by
Wainwright and Hsu \cite{wainwright89}. It has not proved possible to analyse
the global
behaviour of solutions to our system of ODE completely. However a number
of partial results have been obtained which show that there is considerable
variety in the asymptotic behaviour of solutions near an initial
singularity or during a phase of unlimited expansion. In particular,
the reflection symmetric LRS Bianchi type I solutions with massless particles
are analysed completely with respect to their asymptotic behaviour, thus
improving markedly on the results obtained on that class of spacetimes in
\cite{rendall96}.
The matter model used in the following will now be described. The matter
consists of particles of zero rest mass which propagate through
spacetime freely without collisions. Each particle is affected by the
others only by the gravitational field which they generate collectively.
The worldline of each particle is a null geodesic. Each geodesic has a
natural lift to the tangent bundle of spacetime. Thus the geodesic equation
defines a flow on the tangent bundle. By means of the metric this may if
desired be transported to the cotangent bundle and here it will be convenient
to do so. The subset of the cotangent bundle consisting of all covectors
obtained by lowering the index of future-pointing null vectors,
which will be denoted by $P$, is invariant under the flow and thus the flow
may be restricted to it. The basic matter field used to describe the
collisionless particles is a non-negative real-valued function $f$ on $P$
which represents the density of particles with given position and momentum
at a given time. Choosing appropriate coordinates $x^\alpha$ on spacetime
and letting $(x^\alpha,p_\alpha)$ be the corresponding coordinates on the
cotangent bundle, the manifold $P$ can be coordinatized by $(x^\alpha,p_a)$.
Here the convention is used that Greek and Roman indices run from $0$ to
$3$ and $1$ to $3$ respectively. We write $t$ for $x^0$ and it is assumed
that $t$ increases towards the future. The field equation for $f$, the Vlasov
equation, says geometrically that $f$ is constant along the geodesic flow.
In the coordinates just introduced its explicit form is:
\begin{equation}\label{vlasov}
\d f/\d t+(p^a/p^0)\d f/\d x^a+(\Gamma^\alpha_{b\gamma}
p_\alpha p^\gamma/p^0)\d f/\d p_b=0
\end{equation}
where $p^0$ is to be determined from $p^a$ by the relation
$g_{\alpha\beta}p^\alpha p^\beta=0$ and indices are raised and lowered
using the spacetime metric $g_{\alpha\beta}$ and its inverse. In order to
couple the Vlasov equation to the Einstein equation, it is necessary to
define the energy-momentum tensor. It is given by
\begin{equation}\label{energymomentum}
T_{\alpha\beta}=-\int fp_\alpha p_\beta |g|^{-1/2}/p_0 dp_1 dp_2 dp_3
\end{equation}
In fact for Bianchi models it is more useful to replace the coordinate
components of the momentum used in these equations by components in a
suitable frame. The only change in the equations is that the Christoffel
symbols in the Vlasov equation are replaced by the connection coefficients
in the given frame. For more information about the Vlasov equation in general
relativity the reader is referred to \cite{ehlers73} and \cite{rendall97a}.
Spatially homogeneous spacetimes fall into three broad classes, known as
Bianchi class A, Bianchi class B and Kantowski-Sachs
(see \cite{wainwright97}). Each of
the two Bianchi classes can be further divided into Bianchi types. A
spatially homogeneous spacetime in one of the Bianchi classes
is called locally rotationally symmetric
if it has, in addition to the three Killing vector fields needed for
spatial homogeneity, a fourth one. This can only happen for certain
symmetry types. In class A the Bianchi types which allow an LRS
special case are I, II, VII${}_0$, VIII and IX. In class B it is types
III, V and VII${}_h$ which allow this \cite{maartens85}. The Kantowski-Sachs
spacetimes
automatically have a fourth Killing vector. There exist solutions of the
Einstein-Vlasov equations with $k=-1$ Robertson-Walker symmetry and these
have, in particular, Bianchi type V and Bianchi type VII${}_h$ symmetry with
any non-zero $h$. We did not
attempt to ascertain whether there are other examples of solutions of these
Bianchi types with LRS symmetry, and these types are not considered further
in this paper. A spatially homogeneous solution
of the Einstein-Vlasov equations has by definition the property that both the
geometry and the phase space density of particles are invariant under
the group action defining the symmetry type. A similar remark applies
to an additional LRS symmetry. It would be nice if the invariance of
$f$ under the group in a Bianchi model could be expressed by the condition
that $f$ depends only on time and momentum
when expressed with respect to a left-invariant frame on the group defining
the symmetry. Unfortunately, as
discussed in \cite{maartens90}, this does not work in general. It does work
for all
LRS Bianchi models of class A and type III and for Kantowski-Sachs models
\cite{maartens85}. This is the reason why LRS models are relatively tractable.
In the
following we consider LRS models which are of Kantowski-Sachs type, or
of Bianchi type I, II, III, VII${}_0$, VIII or IX.
In the next section it is shown how in the class of spacetimes of
interest the Einstein-Vlasov equations with given initial data can be
reduced to a system of ordinary differential equations. In fact two
systems are needed. The first includes the solutions of types I, II, VII${}_0$,
VIII and IX while the second includes those of types I and III and the
Kantowski-Sachs models. Note that the solutions of type I are represented
in both systems and understanding the Bianchi I case is central to
analysing the general case. The analysis of the Bianchi I system is
carried out in the third section. This is then used in Sections 4 and 5 to
obtain results on the first and second systems of ODE respectively. In
the last section the results are summarized and their wider significance
is examined. An appendix collects together some results from the theory of
dynamical systems used in the body of the paper.
\section{Reduction to an ODE problem}
In a spacetime with Bianchi symmetry the metric can be written in the
form
\begin{equation}\label{bianchi}
ds^2=-dt^2+g_{ab}(t)\theta^a\otimes \theta^b
\end{equation}
where $\{\theta^a\}$ is a left-invariant coframe on the Lie group $G$ which
defines the symmetry. The particular Bianchi type is determined by the
structure constants of the Lie algebra of $G$. The extra symmetry which is
present in the LRS case implies that the metric $g_{ab}(t)$ is diagonal, with
two of the diagonal elements being equal \cite{maartens85}. Thus
(\ref{bianchi}) simplifies to
\begin{equation}\label{lrs}
ds^2=-dt^2+a^2(t)(\theta^1)^2+b^2(t)((\theta^2)^2+(\theta^3)^2)
\end{equation}
for two functions $a(t)$ and $b(t)$ of one variable. If $k^\alpha$ is any
Killing vector field then the function $p_\alpha k^\alpha$ on the cotangent
bundle is constant along geodesics and hence satisfies the Vlasov equation.
Any function of quantities of this type for different Killing vectors also
satisfies the Vlasov equation. The Killing vectors on a spacetime with
Bianchi symmetry include those defined by right-invariant vector fields on
the Lie group $G$ but the result of evaluating a left-invariant one-form on
one of these is not in general constant. Thus we cannot simply solve the
Vlasov equation by choosing an arbitrary function of the components $p_a$ with
respect to a left-invariant basis. However for the LRS spacetimes of Bianchi
class A or type III considered here a function of the form
$f(t,p_1,p_2,p_3)=f_0(p_1,p_2^2+p_3^2)$
does satisfy the Vlasov equation and in fact is the most general solution
with the full LRS symmetry \cite{maartens85}. Here $p_1$, $p_2$ and $p_3$ are
the components
of the momentum in the coframe $\{\theta^a\}$. Since $f$ does not depend
explicitly on time in this representation, the function $f_0$ can be
identified with the initial datum for the solution of the Vlasov equation at
a fixed time. A similar statement holds for Kantowski-Sachs spacetimes. The
metric can be written in the form (\ref{lrs}) where $\theta^1$ is invariant
under
the symmetry group and $\theta^2$ and $\theta^3$ make up any (locally defined)
orthonormal coframe on the two-sphere. The expression $p_2^2+p_3^2$ is not
changed by a change in orthonormal coframe and so it makes sense to consider
the above form of $f$ in terms of $f_0$ in Kantowski-Sachs spacetimes as well.
If $f$ is of this form it satisfies the Vlasov equation. Thus the Vlasov
equation has been solved explicitly in the class of spacetimes to be studied.
It remains to determine the form of the Einstein equations. In fact, one
further restriction will be imposed. The distribution function given above
is automatically an even function of $p_2$ and $p_3$. However it need not be
even in $p_1$. If it is even in $p_1$ we say, as in \cite{rendall96}, that
the solution
is reflection symmetric. Only reflection symmetric solutions will be
considered in the following. For convenience we say that a function of
$p_1$, $p_2$ and $p_3$ which depends only on $p_1$ and $p_2^2+p_3^2$ and
which is even in $p_1$ has special form.
If the Einstein equations are split as usual into constraints and evolution
equations then it turns out that in this class of spacetimes the momentum
constraint is automatically satisfied. Only the Hamiltonian constraint and
the evolution equations are left. The former is an algebraic relation
between $a$, $b$ and their time derivatives $da/dt$, $db/dt$. The latter
provide ordinary differential equations for the evolution of $a$ and $b$
which are second order in time. It will be convenient to write these
equations in terms of some alternative variables. Consider first the
mean curvature of the homogeneous hypersurfaces:
\begin{equation}\label{meancurv}
{\rm tr}k=-[a^{-1}da/dt+2b^{-1}db/dt]
\end{equation}
A new time coordinate $\tau$ can be defined by $\tau(t)=-\int_{t_0}^t
{\rm tr}k(t) dt$ for some arbitrary fixed time $t_0$. In the following a dot
over a quantity denotes its derivative with respect to $\tau$. Now define:
\begin{eqnarray}\label{dimensionless}
q&=&b/a, \nonumber\\
N_1&=&-\epsilon_1 (a/b^2)({\rm tr} k)^{-1}, \nonumber\\
N_2&=&-\epsilon_2 a^{-1}({\rm tr} k)^{-1}, \\
\Sigma_+&=&-3(b^{-1}db/dt)({\rm tr} k)^{-1}-1, \nonumber\\
B&=&-b^{-1}({\rm tr} k)^{-1}\nonumber
\end{eqnarray}
where $\epsilon_1$ and $\epsilon_2$ will be $-1$, $0$ or $1$, depending on
the symmetry type considered. The variables $N_1$, $N_2$ and $\Sigma_+$ are
closely related to the variables of the same names used by Wainwright and
Hsu \cite{wainwright89}. (Note that we adopt the conventions of
\cite{wainwright89} rather than those of \cite{wainwright97}, which differ
by a factor of three in some places.)
Two systems of ODE will now be considered, which between them are equivalent
to the evolution part of the Einstein-Vlasov equations for all the
relevant symmetry types.
The first system is:
\begin{eqnarray}\label{bianchiA}
\dot q&=&\Sigma_+ q \nonumber\\
\dot N_1&=&[-\f{1}{4} N_1(N_1-4N_2)
+\f{1}{3} (1-4\Sigma_++\Sigma_+^2)]N_1
\nonumber \\
\dot N_2&=&[-\f{1}{4} N_1(N_1-4N_2)+\f{1}{3}
(1+2\Sigma_++\Sigma_+^2)]N_2
\\
\dot \Sigma_+&=&\f{3}{2} \{\f{1}{2} N_1^2
+\f{1}{6} N_1(N_1-4N_2)(1-2\Sigma_+) \nonumber\\
&+&[-\f{1}{4} N_1(N_1-4N_2)
+\f{1}{3}(1-\Sigma_+^2)][\f{1}{3}(1-2\Sigma_+)-Q]\}\nonumber
\end{eqnarray}
Here $Q$ is defined to be
\begin{equation}\label{Q}
Q(q)=q^2\left[{
\int f_0(p_i)p_1^2(q^2 p_1^2+p_2^2+p_3^2)^{-1/2} dp_1dp_2dp_3}
\over
\int f_0(p_i)(q^2 p_1^2+p_2^2+p_3^2)^{1/2} dp_1dp_2dp_3
\right]
\end{equation}
where $f_0$ is a fixed smooth function of special form and compactly
supported on ${\bf R}^3$. The Hamiltonian constraint is
\begin{equation}\label{hamiltonian}
16\pi\rho/({\rm tr} k)^2=-\f{1}{2}N_1(N_1-4N_2)+\f{2}{3} (1-\Sigma_+^2)
\end{equation}
where $\rho$ is the energy density and to take account of the positivity of
$\rho$, only the region satisfying the inequality
\begin{equation}\label{physical}
-\f{1}{2}N_1(N_1-4N_2)+\f{2}{3} (1-\Sigma_+^2)\ge 0
\end{equation}
is considered. Define submanifolds of this region by the following
conditions:
\begin{eqnarray*}
&&S_1:\ \ \ N_1=N_2=0 \\
&&S_2:\ \ \ N_1\ne 0, N_2=0 \\
&&S_3:\ \ \ N_1=0, N_2\ne 0 \\
&&S_4:\ \ \ N_1\ne 0, N_2\ne 0, N_2=-q^2 N_1 \\
&&S_5:\ \ \ N_1\ne 0, N_2\ne 0, N_2=q^2 N_1
\end{eqnarray*}
The submanifolds $S_1$, $S_2$, $S_3$, $S_4$ and $S_5$
correspond to Bianchi types I, II, VII${}_0$, VIII and IX respectively.
To make the correspondence with spacetime quantities in these different
cases $(\epsilon_1,\epsilon_2)$ should be chosen to be $(0,0)$, $(1,0)$,
$(0,1)$, $(-1,1)$ and $(1,1)$ respectively. Note that if $q$ is replaced by
$\tilde q=q^{-1}$ in (\ref{bianchiA}) an almost identical system is obtained,
with the
sign in the first equation being reversed.
The second system is:
\begin{eqnarray}\label{surfacesymm}
\dot q&=&\Sigma_+ q \nonumber\\
\dot B&=&[\epsilon B^2+\f{1}{4}+\f{1}{12}(1-2\Sigma_+)^2]B \\
\dot\Sigma_+&=&\f{3}{2}\{-\f{2}{3}\epsilon B^2 (1-2\Sigma_+)
+[\epsilon B^2+\f{1}{3}
(1-\Sigma_+^2)][\f{1}{3}(1-2\Sigma_+)-Q]\}\nonumber
\end{eqnarray}
where $\epsilon$ belongs to the set $\{-1,0,1\}$. Only the region satisfying
the inequality
\begin{equation}\label{physicalsurf}
2\epsilon B^2+\f{2}{3} (1-\Sigma_+^2)\ge 0
\end{equation}
is considered. The cases $\epsilon=-1$, $\epsilon=0$ and $\epsilon=1$
correspond to Bianchi type III, Bianchi type I and Kantowski-Sachs
respectively. Note that the restriction of the system (\ref{bianchiA}) to
$S_1$ is
identical to the system consisting of the first and third equations of
(\ref{surfacesymm})
for $\epsilon=0$. This restricted system
will be referred to in the following as the Bianchi I system. It was
introduced in section 6 of \cite{rendall96} with slightly different variables.
If a solution of (\ref{bianchiA}) and a fixed $f_0$ are given, it is possible
to
construct a spacetime as follows. Suppose that $\tau=0$ is contained in the
domain of definition of the solution. Since the system is autonomous this is
no essential restriction. Choose a negative number $H_0$. Define
\begin{equation}\label{density}
\rho=(1/16\pi)H_0^2[-\f{1}{2}N_1(0)(N_1(0)-4N_2(0))
+\f{2}{3} (1-\Sigma_+^2(0))]
\end{equation}
Let
\begin{equation}\label{densityint}
I=\int f_0(p_i)[(q(0))^2 p_1^2+p_2^2+p_3^2]^{1/2}dp_1 dp_2 dp_3
\end{equation}
and define
\begin{eqnarray}\label{scalefactors}
a_0&=&\rho^{-1/4}I^{1/4}(q(0))^{-3/4} \nonumber\\
b_0&=&\rho^{-1/4}I^{1/4}(q(0))^{1/4}
\end{eqnarray}
In terms of these quantities we can define an initial metric by
\begin{equation}\label{initialmetric}
a_0^2(\theta^1)^2+b_0^2((\theta^2)^2+(\theta^3)^2)
\end{equation}
Similarly, we can define an initial second fundamental form by
\begin{equation}\label{sff}
-\f{1}{3}(1-2\Sigma_+(0))H_0a_0^2(\theta^1)^2
+\f{1}{3}(1+\Sigma_+(0))H_0b_0^2((\theta^2)^2+(\theta^3)^2)
\end{equation}
These data satisfy the constraints by construction.
Consider now the spacetime which evolves from these initial data. It is of the
form (\ref{bianchi}). For in a spacetime of the form (\ref{bianchi}) with a
fixed time-independent distribution function the Einstein-Vlasov system
reduces to a system of second order ODE whose solutions correspond to data
for $(a,b,da/dt,db/dt)$.
These data can be chosen so as to reproduce the data of interest for the
Einstein-Vlasov system by choosing $da/dt=\f{1}{3}(1-2\Sigma_+(0))H_0a$ and
$db/dt=-\f{1}{3}(1+\Sigma_+(0))H_0b$ for $t=t_0$. This spacetime defines a
solution
of (\ref{bianchiA}) via (\ref{dimensionless}). (Note that $t=t_0$ corresponds
to $\tau=0$.)
Thus the two solutions are identical. In this way a spacetime
has been constructed which gives rise to the solution of (\ref{bianchiA}) we
started with.
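The consistency of this construction is easy to check numerically. The
following sketch (with a hypothetical compactly supported $f_0$ of special
form; all numerical values are illustrative) builds $a_0$ and $b_0$ from
(\ref{scalefactors}) and verifies that the data reproduce $q=b_0/a_0$ and
that the kinetic energy density of the data, $\rho=I/(a_0b_0^3)$, which
follows from (\ref{energymomentum}) for the metric (\ref{lrs}), agrees with
the value (\ref{density}) fixed by the Hamiltonian constraint.

```python
import numpy as np

def f0(p1, s2):
    # Hypothetical compactly supported f0 of special form: a function of
    # p1^2 and s^2 = p2^2 + p3^2 only, hence even in p1.
    r2 = p1**2 + s2
    return np.where(r2 < 1.0, (1.0 - r2)**2, 0.0)

# Midpoint grid in p1 and the cylindrical radius s in the p2-p3 plane.
n = 300
p1 = np.linspace(-1.0, 1.0, n)
s = (np.arange(n) + 0.5) / n
P1, S = np.meshgrid(p1, s, indexing="ij")
dA = (p1[1] - p1[0]) / n                 # cell area dp1 ds

def I_of_q(q):
    # I = int f0 (q^2 p1^2 + p2^2 + p3^2)^{1/2} dp1 dp2 dp3 of (densityint);
    # the angular integral contributes the factor 2*pi.
    return 2.0 * np.pi * np.sum(
        f0(P1, S**2) * np.sqrt(q**2 * P1**2 + S**2) * S) * dA

# Sample point in the state space of (bianchiA) and a choice of H_0
# (illustrative values satisfying the inequality (physical)).
qv, Sig, N1, N2, H0 = 1.3, 0.2, 0.5, -0.4, -1.0
rho = H0**2 / (16.0 * np.pi) * (
    -0.5 * N1 * (N1 - 4.0 * N2) + (2.0 / 3.0) * (1.0 - Sig**2))
assert rho > 0

I = I_of_q(qv)
a0 = rho**-0.25 * I**0.25 * qv**-0.75    # eq. (scalefactors)
b0 = rho**-0.25 * I**0.25 * qv**0.25

assert np.isclose(b0 / a0, qv)           # the data reproduce q = b/a
# Kinetic energy density of the data, rho = I/(a0 b0^3), matches (density):
assert np.isclose(I / (a0 * b0**3), rho)
```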
This spacetime may be obtained more explicitly if desired. In order to do
this, first solve the equation:
\begin{equation}\label{meanevolution}
\d_\tau ({\rm tr} k)=-[-\f{1}{4}N_1(N_1-4N_2)+\f{1}{3}
(2+\Sigma_+^2)]{\rm tr} k
\end{equation}
with initial data $H_0$. Then $\rho$ can be obtained from the
Hamiltonian constraint (\ref{hamiltonian}). The definition of $\rho$ in terms
of $f_0$ can
then be combined with $q$ to give $a$ and $b$ as in (\ref{scalefactors}).
Finally $t$ can be
obtained from ${\rm tr} k$. All the considerations here in the case of
(\ref{bianchiA}) are
equally applicable in the case of (\ref{surfacesymm}). The analogue of
equation (\ref{meanevolution}) is
\begin{equation}\label{meanevolutionsurf}
\d_\tau ({\rm tr} k)=-[\epsilon B^2+\f{1}{3}(2+\Sigma_+^2)]{\rm tr} k
\end{equation}
Solutions of the Einstein equations with matter described by a perfect fluid
with a linear equation of state $p=(\gamma-1)\rho$ which belong to one of the
symmetry types studied in the case of collisionless matter in the following
can be described by equations very similar to (\ref{bianchiA}) and
(\ref{surfacesymm}). The similarity
is particularly great in the case $\gamma=\f{4}{3}$ (radiation fluid). In
that
case the only difference is that the function $Q(q)$ should be replaced by
the constant value $\f{1}{3}$. This leads to a decoupling of the first
equation in each system, so that it is possible to restrict attention to the
remaining equations when investigating the dynamics. (This last remark also
applies to the system obtained for other values of $\gamma$.) The equation for
$q$ can be integrated afterwards if desired.
In \cite{rendall96} it was proved that $Q(q)$ as defined in (\ref{Q}) tends to
zero as $q$
tends to zero and that if $Q(0)$ is defined to be zero the resulting
extension of $Q$ is $C^1$ with $Q'(0)=0$. This means in particular that the
dynamical system (\ref{bianchiA}) has a well-defined $C^1$ extension to $q=0$.
In a
similar way it can be shown that if a function $\tilde Q$ is defined by
$\tilde Q(\tilde q)=Q(q)$ then $\tilde Q$ can be extended in a $C^1$ manner
to $\tilde q=0$ in such a way that $\tilde Q(0)=1$ and $\tilde Q'(0)=0$. For
$1-\tilde Q=(\rho-T^1_1)/\rho=2T^2_2/\rho$ and this last expression is
$O(\tilde q^{4/3})$ as $\tilde q\to 0$ by Lemma 4.2 of \cite{rendall96}. By
using a coordinate $\hat q=q/(q+1)$ it is possible to map the system
(\ref{bianchiA})
with $q$ ranging from zero to infinity onto a region with $\hat q$ ranging
from zero to one. Moreover the system extends in a $C^1$ manner to the
boundary components $\hat q=0$ and $\hat q=1$. The coordinate $\hat q$
has been introduced purely to demonstrate that the system (\ref{bianchiA})
can be
smoothly compactified in the $q$-direction. For computations it is more
practical to use the local coordinates $q$ and $\tilde q$. In particular,
these considerations allow us to regard the Bianchi I system as being defined
on a compact set.
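The limiting behaviour of $Q$ just described can be illustrated numerically.
The sketch below (hypothetical $f_0$; a midpoint grid in $p_1$ and the
cylindrical radius $s=(p_2^2+p_3^2)^{1/2}$, the common angular factor $2\pi$
cancelling in the ratio) checks that $Q\to 0$ as $q\to 0$, that $Q\to 1$ as
$q\to\infty$, that $Q$ is increasing on the sampled range, and locates the
value $q_0$ with $Q(q_0)=\frac{1}{3}$ used in Section 3.

```python
import numpy as np

def f0(p1, s2):
    # Hypothetical compactly supported f0 of special form: a function of
    # p1^2 and s^2 = p2^2 + p3^2 only, hence even in p1.
    r2 = p1**2 + s2
    return np.where(r2 < 1.0, (1.0 - r2)**2, 0.0)

n = 400
p1 = np.linspace(-1.0, 1.0, n)
s = (np.arange(n) + 0.5) / n
P1, S = np.meshgrid(p1, s, indexing="ij")
W = f0(P1, S**2) * S                     # f0 times the measure s ds dp1

def Q(q):
    # Discrete version of eq. (Q); grid cell areas cancel in the ratio.
    E = np.sqrt(q**2 * P1**2 + S**2)
    return q**2 * np.sum(W * P1**2 / E) / np.sum(W * E)

qs = np.linspace(0.05, 20.0, 200)
Qs = np.array([Q(x) for x in qs])
assert Q(1e-3) < 1e-2                 # Q -> 0 as q -> 0
assert Q(1e3) > 0.99                  # Q -> 1 as q -> infinity
assert np.all(np.diff(Qs) > 0)        # monotonicity on the sampled range

# Locate q_0 with Q(q_0) = 1/3 by bisection.
lo, hi = 1e-3, 1e3
for _ in range(60):
    m = 0.5 * (lo + hi)
    if Q(m) < 1.0 / 3.0:
        lo = m
    else:
        hi = m
assert abs(Q(lo) - 1.0 / 3.0) < 1e-6
```

For the rotationally invariant sample $f_0$ above the bisection returns
$q_0\approx 1$, since for an isotropic $f_0$ the ratio in (\ref{Q}) equals
$\frac{1}{3}$ exactly at $q=1$; anisotropic choices of special form shift
$q_0$ away from $1$.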
The compactification of the system (\ref{bianchiA}) is defined on a region
with boundary.
Different parts of the boundary are given by $\hat q=0$, $\hat q=1$ and the
case of equality in (\ref{physical}). The complement of the boundary will be
called
the interior region in the following. A solution which lies in the interior
region corresponds to a smooth non-vacuum solution of the Einstein-Vlasov
equations. A solution which lies in the part of the boundary where
(\ref{physical})
becomes an equality corresponds to a solution of the vacuum Einstein
equations. A solution which lies in the part of the boundary given by
$\hat q=0$ or $\hat q=1$ corresponds to a distributional solution of the
Einstein-Vlasov equations, as will be explained in more detail below. The
system (\ref{surfacesymm}) can be compactified in a way very similar to that
taken in the case of (\ref{bianchiA}). The comments on the interpretation of
different types of solutions of the compactification of (\ref{bianchiA}) just
made also apply to the compactification of (\ref{surfacesymm}), with
(\ref{physical}) being replaced by (\ref{physicalsurf}).
Consider now the stationary points of the system (\ref{bianchiA}), or rather
of its
compactification. (This distinction will not always be made explicitly in
what follows.) In section 4 it will be shown that all stationary points where
$q$ has a finite non-zero value belong to the subset $S_1$ corresponding to
solutions of type I. In particular, they correspond to stationary points of
the Bianchi I system, which will be studied in detail in the next section.
\section{The Bianchi I system}
It turns out that the Bianchi I system plays a central role in the dynamics
of solutions of the systems (\ref{bianchiA}) and (\ref{surfacesymm}). In this
section the asymptotic
behaviour of solutions of this system is determined, both for $\tau\to-\infty$
(approach to the singularity) and for $\tau\to\infty$ (unlimited expansion).
The first step in analysing the Bianchi I system is to determine the
stationary points. This can be done using the fact, proved in
\cite{rendall96}, that
$Q$ is strictly monotone for $q>0$ so that there is a unique $q_0$ with
$Q(q_0)=\f{1}{3}$. With this information it is straightforward to show that
the coordinates of the stationary points in the $(q,\Sigma_+)$ plane are
$(q_0,0)$, $(0,-1)$, $(0,\f{1}{2})$, $(0,1)$, $(\infty,-1)$ and
$(\infty,1)$.
Here $q=\infty$ is to be interpreted as $\tilde q=0$ or $\hat q=1$. Call
these points $P_1,\ldots,P_6$ respectively (see figure 1).
\begin{figure}
\begin{center}
\includegraphics[width=8cm,height=5cm,angle=270]{fig1.eps}
\end{center}
\caption{The $(\hat{q},\Sigma_+)$ plane and the fixed points for Bianchi
type I.}
\label{Figure 1}
\end{figure}
The next step is to linearize the system about the stationary points. Recall
that a stationary point is called hyperbolic if none of its eigenvalues are
purely imaginary. In the following we call a stationary point degenerate if it
is not hyperbolic. The point $P_1$ is a hyperbolic sink while $P_4$ is a
hyperbolic source. The points $P_2$, $P_3$ and $P_6$ are hyperbolic saddles
while $P_5$ is degenerate, with one zero eigenvalue.
Before proceeding further, we state the main result of this section.
\vskip 10pt\noindent
{\bf Theorem 3.1} If a smooth non-vacuum reflection-symmetric LRS solution of
Bianchi type I of the Einstein-Vlasov equations for massless particles is
represented as a solution of (\ref{bianchiA}) with $N_1=N_2=0$ then for
$\tau\to\infty$
it converges to the point $P_1$. For $\tau\to -\infty$ either
\noindent
(i) it converges to $P_1$ and in that case it stays for all time at the point
$P_1$ or
\hfil\break\noindent
(ii) it converges to the point $P_3$ and it belongs to the unstable manifold
of $P_3$ or
\hfil\break\noindent
(iii) it converges to $P_4$.
\hfil\break\noindent
All of these cases occur, and (iii) is the generic case in the sense that it
occurs for an open dense set of initial data.
This will be proved in a series of lemmas. Terminology from the theory of
dynamical systems which may be unfamiliar to the reader is explained in
the appendix.
\noindent
{\bf Lemma 3.1} If a solution of the Bianchi I system in the interior
enters the region $\Sigma_+>1/2$ then for $\tau\to -\infty$ it belongs to
case (iii) of Theorem 3.1. A solution of the Bianchi I system in the interior
has no $\omega$-limit points with $\Sigma_+\ge 1/2$.
\noindent
{\bf Proof} A solution of the Bianchi I system satisfies $\dot\Sigma_+\le
-\f{1}{2}(1-\Sigma_+^2)Q$
when $\Sigma_+>1/2$ and so for any solution which enters the given region,
$\Sigma_+$ is nondecreasing towards the past and it is in the region for all
earlier times. If $\Sigma_+$ did not tend to $1$ as $\tau\to -\infty$ then
we would have $\dot\Sigma_+\le -C<0$ at early times, a contradiction. Once we
know that $\Sigma_+\to 1$ as $\tau\to -\infty$ it follows immediately that
$q\to 0$. Thus the solution converges to $P_4$. Consider now the forward
time direction. Since $\Sigma_+$ is positive, $q$ is increasing. This means
that $Q$ is increasing. The inequality $\dot\Sigma_+\le-CQ$ for a constant
$C>0$ then shows that the solution must leave the region of interest in
finite time, so that there can be no $\omega$-limit point with
$\Sigma_+\ge 1/2$.
\vskip 10pt\noindent
For convenience an interior solution which does not tend to $P_4$ as
$\tau\to -\infty$ will be called exceptional. Thus Lemma 3.1 says that an
exceptional solution cannot intersect the region $\Sigma_+>1/2$.
\noindent
{\bf Lemma 3.2} The $\alpha$-limit set of an exceptional solution
cannot intersect the boundary at any point except $P_3$. If it does
intersect the boundary at $P_3$ it belongs to case (ii) of Theorem 3.1
as $\tau\to -\infty$. The $\omega$-limit set of any interior solution cannot
intersect the boundary at all.
\noindent
{\bf Proof} Let $(q,\Sigma_+)$ be a point of the $\alpha$-limit set of an
exceptional solution which lies on the boundary. If $q=\infty$ then the whole
orbit passing through that point belongs to the $\alpha$-limit set. This
implies that the solution must intersect the region $\Sigma_+> 1/2$, a
contradiction. Thus in fact $q<\infty$. If $\Sigma_+=-1$ then all points
with $\Sigma_+=-1$ must be in the $\alpha$-limit set, in particular $P_5$.
By Lemma A2 of the appendix, it follows that a point of the centre manifold
of $P_5$ lies in the $\alpha$-limit set. However, this centre manifold is
given by $q=\infty$ and so we again obtain a contradiction. Hence
$\Sigma_+>-1$. If $q=0$ and $\Sigma_+<1/2$ then all points satisfying these
conditions must be in the $\alpha$-limit set, in particular $P_2$. But then
an application of Lemma A1 of the appendix leads to a contradiction. Thus no
point on the boundary other than $P_3$ is possible. A further application of
Lemma A1 shows that in this case the solution must lie on the unstable
manifold of $P_3$. In a similar way it is
possible to show that if any point of the boundary belonged to the
$\omega$-limit set of an interior solution then some point with
$\Sigma_+>1/2$ would do so. However we know from Lemma 3.1 that this is
impossible.
\noindent
{\bf Proof of Theorem 3.1} First the Poincar\'e-Bendixson theorem will be
applied to the restriction of the Bianchi I system to the interior with
the point $P_1$ removed. In general the $\alpha$- and $\omega$-limit sets of
an orbit of a dynamical system can be very complicated, but in two dimensions
(and the Bianchi I system is two-dimensional) things are a lot simpler.
Complicated situations are still possible and these play an important role
in Hilbert's sixteenth problem (see e.g. \cite{arnold88}, p. 104). However
many
pathologies are ruled out by the Poincar\'e-Bendixson theorem, which is
stated in the Appendix (Theorem A2).
Given an interior solution, suppose that $P_1$ does not belong
to the $\omega$-limit set. By Lemma 3.2 no point of the boundary belongs to
the $\omega$-limit set either. Since $P_1$ is a hyperbolic sink
it follows that there must be a neighbourhood of $P_1$ which does not
intersect the $\omega$-limit set. Thus the solution remains in a compact set
of the interior with the point $P_1$ removed as $\tau\to\infty$. Then Theorem
A2 implies the existence of a non-stationary periodic orbit of the Bianchi I
system. In fact the existence of periodic solutions of the Bianchi I system
can be ruled out by the presence of a Dulac function. (For a discussion of
this concept see \cite{wainwright97}.) Define a function $F(q)$ by
\begin{equation}\label{primitive}
F(q)=\int f_0(p_i)(q^2 p_1^2+p_2^2+p_3^2)^{1/2} dp_1dp_2dp_3
\end{equation}
Then $Q=(q/F)F'$. For $q>0$ and $|\Sigma_+|<1$ let
\begin{equation}\label{dulac}
G(q,\Sigma_+)=q^{-1}F^{1/2}(1-\Sigma_+^2)^{-3/2}
\end{equation}
and denote the vector field defining the Bianchi I system by $X$. Then
${\rm div}(GX)$ is negative. In fact it is a constant multiple of
$q^{-1}F^{1/2}(1-\Sigma_+^2)^{-3/2}(2-\Sigma_+)$. This means that
$G$ is a Dulac function. It follows that the Bianchi I system has no periodic
solutions. It can be concluded that $P_1$ does lie in the $\omega$-limit set.
But since $P_1$ is a hyperbolic sink, this implies, via the Hartman-Grobman
theorem (cf. Theorem A1), that the $\omega$-limit set consists of $P_1$
alone, which proves the first part of the theorem.
To prove the remainder of the theorem we can assume without loss of generality
that the solution is exceptional and that it does not lie on the unstable
manifold of $P_3$. If $P_1$ were not in the $\alpha$-limit set then we
would get a contradiction by the Poincar\'e-Bendixson theorem and the
absence of periodic orbits. Hence $P_1$ must belong to the $\alpha$-limit
set, and since $P_1$ is a hyperbolic sink, the only possibility left is
case (i) of the theorem.
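The divergence computation underlying the Dulac argument can be spot-checked
by computer algebra. In the sketch below the true moment integral $F$ is
replaced by stand-in choices (it depends on the unspecified $f_0$); for each
choice the terms involving $F'$ cancel and ${\rm div}(GX)$ reduces to
$\frac{1}{6}(\Sigma_+-2)\,q^{-1}F^{1/2}(1-\Sigma_+^2)^{-3/2}$, which is
negative on the interior $q>0$, $|\Sigma_+|<1$.

```python
import sympy as sp

q = sp.Symbol('q', positive=True)
S = sp.Symbol('Sigma', real=True)

def div_GX(F):
    # Bianchi I vector field X (restriction of (bianchiA) to N_1 = N_2 = 0)
    Qfun = q * sp.diff(F, q) / F                  # Q = q F'/F
    Xq = S * q
    XS = (sp.Rational(1, 2) * (1 - S**2)
          * (sp.Rational(1, 3) * (1 - 2 * S) - Qfun))
    # candidate Dulac function G of eq. (dulac)
    G = sp.sqrt(F) / (q * (1 - S**2)**sp.Rational(3, 2))
    return sp.diff(G * Xq, q) + sp.diff(G * XS, S)

# The F'-dependent terms cancel for any F; here sympy confirms the reduced
# form of div(G X) for two sample stand-ins.
for F in [(1 + q**2)**2, sp.exp(q)]:
    target = (S - 2) * sp.sqrt(F) / (6 * q * (1 - S**2)**sp.Rational(3, 2))
    assert sp.simplify(div_GX(F) - target) == 0
```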
\vskip 10pt\noindent
The conclusion of this theorem can be summarized in words as follows. All
solutions isotropize in the expanding direction. The initial singularity
is generically a cigar singularity but there are exceptional cases where
it is a barrel or point singularity. (For this terminology see
\cite{wainwright97}, p. 30.)
Note for comparison that if the Vlasov equation is replaced by the Euler
equation for a fluid satisfying a physically reasonable equation of state
then there are no barrel singularities and all solutions which are not
isotropic have cigar or pancake singularities (see \cite{rendall96}). The
pancake singularities are as common as the cigar singularities. In
particular this means that for fluid solutions of Bianchi type I cigar
singularities are {\it not} generic. All solutions
isotropize in the expanding direction. Note that the \lq reasonable\rq\
equations of state include those of the form $p=k\rho$ with $0\le k<1$. The
solutions of the Einstein-Vlasov equations approach an isotropic fluid
solution with equation of state $p=\f{1}{3}\rho$ in the sense that the
tracefree
part of the spatial projection $T_{ij}$ of the energy-momentum tensor divided
by the energy density $\rho$ approaches zero, while ${\rm tr} T/\rho=1$. The
latter relation is always true for kinetic theory with massless particles
and for a radiation fluid (equation of state $p=\f{1}{3}\rho$).
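The qualitative picture of Theorem 3.1 can also be reproduced by direct
numerical integration. The sketch below uses a stand-in closure
$Q(q)=q^2/(1+q^2)$, which shares the properties of $Q$ established in
\cite{rendall96} (it corresponds to $F(q)=(1+q^2)^{1/2}$), together with a
simple fixed-step Runge-Kutta scheme: from generic interior data the
solution approaches $P_1=(q_0,0)$ as $\tau\to\infty$ and the cigar point
$P_4$ as $\tau\to-\infty$.

```python
import numpy as np

def Q(q):
    # Stand-in for the moment ratio (Q): Q(0) = 0, Q'(0) = 0, strictly
    # increasing, Q -> 1 as q -> infinity (hypothetical closure).
    return q * q / (1.0 + q * q)

def rhs(y):
    q, s = y                              # s stands for Sigma_+
    return np.array([s * q,
                     0.5 * (1.0 - s * s)
                     * ((1.0 - 2.0 * s) / 3.0 - Q(q))])

def rk4(y, h, steps):
    # classical fixed-step fourth-order Runge-Kutta
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y

y0 = np.array([2.0, -0.5])                # generic interior data (q, Sigma_+)
q_f, s_f = rk4(y0, 0.01, 20000)           # tau -> +infinity
q_b, s_b = rk4(y0, -0.01, 20000)          # tau -> -infinity

# Isotropization in the expanding direction: convergence to P_1 = (q_0, 0).
assert abs(s_f) < 1e-6 and abs(Q(q_f) - 1.0 / 3.0) < 1e-6
# Cigar singularity in the contracting direction: Sigma_+ -> 1, q -> 0.
assert abs(s_b - 1.0) < 1e-6 and q_b < 1e-6
```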
\section{Other class A models}
This section is concerned with the models of class A, as described by the
system (\ref{bianchiA}). Only limited statements will be made about types VIII
and IX.
Even in the a priori simpler case of a perfect fluid with linear equation of
state it is difficult to analyse LRS models of type VIII and IX. (For
information on what is known about that case, see
\cite{uggla90}, \cite{uggla91} and section 8.5
of \cite{wainwright97}.) A major difficulty is that in these cases

the domain of definition of the dynamical system is non-compact. This
allows the possibility that there may be anomalous solutions similar to
those encountered in \cite{rendall97b}. Type I was analysed in the previous
section
and we will see that the analysis of type VII${}_0$ can be reduced to that
case in a relatively straightforward way. The most interesting results are
obtained for Bianchi type II.
We start with a theorem on Bianchi type VII${}_0$, which is a close analogue
of Theorem 3.1.
\vskip 10pt\noindent
{\bf Theorem 4.1} If a smooth non-vacuum reflection symmetric LRS solution of
Bianchi type VII${}_0$ of the Einstein-Vlasov equations for massless particles
is represented as a solution of (\ref{bianchiA}) with $N_1=0$ then for
$\tau\to\infty$
the pair $(q,\Sigma_+)$ converges to $(q_0,0)$ while $N_2$ increases without
limit. $N_2$ tends to zero as $\tau\to -\infty$ while the pair $(q,\Sigma_+)$
either
\noindent
(i) converges to $P_1$ and in that case it stays for all time at the point
$P_1$ or
\hfil\break\noindent
(ii) converges to the point $P_3$ and belongs to the unstable manifold
of $P_3$ or
\hfil\break\noindent
(iii) converges to $P_4$.
\hfil\break\noindent
All of these cases occur, and (iii) is the generic case in the sense that it
occurs for an open dense set of initial data.
\noindent
{\bf Proof} When $N_1=0$ the third equation in (\ref{bianchiA}) becomes
\begin{equation}\label{ntwo}
\dot N_2=\f{1}{3} (1+\Sigma_+)^2 N_2
\end{equation}
while the equations for $\Sigma_+$ and $q$ do not involve $N_2$. The latter
equations form a subsystem which is identical to the equations for Bianchi
type I, so that the situation is again as in figure 1. The qualitative
behaviour of their solutions has been analysed in
Theorem 3.1. All that remains to be done is then to put that information
into equation (\ref{ntwo}) and read off the behaviour of $N_2$. The
expression $(1+\Sigma_+)^2$ is strictly positive for a non-vacuum solution
of type VII${}_0$, due to (\ref{physical}). Moreover it is bounded by four.
Thus the
solution has the property that the sign of $N_2$ remains constant and the
solution exists globally in $\tau$. It is also clear that $N_2\to\infty$
as $\tau\to\infty$ and that $N_2\to 0$ as $\tau\to -\infty$.
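Explicitly, since (\ref{ntwo}) is linear in $N_2$ it integrates to
\[
N_2(\tau)=N_2(0)\exp\Big(\f{1}{3}\int_0^\tau \big(1+\Sigma_+(s)\big)^2\,ds\Big).
\]
The integrand lies in the interval $(0,4]$ for a non-vacuum solution, so the
sign of $N_2$ is preserved. In the expanding direction $\Sigma_+\to 0$ by
Theorem 3.1, so the integrand tends to one and $N_2$ grows like $e^{\tau/3}$.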
\vskip 10pt\noindent
There is a simple explanation for the close relation between the Bianchi
I and Bianchi VII${}_0$ solutions. They are in fact the same spacetimes
parametrized in two different ways. The full four-dimensional isometry group
has a subgroup of Bianchi type I and a one-parameter family of subgroups of
Bianchi type VII${}_0$.
Next we turn to the solutions of type II. It will be shown that the
stationary points of (the compactification of) (\ref{bianchiA}) which lie in
the closure
of $S_2$ are the points $P_1,\ldots,P_6$ which we know already together with
one additional point $P_7$, which has coordinates $(\infty,\f{2\sqrt2}{5},
\f{1}{5})$ (see figure 2). The corresponding distributional solution of the
Einstein-Vlasov equations will be discussed in detail below.
\begin{figure}
\begin{center}
\includegraphics[width=10cm,height=12cm,angle=270]{fig2.eps}
\end{center}
\caption{The $(\hat{q},\Sigma_+,N_1)$ space and the fixed points for Bianchi
type II; the lower two figures show the phase portraits on the
end-faces $\hat{q}=0,1$}
\label{Figure 2}
\end{figure}
\vskip 10pt\noindent
{\bf Theorem 4.2} If a smooth non-vacuum reflection symmetric LRS solution of
Bianchi type II of the Einstein-Vlasov equations for massless particles is
represented as a solution of (\ref{bianchiA}) with $N_2=0$ then for
$\tau\to\infty$ the
solution converges to $P_7$. For $\tau\to -\infty$ either:
\noindent
(i) the solution converges to $P_1$ or
\hfil\break\noindent
(ii) the $\alpha$-limit set of the solution consists of the points $P_2$,
$P_4$, $P_5$ and $P_6$ together with the orbits connecting $P_2$ to $P_5$,
$P_5$ to $P_6$ and $P_6$ to $P_4$ in the set $N_1=0$ and the stable manifold
of $P_4$ which connects $P_4$ with $P_2$ via the vacuum boundary. In
particular $\liminf_{\tau\to-\infty}\Sigma_+=-1$,
$\limsup_{\tau\to-\infty}\Sigma_+=+1$,
$\liminf_{\tau\to -\infty}q(\tau)=-\infty$ and
$\limsup_{\tau\to -\infty}q(\tau)=\infty$.
\noindent
Both of these cases occur and (ii) is the generic case.
\vskip 10pt\noindent
This theorem shows that while models of Bianchi type II have simple behaviour
in the expanding phase, all tending to a single attractor, the behaviour
near the initial singularity is in general oscillatory, and quite different
from the Bianchi type I case. Note also that the models of type II do not
isotropize as $\tau\to\infty$, which is another important difference from the
type I models.
A first important step in proving Theorem 4.2 is to use the identity
\begin{equation}\label{liapunov}
\d/\d\tau (q^{4/3}N_1)=q^{4/3}N_1[-\f{1}{4}N_1(N_1-4N_2)
+\f{1}{3} (1-\Sigma_+^2)
+\f{2}{3}\Sigma_+^2]
\end{equation}
which holds for any solution of (\ref{bianchiA}). This is done in the
following lemma.
\noindent
{\bf Lemma 4.1} For any solution in the interior of $S_2$ the following
statements hold. As $\tau\to\infty$, $q$ tends to $\infty$. Either the
$\alpha$-limit set is contained in the set $q=0$ or it contains one of the
points $P_1,\ldots,P_6$.
\noindent
{\bf Proof} For a non-vacuum solution strict inequality holds in
(\ref{physical}) and
hence $q^{4/3}N_1$ is strictly increasing where it is non-zero. This means
that as long as $q$ is finite and the Bianchi type is II this quantity is
always increasing. Since $S_2$ is compact, solutions exist globally in $\tau$.
As $\tau$ tends to plus or minus infinity the solution must go to the
boundary of $S_2$. Equation (\ref{liapunov}) shows that if $q^{4/3}N_1$ tends
to a
finite non-zero limit in either time direction then $\Sigma_+^2$ is integrable
on a half-infinite time interval. The derivative of this quantity is
bounded and these two facts together imply that it must tend to zero
in the limit. The same argument applies to the quantity appearing in
(\ref{physical})
and so it must also tend to zero in the limit. Under these conditions
$N_1\to \f{2}{\sqrt 3}$ and $\dot\Sigma_+\to \f{4}{3}$, a contradiction.
It can be concluded that $\lim_{\tau\to\infty}(q^{4/3}N_1)(\tau)=\infty$ and
$\lim_{\tau\to -\infty}(q^{4/3}N_1)(\tau)=0$. {}From the first of these
statements and the boundedness of $N_1$ it follows that $q$ tends to $\infty$
as $\tau$ tends to $\infty$. The fact that $q^{4/3}N_1$ tends to zero in the
contracting direction implies that the $\alpha$-limit set is contained in the
union of the sets $q=0$ and $N_1=0$. Suppose the $\alpha$-limit set contains
some point for which $q\ne 0$. This must belong to the Bianchi I set. Thus
the $\alpha$-limit set contains a solution of Bianchi type I. Using Theorem
3.1, we conclude that the $\alpha$-limit set contains one of the points $P_1,
\ldots,P_6$.
\vskip 10pt
The next lemma gives information about the nature of the stationary points
on $S_2$. We already know from Lemma 4.1 that these stationary points can
only occur for $N_1=0$, $q=0$ or $q=\infty$. The stationary points $P_1,
\dots,P_6$ will be investigated first. The equations for $q=0$ and $q=\infty$
will be studied in detail later.
\noindent
{\bf Lemma 4.2} The stationary points $P_1,\ldots,P_4$ and $P_6$ of the
restriction of the system (\ref{bianchiA}) to $S_2$ are hyperbolic saddles,
while $P_5$
is degenerate. The stable manifold of $P_1$ is given by $N_1=0$.
The stable and unstable manifolds of $P_2$ are given by $\Sigma_+=-1$, $N_1=0$
and $q=0$ respectively. The stable manifold of $P_3$ is given by
$q=0$. The stable and unstable manifolds of $P_4$ are given
by $q=0$, $N_1^2=\f{4}{3}(1-\Sigma_+^2)$ and $N_1=0$ respectively. The
stable and
unstable manifolds of $P_6$ are given by $N_1^2=\f{4}{3}(1-\Sigma_+^2)$
and $q=\infty$, $N_1=0$ respectively. The unstable manifold of $P_5$ is given
by $N_1^2=\f{4}{3}(1-\Sigma_+^2)$. The set $q=\infty$, $N_1=0$ is a centre
manifold for $P_5$.
\noindent
{\bf Proof} All that needs to be done is to compute the linearizations of
the system about the given points and to note that the manifolds named in
the statement of the lemma are all invariant. Linearizing the restriction
of (\ref{bianchiA}) to $N_2=0$, and setting $N_1=0$ in the result, gives the
system
(a bar denotes a linearized quantity):
\begin{eqnarray}\label{linearized}
d\bar q/d\tau&=&\Sigma_+\bar q+q\bar\Sigma_+ \nonumber\\
d\bar N_1/d\tau&=&\f{1}{3}(1-4\Sigma_++\Sigma_+^2)\bar N_1 \\
d\bar \Sigma_+/d\tau&=&[-\f{1}{3}(1+\Sigma_+-3\Sigma_+^2)+\Sigma_+Q(q)]
\bar\Sigma_+
-\f{1}{3} Q'(q)(1-\Sigma_+^2)\bar q\nonumber
\end{eqnarray}
The linearization about $P_1$ has eigenvalues $\f{1}{3}$ and
$-\f{1}{6}\pm\f{1}{2}\sqrt{\f{1}{9}
-\f{4}{3} q_0 Q'(q_0)}$. The invariant subspace of the
linearization corresponding to the eigenvalues with negative real parts is
the tangent space to $N_1=0$. The linearizations about $P_2$, $P_3$ and $P_4$
are diagonal with diagonal entries $(-1,2,1)$, $(\f{1}{2},
-\f{1}{4},-\f{1}{4})$ and
$(1,-\f{2}{3},\f{1}{3})$ respectively. Since $P_5$ and $P_6$ lie at
$q=\infty$, we must change to the coordinate $\tilde q$ to study the
linearizations at these points. They are diagonal with diagonal elements
$(1,2,0)$ and $(-1,-\f{2}{3},\f{4}{3})$.
\vskip 10pt\noindent
Next the limiting systems for $q=0$ and $q=\infty$ will be examined.
\noindent
{\bf Lemma 4.3} Consider the restriction of the system (\ref{bianchiA}) to
the set given by the equations $N_2=q=0$. For any solution which does not
belong to the vacuum boundary and which does not satisfy $N_1=0$, the
$\alpha$-limit set is the point $P_2$ and the $\omega$-limit set is the
point $P_3$.
\noindent
{\bf Proof} First it will be shown that the solution cannot be stationary.
The equation for $\Sigma_+$ shows that $\dot\Sigma_+>0$ if
$\Sigma_+<\f{1}{2}$.
Thus at a stationary point $\Sigma_+\ge\f{1}{2}$. On the other hand, the
equation
for $N_1$ shows that at a stationary point
\begin{equation}\label{sigmaplus}
(\Sigma_+-2)^2=3+N_1^2\ge 3
\end{equation}
Using the fact that $\Sigma_+\le 1$ it follows that $\Sigma_+\le 2-\sqrt 3
<\f{1}{2}$. Next, it follows from Theorem 3.1 on p. 150 of
\cite{hartman82} that the
solution cannot be periodic. It can be concluded using the
Poincar\'e-Bendixson theorem (Theorem A2 of the appendix) that the $\alpha$-
and $\omega$-limit sets are contained in the boundary of the region. The
behaviour of solutions on the boundary is easily determined. The nature
of the stationary points on the boundary can be read off from Lemma 4.2.
$P_2$ is a hyperbolic source, $P_3$ is a hyperbolic sink and $P_4$ is a
hyperbolic saddle. The last fact means, using Lemma A1 of the appendix,
that $P_4$ cannot be in the $\alpha$- or $\omega$-limit set unless $P_2$ or
$P_3$ is also. Thus it can be concluded that the $\alpha$- and $\omega$-limit
sets must contain either $P_2$ or $P_3$ and then the conclusion follows
easily.
\vskip 10pt\noindent
The system given by $N_2=0$ and $q=\infty$ is more complicated.
\noindent
{\bf Lemma 4.4} Consider the restriction of the system (\ref{bianchiA}) to
the set
given by the equations $N_2=0$ and $q=\infty$. For any solution which does not
belong to the vacuum boundary and which does not satisfy $N_1=0$, the
$\alpha$-limit set consists of all points which are either on the vacuum
boundary or satisfy $N_1=0$. The $\omega$-limit set is the point $P_7$.
\noindent
{\bf Proof} Define a function $Z$ by
\begin{equation}\label{liapunovII}
Z=N_1^{1/2}[-\f{1}{2} N_1^2+\f{2}{3}(1-\Sigma_+^2)]^{3/4}
(1-\f{1}{5}\Sigma_+)^{-2}
\end{equation}
This function is well-defined and continuous on $S_2$ and smooth away from
$N_1=0$ and $-\f{1}{2} N_1^2+\f{2}{3}(1-\Sigma_+^2)=0$. Its
derivative is given by
\begin{equation}\label{liapunovIIevolution}
\d_\tau Z=\f{1}{10}Z(1-\f{1}{5}\Sigma_+)^{-1}[\f{1}{3}
(5\Sigma_+-1)^2+
(-3N_1^2+\Sigma_+^2-15\Sigma_++4)(1-Q)]
\end{equation}
The restriction of $Z$ to the set $q=\infty$ is non-decreasing along solutions
as a consequence of (\ref{liapunovIIevolution}). Moreover, it is strictly
increasing unless
$\Sigma_+=\f{1}{5}$. When $\Sigma_+=\f{1}{5}$ it follows from
(\ref{bianchiA})
that $\dot\Sigma_+
\ne 0$ unless $N_1=\f{2\sqrt2}{5}$. Thus apart from the stationary
solution at
the point $P_7$ with coordinates $(\infty,\f{2\sqrt2}{5},\f{1}{5})$,
the function
$Z$ is strictly increasing along any solution with $q=\infty$. It follows that
$P_7$ is the $\omega$-limit point of all solutions. The function $Z$ attains
its minimum precisely on the boundary of the region where the system is
defined and hence the $\alpha$-limit set of any solution is contained in this
boundary. The only stationary points on the boundary are $P_5$ and $P_6$.
{}From Lemma 4.2 it follows that both are saddle points of this system. ($P_5$
is degenerate while $P_6$ is non-degenerate.) This suffices to show, using
Lemma A1 and Lemma A2 of the appendix, that the $\alpha$-limit set consists
of the entire boundary.
\vskip 10pt\noindent
{\bf Proof of Theorem 4.2} By Lemma 4.1 the $\omega$-limit set of any
solution consists of points with $q=\infty$. Then Lemma 4.4 shows that either
$P_7$ belongs to the $\omega$-limit set or that the $\omega$-limit set
consists entirely of points with $q=\infty$ and $N_1=0$ or
$-\f{1}{2} N_1^2+\f{2}{3}(1-\Sigma_+^2)=0$. A calculation of the
linearization
of (\ref{bianchiA}) around $P_7$ shows that this point is a hyperbolic sink.
Hence if
$P_7$ belongs to the $\omega$-limit set this set consists of $P_7$ alone.
It remains to rule out the other possibility where the solution has an
$\omega$-limit point on the boundary of the intersection of $S_2$ with
$q=\infty$. In that case Lemma A1 applied to the point $P_6$ and Lemma A2
applied to the point $P_5$ show that the $\omega$-limit set contains the
whole of this boundary. It will now be shown using (\ref{liapunovIIevolution})
that this leads to a contradiction.
There exist $\delta_1>0$ and $M>0$ such that if $|\Sigma_+
-\f{1}{5}|>\delta_1$ and
$q>M$ the right hand side of (\ref{liapunovIIevolution}) is positive. This is
because the first
term dominates the second. By reducing $\delta_1$ and increasing $M$ if
necessary it can be ensured that there exist positive constants $\eta_1$,
$\eta_2$ and $\delta_2$ such that $Z^{-1}\d_\tau Z$ can be bounded below by
$\eta_1$ as long as $|\Sigma_+-\f{1}{5}|>\delta_1$ and $q>M$ and
$\dot\Sigma_+>\eta_2$ for $|\Sigma_+-\f{1}{5}|<\delta_1$,
$|N_1-\f{2\sqrt2}{5}|>\delta_2$ and $q>M$. Finally, given $\eta_3>0$ there
exists
$\delta_3>0$ so that $\dot\Sigma_+<\eta_3$ for $|\Sigma_+|>1-\delta_3$ and $q>M$.
At sufficiently late times the solution lies in the region $q>M$. Moreover,
under the present assumption on the $\omega$-limit set it cannot enter the
neighbourhood of $P_7$ defined by $\delta_1$ and $\delta_2$. Each time it
crosses the strip defined by $|\Sigma_+-\f{1}{5}|\le \delta_1$ at a
sufficiently
late time it must enter the
region $\Sigma_+>1-\delta_3$ before it can return to the strip. It must spend
a long time in the region $\Sigma_+>1-\delta_3$ (due to the smallness of
$\eta_3$). This time can be bounded below by $C\eta_3^{-1}$ for a constant
$C>0$. During that time $\log Z$ must increase by at least $C(\eta_1/\eta_3)$.
On the other hand $\log Z$ can only decrease while it is in
the strip. It stays there for a time at most $\delta_1/\eta_2$ and can decrease
by at most $C\delta_1/\eta_2$. Thus the net change of $\log Z$ for each time it
enters the strip is at least $C(\eta_1/\eta_3-\delta_1/\eta_2)$. If $\eta_3$
is chosen small enough this will be bounded below by a positive quantity.
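Schematically, the net change in one circuit (an excursion into the region
$\Sigma_+>1-\delta_3$ followed by a crossing of the strip) satisfies
\[
\Delta(\log Z)\ge C\Big(\f{\eta_1}{\eta_3}-\f{\delta_1}{\eta_2}\Big)>0
\qquad {\rm if}\ \eta_3<\f{\eta_1\eta_2}{\delta_1},
\]
so $\log Z$ increases by a fixed positive amount on each circuit, while the
continuous function $Z$ is bounded on the compact set $S_2$.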
Since the solution must, under the given assumptions, enter the strip
infinitely often, this gives a contradiction. The proof of the statement
about the $\omega$-limit set is now complete.
Suppose that the $\alpha$-limit set contains a point with $q=0$ and
$N_1\ne 0$. Then by Lemma 4.3 it contains $P_2$ and $P_3$ or $P_4$.
Lemma 4.1 then shows that at least one of $P_1,\ldots,P_6$ is contained in
the $\alpha$-limit set. If the $\alpha$-limit set contains $P_1$ then either
the solution lies in the unstable manifold of $P_1$, which gives case (i) of
the theorem, or the $\alpha$-limit set contains points on that unstable
manifold other than $P_1$ itself. But since these satisfy neither $N_1=0$ nor
$q=0$ this is a contradiction. If the $\alpha$-limit set contained points
with $N_1=0$ with $q$ finite and $|\Sigma_+|<1$ it would contain $P_1$,
leading once again to a contradiction. If it contains $P_4$ it follows from
Lemma A1 and what has just been said that it must contain either $P_3$ or
$P_6$. It must also contain $P_2$. However, if it contained $P_3$ it would,
by another application of the same lemma contain $P_1$, a contradiction.
On the other hand, if it contains $P_2$ it must contain $P_4$ and $P_5$.
If it contains $P_5$ it must contain $P_6$ and vice versa, by Lemma A1 and
Lemma A2. On the other hand, Lemma A1 shows that if the $\alpha$-limit set
contains $P_5$ or $P_6$ it must contain $P_2$ or $P_4$. It also follows
from these applications of the lemmas of the appendix that the relevant
connecting orbits are contained in the $\alpha$-limit sets.
\vskip 10pt\noindent
Now a spacetime corresponding to the point $P_7$ will be
determined (in figure 2, this spacetime follows a straight line at
constant $(\Sigma_+,N_1)$ into $P_7$). {}From
equation (\ref{meanevolutionsurf}) it follows that
${\rm tr} k=H_0 e^{-\f{3}{5} \tau}$. Putting this in
the equation relating $t$ and $\tau$ shows that ${\rm tr} k=-\f{5}{3} t^{-1}$.
Putting this in the third equation of (\ref{dimensionless}) gives
$b=b_0 t^{2/3}$. Equation (\ref{meancurv})
implies that $a=a_0 t^{1/3}$. Finally, the second equation of
(\ref{dimensionless}) leads
to the relation $a_0=\f{2\sqrt2}{5} b_0^2$. Choosing an explicit
representation
of a Bianchi type II frame leads to the metric:
\begin{equation}\label{explicit}
ds^2=-dt^2+ \f{8}{9} B^2t^{2/3} (dx+zdy)^2+Bt^{4/3}(dy^2+dz^2)
\end{equation}
where $B$ is a constant. This metric is invariant under the homothety
$t\mapsto At$, $x\mapsto A^{2/3}x$, $y\mapsto A^{1/3}y$,
$z\mapsto A^{1/3}z$. It follows that $t\d/\d t+\f{2}{3}x\d/\d x
+\f{1}{3}(y\d/\d y
+z\d/\d z)$
is a homothetic vector field and that this metric is self-similar. It
satisfies the Einstein equations with an energy-momentum tensor whose only
non-vanishing components are $\rho$ and $T_{11}$. These two are equal and
are proportional to $t^{-2}$. This can be interpreted as a distributional
solution of the Einstein-Vlasov equations with massless particles where the
distribution function is of the form $f(p_1,p_2,p_3)=f_1(p_1)\delta(p_2)
\delta(p_3)$. (Note that a distributional $f$ of this kind defines a
dynamical system just as a smooth $f$ does so that the solution can be
represented in figure 2.)
The exact form of the function $f_1$ is unimportant. Only
the integrals $\int f_1(p_1)p_1 dp_1$ and $\int f_1(p_1)p_1^2 dp_1$
influence the energy-momentum tensor. Related to this fact is that the same
spacetime can be interpreted as a solution of the Einstein equations coupled
to two streams of null dust moving in opposite senses in the
$x^1$-direction. This corresponds to choosing
$f_1(p_1)=\delta(p_1-p_0)+\delta(p_1+p_0)$ for a constant $p_0>0$ instead of a smooth
function. The sum of two Dirac measures is necessary to preserve the
reflection symmetry. This spacetime has previously been considered by
Dunn and Tupper\cite{dunn80} in the context of cosmological models with
electromagnetic
fields, although it had to be rejected for their purposes since no
consistent electromagnetic field existed.
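The self-similarity of (\ref{explicit}) can be checked term by term. Under
$t\mapsto At$, $x\mapsto A^{2/3}x$, $y\mapsto A^{1/3}y$, $z\mapsto A^{1/3}z$
the differentials satisfy $dx\mapsto A^{2/3}dx$ and
$z\,dy\mapsto A^{2/3}z\,dy$, so that
\[
dt^2\mapsto A^2\,dt^2,\qquad
t^{2/3}(dx+z\,dy)^2\mapsto A^2\,t^{2/3}(dx+z\,dy)^2,\qquad
t^{4/3}(dy^2+dz^2)\mapsto A^2\,t^{4/3}(dy^2+dz^2).
\]
Hence $ds^2\mapsto A^2 ds^2$ with constant conformal factor, which is
precisely the homothety property.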
The monotone function $Z$ which plays a crucial role in the proof of Theorem
4.2 is rather complicated and so is unlikely to be found by trial and error.
We found it by means of a Hamiltonian formulation of the equations for
$q=\infty$. Once the function was found for $q=\infty$ it was extended so
as to be independent of $q$. In developing the Hamiltonian formulation we
followed the treatment of Uggla in chapter 10 of \cite{wainwright97}. A key
point is that
the energy density of a distributional solution of the Einstein-Vlasov system
where the pressure is concentrated in one direction can be related in a simple
way to $q$. The function $Z$ is the Hamiltonian for the (time dependent)
Hamiltonian system. It was also the construction of $Z$ which led us to
discover the self-similar solution corresponding to the point $P_7$.
The picture obtained in Theorem 4.2 is quite different from that seen in
LRS Bianchi type II solutions whose matter model is a perfect fluid with a
linear equation of state (see \cite{wainwright97}, chapter 6).
There generic solutions are approximated near the
singularity by a vacuum solution (the type II NUT solution) and there is
no oscillatory behaviour. In the expanding direction the fluid solutions
are also all asymptotic to a self-similar solution (the Collins-Stewart
solution) but this solution has a different ratio of shear to expansion
than the solution corresponding to the point $P_7$. Moreover the pressure
is highly anisotropic in the latter solution.
\section{Kantowski-Sachs and Bianchi type III models}
In this section information will be obtained on Kantowski-Sachs models and
models of Bianchi type III which is as complete as that obtained on models
of Bianchi type I in section 3.
\vskip 10pt\noindent
{\bf Theorem 5.1} If a smooth non-vacuum reflection symmetric
Kantowski-Sachs type solution of the Einstein-Vlasov equations for massless
particles
is represented as a solution of (\ref{surfacesymm}) with $\epsilon=1$ then for
$\tau\to -\infty$ either
\noindent
(i) it converges to $P_1$ or
\hfil\break\noindent
(ii) it converges to the point $P_3$ and it belongs to the unstable manifold
of $P_3$ or
\hfil\break\noindent
(iii) it converges to $P_4$.
\hfil\break\noindent
All of these cases occur, and (iii) is the generic case in the sense that it
occurs for an open dense set of initial data.
\noindent
{\bf Proof} The inequality $\d_\tau B\ge\f{1}{4} B$ shows that $B$ decreases
towards the past. It follows that as $\tau$ decreases the solution remains
in a compact set and hence that the solution exists for all sufficiently
negative $\tau$. Using the inequality again shows that $B\to 0$ exponentially
as $\tau\to -\infty$ and the $\alpha$-limit set lies in the set $B=0$. The
latter can be identified with the Bianchi I system. The $\alpha$-limit set
contains the image of a solution of the Bianchi I system and hence, by
Theorem 3.1 contains either $P_1$ or some point of the boundary of the
Bianchi I system. Each of the stationary points $P_1,\dots,P_6$, considered
as stationary points of (\ref{surfacesymm}), has a linearization which differs
from its
linearization within the Bianchi I system by the addition of an extra
eigenvector with a positive eigenvalue. It can be concluded from this
that $P_4$ is a hyperbolic source. Moreover, by Lemma A1 and Lemma A2, if any
point of the boundary other than $P_3$ lies in the $\alpha$-limit set, then
$P_4$ must also lie in the $\alpha$-limit set. Hence in this case the
$\alpha$-limit set consists of $P_4$ alone. Moreover, if $P_3$ lies in the
$\alpha$-limit set then the solution must lie on its unstable manifold. The
only remaining possibility is that the $\alpha$-limit set consists of $P_1$
alone, and that the solution lies on the unstable manifold of $P_1$.
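The exponential decay used at the start of the proof follows by integrating
$\d_\tau(\log B)\ge\f{1}{4}$ from $\tau$ to $0$ for $\tau\le 0$, which gives
\[
B(\tau)\le B(0)e^{\tau/4},\qquad \tau\le 0,
\]
so that $B\to 0$ exponentially as $\tau\to -\infty$.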
\vskip 10pt\noindent
No statement is made here about the behaviour as $\tau\to\infty$. In fact
any solution of (\ref{surfacesymm}) with $\epsilon=1$ tends to infinity in
finite time.
However this is not a problem from the point of view of understanding the
spacetime. It is known that the Kantowski-Sachs models recollapse
\cite{burnett91}.
Thus there is no infinitely expanding phase and a final singularity
looks like the time reverse of an initial singularity. One interesting
question which we do not attempt to tackle here is whether there is an
interesting correlation between the behaviour near the initial and final
singularities. For each individual singularity the picture is essentially
identical to that seen in the singularity of Bianchi I models. The system
for a radiation fluid can be analysed in the same way, reducing the dynamics
near the singularity to that of the corresponding Bianchi I system. The
differences between radiation fluid and kinetic models are similar in both
cases.
\vskip 10pt\noindent
{\bf Theorem 5.2} If a smooth non-vacuum reflection symmetric LRS solution of
Bianchi type III of the Einstein-Vlasov equations for massless particles is
represented as a solution of (\ref{surfacesymm}) with $\epsilon=-1$ then for
$\tau\to \infty$ it converges to the point $P_9$ with coordinates
$(\infty,\f{1}{2},\f{1}{2})$ and for $\tau\to -\infty$ either
\noindent
(i) it converges to $P_1$ or
\hfil\break\noindent
(ii) it converges to the point $P_3$ and it belongs to the unstable manifold
of $P_3$ or
\hfil\break\noindent
(iii) it converges to $P_4$.
\hfil\break\noindent
All of these cases occur, and (iii) is the generic case in the sense that it
occurs for an open dense set of initial data.
\noindent
{\bf Proof} The inequality (\ref{physicalsurf}) with $\epsilon=-1$ implies
that a solution of (\ref{surfacesymm}) of Bianchi type III remains in a
compact set
and hence exists
globally in $\tau$. The quantity $\dot B$ is positive in the region
where $B^2<\f{1}{4}+\f{1}{12}(1-2\Sigma_+)^2$. Call this region $G$.
In the
complement of the closure of $G$ the inequality $\dot B<0$ holds. Thus any
stationary point with $B>0$ must lie on the boundary of $G$. A stationary
point with a finite non-zero value of $q$ must satisfy $\Sigma_+=0$ and this
implies that $\dot\Sigma_+=\f{1}{3}$, a contradiction. Thus the only
stationary
points occur for $B=0$ (these are the well-known Bianchi type I stationary
points), $q=0$ or $q=\infty$. In fact the only stationary points which are
not of type I are those with coordinates $(0,\f{1}{2},\f{1}{2})$ and
$(\infty,\f{1}{2},
\f{1}{2})$. Call these $P_8$ and $P_9$ respectively (see figure
3). The boundary of $G$ is connected and so
$\dot\Sigma_+$ has a constant sign there. Checking at one point shows that
this sign is positive. As a consequence, a solution can never leave $G$
as $\tau$ increases or enter $G$ as $\tau$ decreases. A solution which lies
on the boundary of $G$ at some time (with $q$ non-zero and finite) must
immediately enter $G$ to the future and enter the interior of its complement
to the past. Consider now the behaviour of a given solution as $\tau$
decreases. If it stayed in $G$ for ever then $B$ would have to increase
as $\tau$ decreases. On the other hand, any $\alpha$-limit point would have
to be in the boundary of $G$ due to the monotonicity properties of $B$. This
is not consistent. Thus as $\tau$ decreases the solution must reach the
boundary of $G$ and, as a consequence, the interior of the complement of $G$.
In the latter region $B$ is strictly monotone and so the $\alpha$-limit set
must be contained in $B=0$. Then the same analysis as in the proof of Theorem
5.1 shows that the solution belongs to one of the cases (i)-(iii) of the
theorem. Next consider the behaviour as $\tau$ increases. If the solution
stays in the interior of the complement of $G$ then it must tend to the
boundary of $G$
as $\tau$ tends to infinity and, more precisely, to one of the points $P_8$
or $P_9$. Since $\Sigma_+$ is positive at these points, $q\to\infty$ and so
only $P_9$ is possible. Now suppose that the solution does meet the boundary
of $G$ and hence enters $G$ itself. Then it remains in $G$ and $B$ is once
again strictly monotone. As before, it can be concluded that the solution
converges to $P_9$ as $\tau\to\infty$.
\begin{figure}
\begin{center}
\includegraphics[width=10cm,height=8cm,angle=270]{fig3.eps}
\end{center}
\caption{The $(\hat{q},\Sigma_+,B)$ space and the fixed points for Bianchi
type III.}
\label{Figure 3}
\end{figure}
\vskip 10pt\noindent
The point $P_9$ corresponds to a self-similar solution of the vacuum Einstein
equations much as does $P_7$ (this time the trajectory is the
horizontal straight line in figure 3 from $P_8$ to $P_9$). This is the Bianchi III form of flat space (see p. 193 of
\cite{wainwright97}).
Once again the nature of the initial singularity is similar to that in
solutions of type I. On the other hand the behaviour in the expanding phase
is qualitatively different from any we have seen so far. In this case the
solution is approximated at large times by a vacuum solution in the sense
that the dimensionless quantity $\rho/({\rm tr} k)^2$ tends to zero as $t\to\infty$.
A very similar analysis applies to the system for a radiation fluid.
LRS Bianchi type III fluid solutions with equation of state $p=\f{1}{3}
\rho$
behave like solutions of Bianchi type I near the initial singularity. They
are approximated at large times by the same vacuum solution as in the case of
kinetic theory. The approach of \cite{hewitt93} should allow similar
statements to be
proved for other fluids with a linear equation of state, but this does not
seem to have been worked out explicitly in the literature.
\section{Conclusions}
The above theorems show that solutions of the Einstein-Vlasov equations
with high symmetry exhibit a wide variety of asymptotic behaviour near a
singularity and in a phase of unlimited expansion. They can have a point
singularity, barrel singularity or cigar singularity or they can show
oscillatory behaviour near a singularity. In an expanding phase they can
resemble a fluid solution (Bianchi type I and VII${}_0$), a vacuum solution
(Bianchi type III) or a solution of the Einstein equations with null dust
(Bianchi type II).
There are notable differences in comparison with a fluid model, and this
includes the radiation fluid, which is often used as an effective model of
massless particles in cosmology. The most striking qualitative difference
is the appearance of oscillatory behaviour in type II solutions. It is
interesting to compare this with the analysis of spacetime singularities
by Belinskii, Khalatnikov and Lifschitz \cite{belinskii82}. They do not say
precisely what
they assume about matter but it seems that they do assume, at least
implicitly, that pressures cannot approach the energy density. This
assumption is not necessarily satisfied in a kinetic description. The mean
pressure cannot exceed one third of the energy density but if it all
concentrates in one direction the pressure in that direction can approach
the energy density. This leads to a source of oscillations beyond those
taken into account in \cite{belinskii82}.
While the oscillatory behaviour of cosmological models near a singularity
has often been observed numerically and explained heuristically, it has
rarely been captured in rigorous theorems. To our knowledge the only
example where this had been done prior to Theorem 4.2 of this paper
is in a class of solutions of the Einstein-Maxwell equations of Bianchi type
VI${}_0$ analysed in \cite{leblanc95}.
The results of this paper concern only massless particles. One may ask
what would change in the results if the case of massive particles were
considered. In one case the answer is known, namely in Bianchi type I.
There the solution approaches a dust solution in the expanding phase.
It is reasonable to expect that this happens more generally. As the model
expands in all directions the pressures should become negligible with
respect to the energy density, leading to a dust-like situation.
However, the techniques necessary to prove this are not yet known.
Near the initial singularity, the equations for massive particles look
like those for massless particles and it may be conjectured that the
behaviour near the singularity is similar in both cases. Unfortunately
that has also not yet been proved.
It is interesting to note that matter seems to have the effect of making
the evolution of the geometry under the Einstein equations less extreme
in a phase of unlimited expansion.
In Bianchi type I the vacuum solutions (Kasner solutions) are such that
some spatial direction is contracting or unchanging in the expanding
time direction (the time direction in which the volume is increasing). This
is no longer the case when perfect fluid or kinetic matter is added, since
then the model isotropizes. In type II there is no complete isotropization
but it is still the case that with fluid or kinetic matter all directions are
eventually expanding, in contrast to the vacuum case. The type III case is
borderline, since there solutions with collisionless matter are asymptotic,
in the sense of the variables used in this paper, to a vacuum solution in the
expanding time direction. The vacuum solution is such that the scale factor
$b$ is time independent. In the other LRS Bianchi type III vacuum spacetimes
this scale factor is asymptotically constant as $t\to\infty$ and for a
radiation fluid this is also the case (cf. \cite{wainwright97} p. 203). On
the other hand
for dust models which also converge to the same vacuum model in terms of the
Wainwright-Hsu variables, this scale factor grows without bound, although
much more slowly than the other scale factors (\cite{wainwright97}, p. 202).
It is difficult
to decide what happens in the case of collisionless matter with massless
particles, since the point $P_9$ is a degenerate stationary point of the
system (\ref{surfacesymm}). In the corresponding system for a radiation fluid
the point
with these coordinates is also a stationary point but is non-degenerate.
For the Einstein-Vlasov equations with
massless particles the LRS reflection symmetric solutions of Bianchi
types I, II, III, VII${}_0$ and Kantowski-Sachs type have now been analysed
in enough detail to give a full description of their general behaviour near the
singularity and in a phase of unlimited expansion.
There are still plenty of open questions related to this. What
happens with LRS solutions of types VIII and IX? What happens if reflection
symmetry is dropped? Does this lead to a new kind of oscillatory behaviour?
What happens if the LRS condition is dropped? (This is still open even in
the Bianchi I case.) Can the Hamiltonian formulation of the equations, which
played an important role at one point in our arguments, usefully be applied
in some of these more general cases? Answers to these questions could help
to deepen our understanding of the dynamics of solutions of the Einstein
equations with matter in general.
\vskip 10pt\noindent
{\it Acknowledgements} We wish to thank Malcolm MacCallum for his comments
on the exact solution in section 4. Paul Tod gratefully acknowledges the
hospitality and financial support of the Max-Planck-Institut f\"ur
Gravitationsphysik while this work was being done.
\section*{Appendix: Some background on dynamical systems}
First some terminology will be introduced. We use the phrase \lq dynamical
system\rq\ as a synonym for \lq system of ordinary differential equations\rq.
The difference between the two is then only one of point of view. A {\it
stationary point} of a dynamical system is a time-independent solution. An
{\it orbit} of a dynamical system is the image of a solution. A point $x_*$
is an $\alpha$-limit point of a solution $x(t)$ if there is a sequence of
times $t_n$ with $t_n\to -\infty$ such that $x(t_n)\to x_*$. The set of all
$\alpha$-limit points of a solution is called its $\alpha$-limit set. The
analogous notions of $\omega$-limit point and $\omega$-limit set are obtained
by replacing $t$ by $-t$ in these definitions. Basic properties are that the
$\alpha$-limit set is closed and that, if the solution remains in a compact
set as $t\to -\infty$, it is connected. If $x_*$ is a point of the
$\alpha$-limit set of an orbit then the orbit through $x_*$ lies in the
$\alpha$-limit set of the original orbit. Analogous statements hold for the
$\omega$-limit set. For details and proofs see e.g. \cite{hartman82},
chapter 7.
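As an illustrative aside (our own toy example, not part of the text above), these limit-set notions can be probed numerically: for a damped oscillator every orbit tends to the stationary point at the origin, so the $\omega$-limit set of any solution is that single point. Sampling the orbit at a sequence of late times $t_n\to+\infty$ approximates the $\omega$-limit set; the same idea run with $t_n\to-\infty$ approximates the $\alpha$-limit set.

```python
# Toy example (not from the text): approximate the omega-limit set of an
# orbit of the damped oscillator  dx/dt = y,  dy/dt = -x - y/2.
# Every orbit tends to the stationary point (0, 0), so the sampled points
# x(t_n) with t_n -> +infinity converge to (0, 0).
import math

def rhs(x, y):
    return y, -x - 0.5 * y

def rk4_step(x, y, h):
    # one classical Runge-Kutta step for the planar system
    k1 = rhs(x, y)
    k2 = rhs(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = rhs(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = rhs(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x, y = 1.0, 0.0                      # initial condition
samples = []
for n in range(60000):               # integrate up to t = 600 with h = 0.01
    x, y = rk4_step(x, y, 0.01)
    if n % 10000 == 0:
        samples.append((x, y))

late_distance = math.hypot(*samples[-1])   # distance of a late sample from (0, 0)
```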
If $x_0$ is a stationary point of a dynamical system we can linearize the
system about $x_0$. The linearized system is of the form
$d\tilde x/dt=A\tilde x$ for a matrix $A$. Associated to $A$ is a direct sum
decomposition $E_1\oplus E_2\oplus E_3$ where the vector spaces $E_1$, $E_2$
and $E_3$ are spanned by generalized eigenvectors of $A$ corresponding to
eigenvalues with positive, zero and negative real parts, respectively. These
spaces are called the unstable, centre and stable subspaces. For each of
these three subspaces there is a manifold which is tangent to the
corresponding subspace at $x_0$ and is left invariant by the
dynamical system. These manifolds are called the unstable, centre, and
stable manifolds respectively. The unstable and stable manifolds are
unique while the centre manifold need not be. For details see the appendix
of \cite{abraham67}.
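As a small numerical aside (the matrix below is our own assumed example, not from the text), the dimensions of the subspaces $E_1$, $E_2$ and $E_3$ can be read off by grouping the eigenvalues of $A$ according to the sign of their real part:

```python
# Toy example: for a 2x2 linearization matrix A the eigenvalues follow from
# the characteristic polynomial; grouping them by the sign of their real
# part gives dim E_1 (unstable), dim E_2 (centre) and dim E_3 (stable).
import cmath

# assumed example matrix [[0, 1], [2, -1]]: a saddle with eigenvalues 1, -2
a11, a12, a21, a22 = 0.0, 1.0, 2.0, -1.0
tr = a11 + a22
det = a11 * a22 - a12 * a21
disc = cmath.sqrt(tr * tr - 4.0 * det)
eigvals = [(tr + disc) / 2.0, (tr - disc) / 2.0]

tol = 1e-12
dim_unstable = sum(1 for lam in eigvals if lam.real > tol)
dim_centre = sum(1 for lam in eigvals if abs(lam.real) <= tol)
dim_stable = sum(1 for lam in eigvals if lam.real < -tol)
```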
The behaviour of solutions of a dynamical system near a stationary point
is described by the reduction theorem.
\noindent
{\bf Theorem A1} (Reduction theorem) Let $x_0$ be a stationary point of
a $C^1$ dynamical system. Then the system is topologically equivalent near
$x_0$ to the Cartesian product of a standard saddle with the restriction of
the flow to any centre manifold.
\vskip 10pt\noindent
This theorem is proved in \cite{kirchgraber90}. Topological equivalence means
that there is a
homeomorphism which takes one system to the other. A standard saddle is the
dynamical system on ${\bf R}^{n_1+n_2}$ given by $dy/dt=y$, $dz/dt=-z$, where
$y\in{\bf R}^{n_1}$ and $z\in{\bf R}^{n_2}$. The special case (hyperbolic case) where
the centre manifold is trivial is the Hartman-Grobman theorem
\cite{hartman82}.
The next result is intuitively rather obvious, but since we do not
know a published proof we will provide one here.
\noindent
{\bf Lemma A1} Let $p$ be a hyperbolic stationary point of a dynamical
system which belongs to the $\alpha$-limit set of a given orbit. Then either
each neighbourhood of $p$ contains a segment of the orbit which is contained
in the unstable manifold of $p$, or the $\alpha$-limit set contains a
point of the stable manifold of $p$ other than $p$ itself. The analogous
statement with the roles of the stable and unstable manifolds interchanged
also holds.
\noindent
{\bf Proof} By the reduction theorem we can assume that in a neighbourhood
of $p$ the system takes the form:
\begin{equation}\label{alinearized}
dx/dt=x,\ \ \ dy/dt=-y
\end{equation}
with solution
\begin{equation}\label{alinearizedsol}
x=Ae^t,\ \ \ y=Be^{-t}
\end{equation}
The unstable and stable manifolds are given by $y=0$ and $x=0$ respectively.
Suppose that there is a neighbourhood of $p$ where there is no
segment of the orbit contained in the unstable manifold. Then there exists
a sequence of points $p_n$ on the orbit with non-vanishing $y$ coordinate
which converges to $p$. If we denote the coordinates of corresponding segments
of the solution by $(x_n,y_n)$, then $y_n=B_ne^{-t}$ for some $B_n\ne 0$.
Consider now a coordinate closed ball contained in a neighbourhood of $p$
where the reduction can be carried out. As $t$ decreases each of the solutions
$(x_n,y_n)$ must leave this ball and so must, in particular, contain a
point of the boundary sphere. Call the resulting sequence of points of the
sphere $q_n$. By compactness $q_n$ has a subsequence converging to a point
$q$. The point $q$ belongs to the $\alpha$-limit set. Now $x_n=A_ne^t$ for
a sequence with $A_n\to 0$. Hence the $x$ coordinate of $q$ is zero and $q$
belongs to the stable manifold of $p$. The proof in the case that the roles
of the stable and unstable manifolds are interchanged is very similar, using
the points where the solution exits the ball in the positive time direction.
\vskip 10pt\noindent
The following variant of Lemma A1 allows a centre manifold of a certain
type.
\noindent
{\bf Lemma A2} Let $p$ be a stationary point of a dynamical system
which belongs to the $\alpha$-limit set of a given orbit. Suppose that the
centre manifold is one-dimensional and that there is a punctured
neighbourhood of $p$ in the centre manifold which contains no stationary
points and such that the solutions on the centre manifold approach $p$
as $t\to\infty$ on one side of $p$ and as $t\to -\infty$ on the other
side. Suppose further that the stable manifold is trivial. The boundary
between points on orbits which converge to $p$ while staying in a small
neighbourhood of $p$ as $t\to -\infty$ and points on orbits which do not is
the unstable manifold. The analogue of Lemma A1 holds, where the stable
manifold is replaced by the half of the centre manifold on one side of the
unstable manifold. This half of the centre manifold is unique.
\noindent
{\bf Proof} By the reduction theorem we can assume that in a neighbourhood
of $p$ the system takes the form:
\begin{equation}\label{reduction}
dx/dt=F(x),\ \ \ dy/dt=y
\end{equation}
for some function $F$ which vanishes together with its derivative at the
origin, and is positive otherwise. The boundary hypersurface is given by
$x=0$. The half of the centre manifold referred to in the statement of the
lemma corresponds to $x<0$ and $y=0$. In the half-plane $x<0$ the system
is topologically equivalent to a hyperbolic saddle and so it is possible
to obtain the conclusion as in the proof of Lemma A1.
Next we state the Poincar\'e-Bendixson theorem. The form of this theorem
which we will use is the following (cf. \cite{hartman82}, p. 151):
\noindent
{\bf Theorem A2} (Poincar\'e-Bendixson) Let $U$ be an open subset of
${\bf R}^2$ and consider a dynamical system on $U$ without stationary points.
Let $x(t)$ be a solution which exists globally and remains in a compact
subset of $U$ as $t\to -\infty$. Then the $\alpha$-limit set of the given
solution is a periodic orbit.
\vskip 10pt\noindent
The analogous statement holds for the $\omega$-limit set. A periodic
orbit is, of course, just the image of a periodic solution.
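A standard illustration (our own example, not from the text): in Cartesian coordinates the system $\dot x=x(1-x^2-y^2)-y$, $\dot y=y(1-x^2-y^2)+x$ has no stationary points on any annulus excluding the origin, and an orbit started there remains in a compact subset; numerically its $\omega$-limit set is the unit circle, a periodic orbit, as the theorem predicts.

```python
# Toy example (not from the text): on an annulus excluding the origin the
# system  dx/dt = x(1 - r^2) - y,  dy/dt = y(1 - r^2) + x,  r^2 = x^2 + y^2,
# has no stationary points, and an orbit started inside the unit circle
# spirals onto it: its omega-limit set is the periodic orbit r = 1.
import math

def rhs(x, y):
    r2 = x * x + y * y
    return x * (1.0 - r2) - y, y * (1.0 - r2) + x

def rk4_step(x, y, h):
    k1 = rhs(x, y)
    k2 = rhs(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = rhs(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = rhs(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x, y = 0.1, 0.0
for _ in range(5000):            # integrate up to t = 50 with h = 0.01
    x, y = rk4_step(x, y, 0.01)

radius = math.hypot(x, y)        # approaches 1, the radius of the limit cycle
```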
\section{Introduction}
We know that the laws of classical mechanics describe with a high
degree of
accuracy the behavior of macroscopic systems. And yet, it is
believed that
phenomena on all scales, including the entire Universe,
follow the
laws of quantum mechanics. So, if we want to reconcile these two
statements, it is essential to understand the transition from the
quantum to
the classical regime. One of the scenarios where this problem is
relevant is
quantum cosmology, in which one attempts to apply quantum
mechanics to
cosmology. This involves a problem that has not been solved;
namely,
quantizing the gravitational field. Therefore, as a first step, it is
important to
predict the conditions under which the gravitational field may be
regarded
as classical.
The quantum to classical transition is a very old and interesting
problem relevant in many branches of physics. It involves the
concepts of {\it correlations}, i.e., the Wigner function of the
quantum system should have a peak at the classical trajectories \cite{9},
and
{\it decoherence}, that is, there should be no interference between
classical trajectories \cite{6}. The density matrix should be
approximately diagonal.
In order to understand the emergence of classical
behaviour, it is essential to consider the interaction between system and
environment, since both the decoherence process and the onset
of classical correlations depend strongly on this interaction.
Both ingredients are not independent and excess of decoherence can destroy
the correlations \cite{8if}.
In a previous work \cite{1}, one of us has studied the problem of
choosing an alternative
mathematical structure, based on a new spectral decomposition with generalized
unstable states, which is useful to explain the time asymmetry of
different models. Following \cite{1}, we will show that these unstable
quantum states
satisfy the correlation conditions and also produce decoherence
between different cosmological branches.
From this work, we know that if we want to retain the time-symmetric laws of
nature and at the same time explain the time asymmetry of the
universe, we
must choose a space of solutions which is not time-symmetric. A
convenient
choices of time-asymmetric spaces was already proposed in
Ref. \cite{casta2}.
The scheme is based on the existence of a physically admissible quantum
superspace $\Phi_-$ and therefore also the existence of a superspace of
time inverted states of $\Phi_-$, namely a physically forbidden
quantum superspace
$\Phi_+$. Thus, the time inversion that goes from $\Phi_-$ to $\Phi_+$ is
also forbidden \cite{1}. If the generalized states in $\Phi_-$ are restricted
to be included in the superspace of regular states ${\cal S}$ (and the same
for $\Phi_+$ with ${\cal S}^{\times}$, where ${\cal S}^{\times}$ is the space
of (anti)linear functionals over ${\cal S}$), our real mathematical
structure is the Gel'fand triplet (or rigged Hilbert space) \cite{1}:
\begin{equation} {\cal S} \subset {\cal H}\subset {\cal S}^{\times}.
\end{equation}
If $K$ is the Wigner time-reversal
operator we have
\begin{equation} K : \Phi_- \rightarrow \Phi_+ ~~~ ; ~~~ K:
\Phi_+ \rightarrow \Phi_-.\label{k1}
\end{equation}
Using these spaces of ``generalized" states we can
also find time-asymmetry for the generalized states. If we choose $\Phi_-$ as
in Ref. \cite{1}, Eq. (\ref{k1}) means that these generalized states will
be (growing or decaying) Gamow vectors. Decaying states are transformed into
growing states (or vice-versa) by time-invers\-ion.
As we have said \cite{1}, the choice of $\Phi_-$
(or $\Phi_+$) as our space of quantum states implies that $K$ is not
defined inside
$\Phi_-$ (or $\Phi_+$), so that time-asymmetry
naturally appears.
But, in the cosmological case, the choice between $\Phi_-$ or $\Phi_+$ (or
between the periods $t>0$ or $t<0$, or between the two semigroups) is
conventional and irrelevant, since these objects are identical (namely one
can be obtained
from the other by a mathematical transformation), and therefore the
universes that we will obtain with one choice or the other are also
identical and not distinguishable. Only the names {\it past} and {\it
future} or {\it decaying} and {\it growing} will change but physics is
the same, i.e., we will always have equilibrium, decoherence,
growing of entropy, etc. toward what we would call
the future. But once the choice is made, a substantial difference is
established in the
model: using $\Phi_-$ it can be proved that the time evolution operator is
just $U(t)=e^{-iHt}$, $t>0$, and
cannot be inverted (if the choice would be $\Phi_+$ the condition would change
to $t<0$). Therefore even if we continue using the same reversible
evolution equations,
the choice of $\Phi_-$ (or which is the same
$\Phi_+$) introduces time-asymmetry, since now we
are working in a space where future is substantially different than past. Thus
the arrow of time is not put {\it by hand} since the choice between the
period $t>0$ and $t<0$ or between $
\Phi _{-}$ and $\Phi _{+}$ is trivial and unimportant (namely to choose the
period $t>0$ as the physical period and consider $t<0$ as non-existent,
because the period before the ``creation of the Universe'' is physically
inaccessible to us, or vice versa). The important choice
is between ${\cal H}$ (the usual Hilbert space) and $\Phi _{-}$
(or $\Phi _{+})$ as
the space of our physical states. And we are free to make this choice, since
a good physical theory begins by the choice of the best mathematical
structure that mimic nature in the most accurate way.
As far as we know the new formalism is mathematically rigorous and the physical
results of both formalisms are the same. Two of us have shown this method applied
to a semiclassical Robertson-Walker metric coupled to a quantum field
\cite{5}. In this article we have shown how to implement this formalism
in a semiclassical cosmological model in order to prove the validity
of the semiclassical approximation. Decoherence
and correlations are two necessary ingredients to obtain classical behaviour.
In Ref. \cite{5} we have proved that the model satisfies both requirements for
classicality. However, paper \cite{5} was only the first step to test
our mathematical structure in a simple cosmological model; we can raise two
relevant observations about the validity of the semiclassical approximation:
1) considering the infinite set of unstable modes leads to perfect
decoherence, destroying correlations\cite{6,7}, as we will prove here.
2) the existence of correlations was proved for only one mode of the scalar
field and not for the entire density matrix.
In the present article we complete and improve our previous work in order
to obtain the semiclassical limit as a consequence of the real ``balance''
between decoherence and correlations.
In the context of semiclassical cosmology from a fully quantized
cosmological model, the cosmological scale factor can be defined as
$a=a\left(\eta\right) $, with $\eta$ the conformal time.
When $\eta \rightarrow \infty $ we will obtain a classical
geometry $g_{\mu \nu }^{out}$ for the Universe. In the semiclassical
point of view, the Wheeler-DeWitt equation splits into a classical
equation for the spacetime metric and a
Schr\"{o}dinger equation for the scalar field modes,
with the corresponding hamiltonian $h\left( a_{out}\right) $. Using
$h\left(
a_{out}\right) $ and the classical geometry $g_{\mu \nu }^{out}$ we
can find
a semiclassical vacuum state $\left| 0,out\right\rangle $ which
diagonalizes
the hamiltonian; and the creation and annihilation operators related to
this vacuum and the corresponding Fock spaces.
In this paper, we choose time-asymmetric Fock spaces to study a
simple
cosmological model; we analyze how this model fulfills the two
requirements for classicality.
The organization of this paper is the following. In section II we
introduce
the cosmological model and we summarize our previous results of Refs. \cite{1}
and \cite{5}. In section III we analyze the
conditions for the existence of decoherence and correlations in this
model.
Since we achieve perfect decoherence, in Section IV we need to introduce a
cutoff. We
suggest a particular value for the cutoff using a relevant
physical
scale that ensures the validity of the semiclassical approximation, namely
the Planck scale. In section V we briefly discuss our results.
\section{The model and previous results}
In this Section we will only extract the main results of Ref. \cite{5}.
Let us consider a flat Robertson-Walker spacetime coupled to a massive
conformally coupled scalar field. In the specific model of \cite{5} we
have considered a gravitational action given by
\begin{equation}
S_g=M^2\int d\eta \,\left[ -%
{\textstyle {1 \over 2}}
\stackrel{.}{a}^2-V\left( a\right) \right], \label{accion grav}
\end{equation}
where $M$ is Planck's mass, $\stackrel{.}{a}\ =\frac{da}{d\eta }$
and $V(a)$
is the potential function that arises from a spatial curvature, a
possible
cosmological constant and, possibly, a classical matter field.
In this paper we will consider the potential function used by Birrell
and
Davies \cite{2} to illustrate the use of the adiabatic approximation in
an
asymptotically non-static four dimensional cosmological model:
\begin{equation}
V\left( a\right) =\frac{B^2}2\left( 1-\frac{A^2}{a^2}\right), \label{pot}
\end{equation}
where $A$ and $B$ are arbitrary constants.
The Wheeler-DeWitt equation for this model is:
\begin{equation}
H\Psi \left( a,\varphi \right) =\left( h_g+h_f+h_i\right) \Psi \left(
a,\varphi \right) =0, \label{h}
\end{equation}
where
\begin{equation}
h_g=\frac 1{2M}\partial _a^2+M^2V\left( a\right), \label{h1}
\end{equation}
\begin{equation}
h_f=-%
{\textstyle {1 \over 2}}
\int_k\left( \partial _{\varphi _k}^2-k^2\varphi _k^2\right) dk, \label{h2}
\end{equation}
\begin{equation}
h_i=\frac{m^2a^2}2\int_k\varphi _k^2dk, \label{h3}
\end{equation}
and $m$ is the mass of the scalar field.
In the semiclassical approximation, where the geometry is
considered as classical, and only the scalar field is quantized, we
propose a WKB solution to the Wheeler-DeWitt equation:
\begin{equation}
\Psi \left( a,\varphi \right) =\chi \left( a,\varphi \right) \exp \left[
iM^2S\left( a\right) \right],
\end{equation} where $S$ is the classical action for the geometry.
To leading order (i.e. $M^2$), we get:
\begin{equation}
\left[ \frac{dS\left( a\right) }{da}\right] ^2=2V\left( a\right), \label{v1}
\end{equation}
which is essentially the Hamilton-Jacobi equation for the variable
$a\left(
\eta \right) $. From this equation we can find the classical solutions
\begin{equation}
a\left( \eta \right) =\pm \left( A^2+B^2\eta ^2\right) ^{%
{\textstyle {1 \over 2}}
}+C, \label{potencial}
\end{equation}
where $C$ is a constant.
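As a quick numerical sanity check (a sketch with arbitrary values of $A$ and $B$, for the $C=0$ branch), one can verify that (\ref{potencial}) indeed satisfies $\dot{a}^2=2V(a)$, which is (\ref{v1}) written along the classical trajectory:

```python
# Sanity check (arbitrary A, B; the C = 0 branch of Eq. (potencial)):
# along a(eta) = sqrt(A^2 + B^2 eta^2) the Hamilton-Jacobi relation
# (dS/da)^2 = 2 V(a) becomes  adot^2 = B^2 (1 - A^2/a^2).
import math

A, B = 1.3, 0.7

def a(eta):
    return math.sqrt(A * A + B * B * eta * eta)

def adot(eta):
    # da/deta in closed form
    return B * B * eta / a(eta)

def two_V(eta):
    # 2 V(a(eta)) from Eq. (pot)
    return B * B * (1.0 - A * A / a(eta) ** 2)

mismatch = max(abs(adot(e) ** 2 - two_V(e)) for e in (0.0, 0.5, 2.0, 10.0))
```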
Taking the following order in the WDW equation, we obtain a
Schr\"{o}dinger
equation for $\chi \left( a,\varphi \right) :$
\begin{equation}
i\frac d{d\eta }\chi \left( a,\varphi \right) =-%
{\textstyle {1 \over 2}}
\int_k\left[ \partial _k^2-\Omega _k^2\varphi _k^2\right] dk\chi \left(
a,\varphi \right), \label{hamil}
\end{equation}
where $\Omega _k^2=m^2a^2+k^2$.
Since the coupling is conformal we will have well-defined vacua \cite{2}.
So, we
consider now two scales $a_{in}$ and $a_{out}$ such that
$0<a_{in}\ll a_{out}$%
. Next, we define the corresponding $\left| 0,in\right\rangle ,\left|
0,out\right\rangle $ vacua there, where $\left| 0,in\right\rangle $ is
the
adiabatic vacuum for $\eta \rightarrow -\infty $ and $\left|
0,out\right\rangle $ is the corresponding one for $\eta \rightarrow +\infty
$.
It is well known \cite{2,3} that, in the
case we are considering, we can diagonalize the time-dependent
Hamiltonian (Eq. (\ref{hamil})) at $a_{in}$ and a$_{out}$, define the
corresponding creation and annihilation operators, and the
corresponding Fock spaces.
Thus, following Eqs. $\left[ 37-43\right] $ from Ref. \cite{1} we can
construct the Fock space and find the eigenvectors of $h\left(
a_{out}\right)
,$ as follows:
\begin{equation}
h\left( a_{out}\right) \left| \left\{ k\right\} ,out\right\rangle =h\left(
a_{out}\right) \left| \varpi ,\left[ k\right] ,out\right\rangle =\Omega
\left( a_{out}\right) \left| \left\{ k\right\} ,out\right\rangle
=\sum_{k\varepsilon \left\{ k\right\} }\Omega _\varpi \left(
a_{out}\right)
\left| \varpi ,\left[ k\right] ,out\right\rangle
,\end{equation}
where $\left[ k\right] $ is the remaining set of labels necessary to
define
the vector unambiguously and $\left| \varpi ,\left[ k\right]
,out\right\rangle $ is an orthonormal basis \cite{1}.
In the same way we can find the eigenvectors of $h\left( a_{in}\right)
$.
Thus we can also define the S matrix between the in and out
states $\left(
\text{Eq. 44 of Ref. \cite{1}}\right) $:
\begin{equation}
S_{\varpi ,\left[ k\right] ;\varpi ^{\prime },\left[ k^{\prime }\right]
}=\left\langle \varpi ,\left[ k\right] ,in\right| \varpi ^{\prime },\left[
k^{\prime }\right] ,out\rangle =S_{\varpi ,\left[ k\right] ;\left[ k^{\prime
}\right] }\,\delta \left( \varpi -\varpi ^{\prime }\right)
\end{equation}
As we have explained in the Introduction, we will choose time-asymmetric
spaces in order to get a better description of time asymmetry of the
universe. Therefore we make the following choice: for the in Fock
space we
will use functions $\left| \psi \right\rangle \in \Phi _{+,in}$ namely,
such
that $\left\langle \varpi ,in\right| \psi \rangle \in S\mid _{R_{+}}$ and
$%
\left\langle \varpi ,in\right| \psi \rangle \in H_{+}^2\mid
_{R_{+}}$ where $%
H_{+}^2$ is the space of Hardy class functions from above;\ and for
the out
Fock space we will use functions $\left| \varphi \right\rangle \in \Phi
_{-,out}$ such that $\left\langle \varpi ,out\right| \varphi \rangle \in
S\mid _{R_{+}}$and $\left\langle \varpi ,out\right| \varphi \rangle \in
H_{-}^2\mid _{R_{+}}.$ So we can obtain a spectral
decomposition
for the $h\left( a_{out}\right) $ (in a weak sense) \cite{1,5}:
\begin{equation}
h\left( a_{out}\right) =\sum_n\Omega _n\left| \bar{n}\right\rangle
\left\langle \bar{n}\right| +\int dz\,\Omega _z\left| \bar{z}\right\rangle
\left\langle \bar{z}\right|, \label{ham2}
\end{equation}
where $\Omega _n^2=m^2a^2+z_n$ and $z_n$ are the poles of the S
matrix.
From references \cite{1} and \cite{5} it can be seen that the S matrix
corresponding to this model has infinitely many poles, and the mode $k$
corresponding to each pole reads:
\begin{equation}
k^2=mB\,\left[ -\frac{m\,A^2}B-2i\,\left( n+%
{\textstyle {1 \over 2}}
\right) \right]\label{k2}.
\end{equation}
Thus we can compute the squared energy of each pole:
\begin{equation}
\Omega _n^2=m^2a^2+mB\,\left[ -\frac{m\,A^2}B-2i\,\left( n+%
{\textstyle {1 \over 2}}
\right) \right]. \label{energia compleja}
\end{equation}
The mean life of each pole is:
\begin{equation}
\tau _n=%
{\textstyle {\sqrt{2} \over 2}}
\frac{\left[ m^2\left( a_{out}^2-A^2\right) +\left( m^4\left(
a_{out}^2-A^2\right) ^2+4m^2B^2\left( n+%
{\textstyle {1 \over 2}}
\right) ^2\right) ^{\frac 12}\right] ^{\frac 12}}{2\,m\,B\,\left( n+%
{\textstyle {1 \over 2}}
\right) }. \label{vida media}
\end{equation}
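A short numerical sketch (with illustrative parameter values, not taken from the paper) makes these complex energies concrete: taking the principal square root of (\ref{energia compleja}) gives $\mathop{\rm Re}\Omega_n>0$ and $\mathop{\rm Im}\Omega_n<0$, and the mean life then follows from $\tau_n=1/(2|\mathop{\rm Im}\Omega_n|)$, consistent with $\Omega_n=E_n-\frac{i}{2}\tau_n^{-1}$ used below:

```python
# Numerical sketch with illustrative parameters (not the paper's values):
# complex pole energies Omega_n from Eq. (energia compleja) and mean lives
# tau_n = 1/(2 |Im Omega_n|), consistent with Omega_n = E_n - (i/2)/tau_n.
import cmath
import math

m, A, B, a_out = 1.0, 0.5, 2.0, 3.0

def omega(n):
    omega_sq = m * m * a_out * a_out + m * B * (-m * A * A / B - 2j * (n + 0.5))
    return cmath.sqrt(omega_sq)      # principal branch: Re > 0, Im < 0 here

def tau(n):
    return 1.0 / (2.0 * abs(omega(n).imag))

def re_omega_closed(n):
    # closed-form real part of Omega_n
    x = m * m * (a_out ** 2 - A ** 2)
    s = math.sqrt(x * x + 4 * m * m * B * B * (n + 0.5) ** 2)
    return (math.sqrt(2) / 2) * math.sqrt(x + s)

check = max(abs(omega(n).real - re_omega_closed(n)) for n in range(5))
```

Higher modes decay faster: the mean life $\tau_n$ decreases with $n$.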
Using the spectral
decomposition (\ref{ham2}) we will show, in the next section, how decoherence
produces the
elimination of all quantum interference effects. But we must notice that we
can introduce this spectral
decomposition only using the unstable ideal states.
We believe that our results can be
generalized to other models, since essentially they are based on the
existence of an infinite set of poles in the scattering matrix. Nevertheless
the model considered in this paper will allow us to complete
all the calculations, being therefore a good example of what can be done
with our method.
\section{Perfect decoherence and no correlations}
In this section we will show how the complete set of unstable modes
destroys quantum interference, but also demolishes classical correlations.
The appearance of decoherence coming from the spectral decomposition of
Eq. $\left(
\text{%
\ref{ham2}}\right) $ shows the importance of the unstable modes in the
quantum-to-classical transition. It has been proved \cite{12}
that decoherence
is closely related to another dissipative process, namely, particle
creation
from the gravitational field during universe expansion. In Eq. $\left(
\text{%
\ref{ham2}}\right) $ we obtain as in \cite{5} a set of discrete unstable
states, namely, the unstable particles, and a set of continuous
stable
states (see Eq. (\ref{ham2})), the latter corresponding to the stable
particles.
As the modes do not interact between themselves we can write:
\begin{equation}
\chi \left( a,\varphi \right) =\prod_{n=1}^\infty \chi _n\left( \eta
,\varphi _n\right),
\end{equation}
and the Schr\"{o}dinger equation for each mode is
\begin{equation}
i\frac d{d\eta }\chi _n\left( a,\varphi _n\right) =-%
{\textstyle {1 \over 2}}
\left[ \partial _n^2-\Omega _n^2\varphi _n^2\right] \ \chi _n\left(
a,\varphi _n\right). \label{s}
\end{equation}
As usual, we now assume the Gaussian ansatz for $\chi _n\left( \eta
,\varphi
_n\right) :$
\begin{equation}
\chi _n\left( \eta ,\varphi _n\right) =A_n\left( \eta \right) \,\exp \left[
i\,\alpha _n\left( \eta \right) -B_n\left( \eta \right) \,\varphi _n^2\right]
,\label{agaussiano}
\end{equation}
where $A_n\left( \eta \right) $ and $\alpha _n\left( \eta \right) $ are
real, while $B_n\left( \eta \right) $ may be complex, namely,
$B_n\left(
\eta \right) =B_{nR}\left( \eta \right) +i\,B_{ni}\left( \eta \right) .$
After integrating over the scalar field modes, we can define the reduced density
matrix $\rho _r$ as:
\begin{equation}
\rho _r^{\alpha \beta }\left( a,a^{\prime }\right) =\prod_{n=1}^\infty \rho
_{rn}^{\alpha \beta }\left( \eta ,\eta ^{\prime }\right)
=\prod_{n=1}^\infty
\int d\varphi _n\,\chi _n^{\alpha *}\left( \eta ,\varphi _n\right) \ \chi
_n^\beta \left( \eta ^{\prime },\varphi _n\right). \label{matriz2}
\end{equation} where $\alpha $ and $\beta $ symbolize the two different
classical geometries.
It is convenient to introduce the following change of variable in order
to characterize the wave function of each mode:
\begin{equation}
B_m=-%
{\textstyle {i \over 2}}
\ \frac{\dot{g}_m}{g_m},
\end{equation}
where $g_m$ is the wave function that represents the quantum state of the
universe, being also the solution of
the differential equation
\begin{equation}\ddot g_m+\Omega_m^2 g_m=0,\end{equation}
where $\Omega_m$ can be the complex energy $\Omega_n$ in our treatment.
In the more general case we use an arbitrary initial state $\vert 0,0\rangle$,
instead of $\vert 0,in\rangle$. From the discussion presented in the
Introduction, and from Ref. \cite{gamow} we know that,
in a generic case, an infinite set of complex poles does exist. Then we must
replace (\ref{k2}) by $k^2=k_n^2$ ($n=0,1,2,\dots$), where these are the
points where the infinite poles are located in the complex plane $k^2$;
thus, $\Omega_n^2$ now reads as
\begin{equation}\Omega_n^2=m^2a^2+k_n^2.\end{equation}
We will consider the asymptotic (or adiabatic) expansion of the function $g_m$
when $a\rightarrow +\infty$ in the basis of the out modes. $g_m$ is the wave
function
that represents the state of the universe, corresponding to the arbitrary
initial state; its expansion reads
\begin{equation}g_m=\frac{P_m}{\sqrt{2\Omega_m}}\exp [-i\int_0^\eta \Omega_m
d\eta]+\frac{Q_m}{\sqrt{2\Omega_m}} \exp [i \int_0^\eta \Omega_m
d\eta],\label{gN}\end{equation}
where $P_m$ and $Q_m$ are arbitrary coefficients showing that $\vert
0,0\rangle$ is really arbitrary.
It is obvious that if all the $\Omega_m$ are real, as in the case of the
$\Omega_k$, (\ref{gN}) will have an oscillatory nature, as well as its
derivative.
This will also be the behaviour of $B_k$. Therefore the limit of $B_k$
when
$\eta \rightarrow +\infty$ will not be well defined even if $B_k$ itself is
bounded.
But if $\Omega_m$ is complex, with negative imaginary part, the first term of
(\ref{gN}) will have a damping factor and the second a growing one. In fact,
the complex extension of Eq.
(\ref{gN}) (with $m=n$) reads
\begin{equation}g_n=\frac{P_n}{\sqrt{2\Omega_n}}\exp [-i\int_0^\eta \Omega_n
d\eta]+\frac{Q_n}{\sqrt{2\Omega_n}} \exp [i \int_0^\eta \Omega_n
d\eta].\end{equation}
Therefore when $\eta \rightarrow +\infty$ we have
\begin{equation}B_n
\approx -\frac{i}{2}\frac{\dot{g}_n}{g_n}=\frac{1}{2}\Omega_n.
\end{equation}
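This limit is easy to check numerically (a sketch with an assumed constant complex frequency and arbitrary $P$, $Q$, not the paper's values): for $\mathop{\rm Im}\Omega<0$ the second branch of (\ref{gN}) dominates at late times and $-\frac{i}{2}\dot g/g$ settles to $\Omega/2$:

```python
# Numerical sketch (assumed constant complex Omega and arbitrary P, Q):
# for Im Omega < 0 the second branch of g grows at late times and
# B = -(i/2) gdot/g tends to Omega/2.
import cmath

Omega = 1.0 - 0.1j
P, Q = 0.8, 1.3                       # arbitrary coefficients

def g(eta):
    return (P * cmath.exp(-1j * Omega * eta)
            + Q * cmath.exp(1j * Omega * eta)) / cmath.sqrt(2 * Omega)

def gdot(eta):
    return (-1j * Omega * P * cmath.exp(-1j * Omega * eta)
            + 1j * Omega * Q * cmath.exp(1j * Omega * eta)) / cmath.sqrt(2 * Omega)

def B_of(eta):
    return -0.5j * gdot(eta) / g(eta)

late_error = abs(B_of(200.0) - Omega / 2)
```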
Then we have two cases:
i) $\Omega_m=\Omega_k$ $\in {\cal R}^+$ for the real factors. Then we see that
when
$\eta \rightarrow +\infty$, the r.h.s. of (\ref{matriz2}) is an oscillatory
function
with no limit in general. We only have a good limit for some particular
initial conditions \cite{7} (as $Q_m=0$ or $P_m=0$).
ii) $\Omega_m=\Omega_n=E_n-\frac{i}{2}\tau_n^{-1}$ $\in {\cal C}$ for the
complex factors. If we choose the lower Hardy class space $\Phi_-$ to define
our
rigged Hilbert space we will have a negative imaginary part, and there will
be a damping factor in the first term of (\ref{gN}) and a growing factor in
the second
one. In this case, for $a\rightarrow +\infty$, we have the definite limit:
\begin{equation}B_n={1\over{2}}\Omega_n.\label{c4}\end{equation}
From equations $\left( \text{\ref{potencial}}\right) $, $\left( \text{\ref{energia compleja}}\right) $ and $\left( \text{\ref{c4}}\right) $ we can
compute the expression for $B_n$ for both semiclassical solutions
$\alpha $
and $\beta :$
\begin{eqnarray}
B_n\left( \eta ,\alpha \right) &=&B_n\left( \eta ,\beta \right) =%
{\textstyle {\sqrt{2} \over 4}}
\left[ m^2B^2\eta ^2+\left( m^4B^4\eta ^4+4m^2B^2\left( n+%
{\textstyle {1 \over 2}}
\right) ^2\right) ^{\frac 12}\right] ^{\frac 12} \label{e1} \\
&&-i\quad \frac{%
{\textstyle {\sqrt{2} \over 2}}
mB\left( n+%
{\textstyle {1 \over 2}}
\right) }{\left[ m^2B^2\eta ^2+\left( m^4B^4\eta ^4+4m^2B^2\left( n+%
{\textstyle {1 \over 2}}
\right) ^2\right) ^{\frac 12}\right] ^{\frac 12}}. \nonumber
\end{eqnarray}
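As a quick numerical sanity check of the asymptotics used below (this sketch is not part of the original derivation; the values $m=B=1$ and $n=2$ are purely illustrative), the real part of $B_n$ in Eq. (\ref{e1}) grows as $\frac{1}{2}mB\eta$ while its imaginary part decays as $-(n+\frac{1}{2})/(2\eta)$ for large $\eta$:

```python
import math

def B_n(eta, m=1.0, B=1.0, n=2):
    """Evaluate Eq. (e1): the explicit radicals for Re B_n and Im B_n."""
    inner = math.sqrt(m**4 * B**4 * eta**4 + 4 * m**2 * B**2 * (n + 0.5)**2)
    root = math.sqrt(m**2 * B**2 * eta**2 + inner)
    re = (math.sqrt(2) / 4) * root
    im = -(math.sqrt(2) / 2) * m * B * (n + 0.5) / root
    return complex(re, im)

eta = 1.0e4
b = B_n(eta)
# Large-eta behaviour: Re B_n ~ m*B*eta/2, Im B_n ~ -(n+1/2)/(2*eta)
print(b.real / (eta / 2))   # -> approx 1.0
print(b.imag * 2 * eta)     # -> approx -(n + 1/2) = -2.5
```

This damping of the imaginary part as $\eta\to\infty$ is what makes the limit (\ref{c4}) well defined for the complex factors.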
Now we will see, by making the exact calculations, that in the limit $\eta
\rightarrow \infty $ there is necessarily decoherence for:
a) different classical geometries ($\alpha \neq \beta $), i.e. $\left|
\rho _r^{\alpha \beta }\left( \eta ,\eta ^{\prime }\right) \right|
\rightarrow 0$ when $\eta \rightarrow \infty $.
b) for the same classical geometry if the times $\eta $ and $\eta
^{\prime }$
are different, namely $\left| \rho _r^{\alpha \alpha }\left( \eta ,\eta
^{\prime }\right) \right| \rightarrow 0$ and $\left| \rho _r^{\beta \beta
}\left( \eta ,\eta ^{\prime }\right) \right| \rightarrow 0$ when $\eta
\rightarrow \infty .$
From equations $\left( \text{\ref{agaussiano}}\right) $ and $\left( \text{%
\ref{matriz2}}\right) $ we obtain:
\begin{equation}
\rho _{rn}^{\alpha \beta }\left( \eta ,\eta ^{\prime }\right) =\left( \frac{%
4\,B_{nR}\left( \eta ,\alpha \right) \ B_{nR}\left( \eta ^{\prime },\beta
\right) }{\left[ B_n^{*}\left( \eta ,\alpha \right) +B_n\left( \eta ^{\prime
},\beta \right) \right] ^2}\right) ^{\frac 14}\exp \left[ -i\alpha _n\left(
\eta ,\alpha \right) +i\alpha _n\left( \eta ^{\prime },\beta \right) \right]
.\label{matriz}
\end{equation}
First, we will study decoherence for case b): the same semiclassical
solution but
different conformal times. Therefore we will calculate the
asymptotic
behavior $\left( \eta ,\eta ^{\prime }\rightarrow \infty \right) $ of $%
\,\left| \rho _{rn}^{\alpha \alpha }\left( \eta ,\eta ^{\prime }\right)
\right| $, which reads:
\begin{equation}
\left| \rho _{rn}^{\alpha \alpha }\left( \eta ,\eta ^{\prime }\right)
\right| \cong \left[ \frac{4\,\eta \,\eta ^{\prime }}{\left[ \eta +\eta
^{\prime }\right] ^2}\right] ^{\frac 14}. \label{m}
\end{equation}
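As a hedged numerical sketch (illustrative values only, $m=B=1$, $n=2$; not part of the paper's derivation), one can confirm Eq. (\ref{m}) by inserting the exact $B_n$ of Eq. (\ref{e1}) into the modulus of Eq. (\ref{matriz}) at large conformal times:

```python
import math

def B_n(eta, m=1.0, B=1.0, n=2):
    # Explicit evaluation of Eq. (e1)
    inner = math.sqrt(m**4 * B**4 * eta**4 + 4 * m**2 * B**2 * (n + 0.5)**2)
    root = math.sqrt(m**2 * B**2 * eta**2 + inner)
    return complex(math.sqrt(2) / 4 * root,
                   -math.sqrt(2) / 2 * m * B * (n + 0.5) / root)

def rho_modulus(eta, etap):
    """Modulus of Eq. (matriz): |z^(1/4)| = |z|^(1/4)."""
    b1, b2 = B_n(eta), B_n(etap)
    return (4 * b1.real * b2.real / abs(b1.conjugate() + b2)**2) ** 0.25

eta, etap = 2.0e3, 3.0e3
exact = rho_modulus(eta, etap)
asymptotic = (4 * eta * etap / (eta + etap)**2) ** 0.25   # Eq. (m)
print(exact, asymptotic)   # agree to high precision for large eta
```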
Making the change of variables $\frac{\eta -\eta ^{\prime }}2
=\Delta$, $\frac{\eta +\eta ^{\prime }}2=\bar{\eta}$, with $\Delta \ll 1$,
we obtain:
\begin{equation}
\left| \rho _{rn}^{\alpha \alpha }\left( \eta ,\eta ^{\prime }\right)
\right| \cong \left[ 1-\left( \frac \Delta {\bar{\eta}}\right) ^2\right] ^{%
\frac 14}. \label{n}
\end{equation}
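The step from Eq. (\ref{m}) to Eq. (\ref{n}) is in fact an exact algebraic identity under this change of variables, as a short numerical check (with arbitrary illustrative numbers, not taken from the paper) confirms:

```python
# With eta = eta_bar + Delta and eta' = eta_bar - Delta,
# 4*eta*eta'/(eta + eta')^2 reduces exactly to 1 - (Delta/eta_bar)^2.
eta_bar, Delta = 7.0, 0.3
eta, eta_p = eta_bar + Delta, eta_bar - Delta
lhs = 4 * eta * eta_p / (eta + eta_p)**2
rhs = 1 - (Delta / eta_bar)**2
print(lhs, rhs)   # identical up to rounding
```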
Since $\left| \rho _{rn}^{\alpha \alpha }\left( \eta ,\eta ^{\prime }\right)
\right| \leq 1$, with equality only if $\eta =\eta ^{\prime }$, each factor
of the infinite product in Eq. $\left( \text{\ref{matriz2}}\right) $ is
strictly smaller than one, so $\left| \rho
_r^{\alpha \alpha }\left( \eta ,\eta ^{\prime }\right) \right| $ is equal to
zero if $\eta \neq \eta ^{\prime }.$ This means that the reduced
density
matrix has diagonalized perfectly, i.e. we have achieved perfect
decoherence. However, it is known \cite{6,8if,7} that perfect
decoherence
also implies that the Wigner function has an infinite spread, so we
cannot
say that the system is classical.
On the other hand, the authors of Refs. \cite{10,11}, working
with the
consistent histories formalism, made the assumption that exactly
consistent
sets of histories must be found very close to an approximately
consistent
set. In fact we have found the exact consistent set of histories, so it
would be reasonable to say that there are many approximately
consistent sets
near it. Although we are not working with this formalism, we can
conjecture
that this statement is also valid in our case. Then,
having an
exact consistent set of histories means, in our formalism, exact
decoherence.
So, we can try to find the approximate decoherence (i.e. the approximately
consistent sets) near the exact one.
\section{Approximate decoherence and classical correlations}
If we introduce a cutoff $N$ in Eq. $\left( \text{\ref{matriz2}}\right) $ at
some very large value of $n$, the reduced density matrix is
not diagonal
anymore, i.e. we obtain an approximate decoherence. Let us
postpone
to the next section the discussion about the value and nature of $N$. Thus,
if $\eta \approx \eta ^{\prime }$, we obtain:
\begin{equation}
\left| \rho _r^{\alpha \alpha }\left( \eta ,\eta ^{\prime }\right) \right|
=\left| \prod_{n=1}^N\rho _{rn}^{\alpha \alpha }\left( \eta ,\eta ^{\prime
}\right) \right| \approx \exp \left[ -\frac N4\left( \frac \Delta
{\bar{\eta}}\right) ^2\right]. \label{a}
\end{equation}
From the last equation, we observe that the reduced density matrix
turns out
to be a Gaussian of width $\sigma _d$, where:
\begin{equation}
\sigma _d=\frac{2\ \bar{\eta}}{N^{\frac 12}}. \label{dec}
\end{equation}
Thus, it must be $\sqrt{N}>>1$ in order to obtain decoherence.
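In this asymptotic regime every mode contributes the same factor $[1-(\Delta/\bar\eta)^2]^{1/4}$, so the product over $n$ is simply a power; a brief numerical sketch (with illustrative numbers, not taken from the paper) shows how it approaches the Gaussian of Eq. (\ref{a}) for small $\Delta/\bar\eta$:

```python
import math

N, Delta, eta_bar = 400, 0.05, 10.0
x2 = (Delta / eta_bar)**2
product = (1 - x2) ** (N / 4)        # product of N identical factors [1 - x^2]^(1/4)
gaussian = math.exp(-(N / 4) * x2)   # Gaussian approximation, Eq. (a)
print(product, gaussian)             # nearly identical for small Delta/eta_bar
```

At $\Delta = \sigma_d = 2\bar\eta/\sqrt{N}$ the exponent equals $-1$, which is how the width (\ref{dec}) is read off.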
From equations (\ref{e1}) and (\ref{matriz}) we compute $\left| \rho
_r^{\beta \beta }\left( \eta ,\eta ^{\prime }\right) \right| $ and, for case a), $%
\left| \rho _r^{\alpha \beta }\left( \eta ,\eta ^{\prime }\right) \right| $,
and obtain for $\eta \rightarrow \infty $, as in Eq. (\ref{m}):
\begin{equation}
\left| \rho _{rn}^{\beta \beta }\left( \eta ,\eta ^{\prime }\right) \right|
=\left| \rho _{rn}^{\alpha \beta }\left( \eta ,\eta ^{\prime }\right)
\right| \cong \left[ \frac{4\,\eta \,\eta ^{\prime }}{\left[ \eta +\eta
^{\prime }\right] ^2}\right] ^{\frac 14}.
\end{equation}
So, following the same steps we did for $\left| \rho _{rn}^{\alpha
\alpha
}\left( \eta ,\eta ^{\prime }\right) \right| \left[ \text{Eqs. (\ref{m}) to
(\ref{dec})}\right] $, we can see that the ``decoherence condition'' $\,\left(
\text{Eq. \ref{dec}}\right) $ is the same for case b), different conformal
times,
and case a), different classical geometries. It is easy to see that we
can
follow the same steps for $\left| \rho _{rn}^{\alpha \beta }\left( \eta
,\eta ^{\prime }\right) \right| $, since from Eq. (\ref{e1}) $B_n\left( \eta
,\alpha \right) =B_n\left( \eta ,\beta \right) $.
At this point we will analyze the existence of correlations between
coordinates and
momenta using the Wigner function criterion \cite{9}. Since
correlations between
coordinates and momenta should be examined ``inside'' each
classical branch,
we compute the Wigner function associated with each
semiclassical solution. The Wigner function associated with the
reduced density
matrix given by equations $\left( \text{\ref{matriz2}}\right) $ and
$\left(
\text{\ref{matriz}}\right) $ is \cite{7}:
\begin{equation}
F_W^{\alpha \alpha }\left( a,P\right) \cong C^2\left( \eta \right)
\,\sqrt{%
\frac \pi {\sigma _c^2}}\exp \left[ -\frac{\left( P-M^2\dot{S}%
+\sum_{n=1}^N\left( \dot{\alpha}_n-\frac{\dot{B}_{ni}}{4B_{nR}}\right)
\right) ^2}{\sigma _c^2}\right],
\end{equation}
where
\begin{equation}
\sigma _c^2=\sum_{n=1}^N\frac{\left| \dot{B}_n\right| ^2}{4B_{nR}^2}.
\end{equation}
We can predict strong correlations when the centre of the peak of the
Wigner
function is large compared to its spread, i.e., when:
\begin{equation}
\left( M^2\dot{S}-\sum_{n=1}^N\left( \dot{\alpha}_n-
\frac{\dot{B}_{ni}}{%
4B_{nR}}\right) \right) ^2\gg \sigma _c^2. \label{correlaciones}
\end{equation}
Using the same approximation we made for calculating the reduced
density
matrix, we obtain the following expression for the width of the Wigner
function:
\begin{equation}
\sigma _c^2\left( \eta ,\alpha \right) \cong \frac N{4\,\eta ^2}
.\label{cor1}
\end{equation}
We can see that $\sigma _c$ is the inverse of $\sigma _d$ (Eq.
$\left(
\text{\ref{dec}}\right) $), showing the antagonistic relation between
decoherence and
correlations \cite{7}.
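This reciprocity can be made explicit with a one-line numerical check (arbitrary illustrative values of $N$ and $\eta$): from Eqs. (\ref{dec}) and (\ref{cor1}), $\sigma_c\,\sigma_d = 1$ identically when $\bar\eta \approx \eta$:

```python
import math

N, eta = 1.0e4, 50.0
sigma_c = math.sqrt(N / (4 * eta**2))   # width of the Wigner function, Eq. (cor1)
sigma_d = 2 * eta / math.sqrt(N)        # width of the reduced density matrix, Eq. (dec)
print(sigma_c * sigma_d)                # -> 1.0: sharper decoherence, broader Wigner peak
```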
We also calculate the centre of the peak of the Wigner function,
namely:
\begin{equation}
\left( M^2\dot{S}-\sum_{n=1}^N\left( \dot{\alpha}_n-
\frac{\dot{B}_{ni}}{%
4B_{nR}}\right) \right) ^2\cong m^2B^2N^2\eta ^2. \label{cor2}
\end{equation}
From equations (\ref{cor1}) and (\ref{cor2}) it is possible to see the
behavior
of the
centre of the peak and the width of the Wigner function in the limit
$\eta
\rightarrow \infty .$ Thus the condition for the existence of
correlations
turns out to be:
\begin{equation}
N>>\frac 1{m^2B^2\eta ^4}. \label{correlaciones2}
\end{equation}
So, if the value of the cutoff is such that $N>>1$ and $N>>\frac 1{%
m^2B^2\eta ^4}$, we can say that the system behaves classically: the
off-diagonal terms of the reduced density matrix are exponentially smaller than
the diagonal terms, while we can predict strong correlations between $a\left(
\eta \right) $ and its conjugate momentum.
\subsection{Decoherence and Correlations with a specific value for the
cutoff}
In this subsection we propose and discuss a particular value for the cutoff
$N$, using a relevant physical scale of the theory, namely, the
Planck scale.
As we have already mentioned, it has been shown that stable and unstable
particles are created by the universe expansion. \cite{2,5,12} But,
in this work, we have used only the contribution of the unstable particles
(the poles of the S matrix) to verify the emergence of the classical
behavior. Thus, a reasonable choice for the value of $N$ might be to
consider in Eq. $\left( \text{\ref{a}}\right) $ only those unstable
particles (poles) whose mean life is longer than the Planck time ($t_p=M^{-1}$
in our units). This implies that particles with a shorter lifetime will be
considered to be outside the domain of our semiclassical quantum gravity model.
In order to calculate the mean life of each pole we have to transform
equations $\left( \text{\ref{energia compleja}}\right) $, $\left(
\text{\ref{vida media}}\right) $ and $\left( \text{\ref{e1}}\right) $ to the
non-rescaled case, namely the physical energy is $\frac{\Omega _n}a$ and the
physical decaying time is $\tau _n^{\prime }=a\tau _n.$ Thus from $\
\left( \text{\ref{vida media}}\right) $ we obtain for $\eta \rightarrow
\infty $ the mean life of the unstable state $n$:
\begin{equation}
\tau _n^{\prime }=\frac{B\,\eta _{out}^2}{\left( n+\frac 12\right) }.
\end{equation}
Thus, with this choice, we consider in Eq. $\left( \text{\ref{a}}\right) $
only those unstable particles with mean life:
\begin{equation}
\tau _n^{\prime }=\frac{B\,\eta ^2}{\left( n+\frac 12\right) }>\frac 1M=t_p
.\end{equation}
Therefore the value of the cutoff turns out to be $N\,=\,M\ B\ \eta ^2$.
It could be argued that this particular value of $N$ depends on the conformal
time $\eta ,$ but it should be noted that $\frac N{a^2\left( \eta \right) }$
does not depend on $\eta $ anymore. Therefore, $N=N\left( \eta
\right) $
should be regarded as a consequence of the universe
expansion. The
reduced density matrix (Eq. (\ref{a})) turns out to be a Gaussian of
width $\sigma _d$, where:
\begin{equation}
\sigma _d=\frac{2\ \eta }{N^{\frac 12}}=\frac 2{\left( M\,B\right) ^{\frac
12}}; \label{sigma}
\end{equation}
and, as $\eta =\left( \frac{2\ t}B\right) ^{\frac 12}$, we
obtain the following expression for the ratio $\frac{\sigma _d}\eta $
as a function of $t$:
\begin{equation}
\frac{\sigma _d}\eta =\sqrt{\frac 2{M\ t}}\approx \sqrt{\frac{\ t_p}t}
.\label{dec2}
\end{equation}
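Equation (\ref{dec2}) can be verified directly by substituting $N=M B \eta^2$ and $\eta=(2t/B)^{1/2}$ into $\sigma_d/\eta = 2/\sqrt{N}$; the following sketch (with arbitrary illustrative values for $M$, $B$ and $t$, not taken from the paper) confirms the algebra:

```python
import math

M, B, t = 1.0e5, 3.0, 2.0e-3          # illustrative values; M = 1/t_p in these units
eta = math.sqrt(2 * t / B)            # conformal time in terms of physical time
N = M * B * eta**2                    # cutoff from the Planck-time condition
ratio = (2 * eta / math.sqrt(N)) / eta
print(ratio, math.sqrt(2 / (M * t)))  # the two expressions for sigma_d/eta coincide
```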
Therefore the off-diagonal terms will be exponentially smaller than
the
diagonal terms for $t>>\frac 1M=t_p.$
With $N=M\,B\,\eta ^2$, we obtain the following expression for Eq.
$\left(
\text{\ref{correlaciones}}\right) $:
\begin{equation}
m^2M\,B^3\eta ^6>>1
.\end{equation}
Writing the last equation as a function of the physical time $t$, we
obtain
the condition for the existence of strong correlations:
\begin{equation}
t>>\left( \frac{t_p\ }{8\ m^2\ }\right) ^{\frac 13}
.\end{equation}
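The last condition follows because $m^2 M B^3 \eta^6 = 8 m^2 M t^3$ once $\eta^2 = 2t/B$ is substituted, so the inequality becomes $t^3 \gg 1/(8 m^2 M) = t_p/(8 m^2)$; a short numerical check (with illustrative values, not from the paper) confirms the substitution:

```python
M, B, m, t = 1.0e5, 3.0, 0.7, 5.0e-2   # illustrative values; M = 1/t_p
eta2 = 2 * t / B                       # eta^2 from eta = (2t/B)^(1/2)
lhs = m**2 * M * B**3 * eta2**3        # m^2 M B^3 eta^6
rhs = 8 * m**2 * M * t**3              # the same quantity expressed through t
print(lhs, rhs)
```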
\section{Conclusions}
We have shown that the S-matrix of a quantum
field theory in
curved space model has an infinite set of poles. The presence of
these singularities produces the appearance of unstable ideal generalized
states (with
complex eigenvalues) in the Universe evolution. The corresponding eigenvectors
are Gamow vectors and produce exponentially decaying terms. The best feature
of these decaying terms is
that they simplify and clarify calculations. The Universe expansion
leads to decoherence if this expansion produces
particle creation as well. Our unstable states enlarge the set of
initial conditions where we can prove that decoherence occurs. In fact, the
damping factors allow the interference elements of the reduced
density matrix to disappear for almost any non-equilibrium initial
condition of the matter fields. Following the standard procedures, we have
also shown that the unstable ideal generalized states satisfy the correlation
conditions, which, together with the decoherence phenomenon, are the origin of
the semiclassical Einstein equations.
The conditions for decoherence and correlations were imposed by
means of an ultraviolet cutoff, $N$, related to the energy scale
where the semiclassical approximation is taken as valid. The introduction
of this cutoff is relevant in order to preserve both necessary conditions for
classicality: decoherence plus correlations. Without the presence of the
cutoff, the infinite set of unstable modes would destroy the classical
correlations and the semiclassical limit would be untenable.
Decoherence is the key to understanding the relationship between the arrows
of time in cosmology. In the context of quantum open systems, where the
metric is viewed as the ``system'' and the quantum fields as the
``environment,'' decoherence is produced by the continuous interaction between
system and environment. The non-symmetric transfer of information from system
to environment is the origin of an entropy increase (in the sense of von
Neumann), because there is loss of information in the system, and of the time
asymmetry in cosmology, because growth of entropy, particle creation and
isotropization show a tendency towards equilibrium. However, decoherence is
also a necessary condition for the quantum to classical transition. In the
density matrix formulation, decoherence appears as the destruction of
interference terms and, in our model, as the transition from a pure to a mixed
state in the time evolution of the density matrix associated with the RW
metric; the interaction with the quantum modes of the scalar fields is the
origin of such a non-unitary evolution.
It is interesting to note that, in the cosmological model we considered,
unstable particle
creation and decoherence are the effect of resonances between the evolutions
of the scale factor $a$ and the free massive field, which is, on
the other hand, the origin
of the chaotic behaviour in the classical evolution of the cosmological model
\cite{fer1}. This observation opens a new and interesting path in the study
of the relationship between classical chaotic models and the decoherence
phenomena.
\section*{Acknowledgments}
This work was supported by Universidad de Buenos Aires, CONICET and
Fundaci\'on Antorchas.
\section{Introduction}
This paper is concerned with the critical dynamics of
crystals undergoing a second-order phase transition
from a high-temperature normal ($N$) phase to a structurally incommensurate
($IC$) modulated phase.
In the $IC$ phase, the translational symmetry of the lattice is broken by a
modulation in such a way that the characteristic wave vector ${\vc q}_I$
is a non-rational multiple of a basic lattice vector.
The occurrence of incommensurate modulations is in general understood as a
consequence of competing interactions. \cite{Sel92}
The most important characteristic of these systems
is that the ground state does not depend
on the actual phase of the incommensurate modulation at a lattice site.
This implies that the initial phase of the modulation wave is arbitrary
and one must take into account a phase shift degeneration of the ground
state energy.
Consequently, not only the amplitude of the modulating vector is
required to characterize each configuration, but
in addition the phase at an arbitrary lattice site must be fixed.
Therefore, a two-dimensional order parameter ($OP$) has to be
employed in order to describe the phase transition from a $N$ phase to an
$IC$ modulated phase. \cite{Bru78a}
Interesting static properties, e.g.,
the very rich phase diagrams of systems with competing interactions,
emerge. \cite{Cum90}
However, in this work we concentrate on dynamical properties.
Considering fluctuations of the $OP$,
the normal modes can be expressed in terms of the
transverse and longitudinal components $\psi^\perp$ and $\psi^\parallel$
in the two-dimensional $OP$ space. \cite{Bru78a}
The fluctuations of $\psi^\perp$ and $\psi^\parallel$ can be
identified with the fluctuations of
the phase and the amplitude of the modulation in the crystal. \cite{Bru78b}
As a consequence of the $OP$ being two-dimensional,
the lattice dynamics of structurally incommensurate phases
shows some peculiar effects which are different from ordinary
crystalline phases.
Namely, below the transition temperature $T_I$
two non-degenerate branches of modes appear in the dynamical
spectrum. \cite{Bru78b}
The ``amplitudon'' branch, connected with the fluctuations of the
amplitude of the incommensurate modulation, exhibits common soft-mode
behavior.
In addition, the ``phason'' branch represents the massless Goldstone
modes of the system, here originating from the invariance of the crystal energy
with respect to a phase shift.
Because of the massless Goldstone modes \cite{Gol61,Wag66} present
in the entire $IC$ phase, new types of anomalies may occur.
Examples of such anomalies were discussed in the literature
before. \cite{Nel76,Maz76,Sch92,Sch94}
Thus we expect some peculiar features of the dynamics in the $IC$
modulated phase stemming
from the Goldstone modes and their coupling to the other $OP$ modes.
The purpose of this paper on the one hand is to provide
a general framework for the
analysis of the critical dynamics above and below the $N$/$IC$
phase transition.
The theoretical description of such systems is based on an $O(2)$
symmetric time-dependent Ginzburg-Landau model, with purely
relaxational behavior of the non-conserved order parameter. \cite{Bru78b}
The more general $O (n)$-symmetric model
has been widely studied above the
critical temperature by means of the dynamical renormalization group.
\cite{Hal72,Dom75}
Below the critical temperature, the $O(n)$ symmetry
is spontaneously broken;
as mentioned above parallel and perpendicular fluctuations have to be
distinguished.
We will start from the field-theoretical model of
incommensurate phase transitions and derive the corresponding dynamical
Janssen-De~Dominicis functional, \cite{Dom76,Jan76}
which provides us with the framework to calculate
interesting theoretical properties and correlation functions, which are
required for the interpretation of experimental data.
Furthermore, we intend to give a comprehensive theoretical
description that goes beyond the mean-field or quasi-harmonic approach
for the $IC$ phase and which is missing to date.
We present an explicit renormalization-group analysis to one-loop
order above and below the critical temperature $T_I$.
The renormalization group theory will lead us beyond the
mean-field picture and provide some new insight on the effects
of the Goldstone modes on the dynamical properties below $T_I$.
Some specific features of the
Goldstone modes were discussed for the statics by Lawrie
\cite{Law81} and for the dynamics in Refs. \onlinecite{Tae92} and
\onlinecite{Sch94}.
In the present paper, we extend the analysis of the
$O(n)$-symmetric model, specifically
for the case $n=2$.
We also consider the crossover behavior from the
classical critical exponents to the non-classical ones in detail, both
above and below the critical temperature (see also Ref.
\onlinecite{Tae93}).
Furthermore our model is employed to analyze specific experiments.
Quadrupole-perturbed nuclear magnetic resonance ($NMR$) is an
established method
to investigate $IC$ phases in a very accurate way. \cite{Bli86}
In this probe, the interaction of the nuclear quadrupole moment
($Q$) of the
nucleus under investigation with the electric field gradient ($EFG$) at its
lattice site is measured.
The fluctuations of the normal modes give rise to a fluctuating $EFG$, which is
related to the transition probabilities between the nuclear spin levels.
As a consequence, the relaxation rate $1/T_1$ of the spin-lattice relaxation is
given by the spectral density of the $EFG$ fluctuations at the Larmor
frequency.
We calculate the $NMR$ relaxation rate with our theoretical model,
and compare our
findings with the experimental data.
Our results may be used to interpret a variety
of experimental findings; however, here we
will restrict ourselves to the analysis of $NMR$ experiments.
The theory presented here
is appropriate for the universality class containing, e.g.,
the crystals of the $\mbox{A}_{2}\mbox{B}\mbox{X}_4$ family.
Some very precise $NMR$ experiments on these crystals were
performed over the past years. \cite{Bli86,Wal94}
Above $T_I$, these data can be used to analyze the critical
dynamics in a temperature range of $T-T_I = 100 K$ more closely.
Below $T_I$, an identification of relaxation rates,
caused by fluctuations of the amplitude and the phase, respectively, at
special points of the $NMR$ frequency distribution is possible.
Therefore the relaxation rates
$1/T_A$ and $1/T_\phi$, referring to the
critical dynamics of the two distinct excitations (``amplitudons" and
``phasons"), can be studied separately.
These experiments led to some additional open questions.
Above the critical temperature $T_I$, a large region was reported where
non-classical critical exponents were found. \cite{Hol95}
Below $T_I$, the presence of a phason gap is discussed in order to clarify some
experiments as well as the theoretical understanding.
\cite{Top89,Hol95}
We will show how these questions can be resolved within the framework of
our theory.
This paper is organized as follows:
In the following section we introduce the model free energy for a system
that reveals a $N$ to $IC$ phase transition.
The dynamics of the amplitude and phase modes is described by
Langevin-type stochastic equations of motion, and
we give a brief outline of the general dynamical perturbation theory.
In Sec. III, the connection between the $NMR$ experiments and the
susceptibility calculated within our theory is discussed.
We shall see that the spectral density functions are closely related to
the measured relaxation times.
The high-temperature phase is analyzed in Sec. IV.
Above $T_I$, scaling arguments are used to derive the critical exponents
for the relaxation rate. The crossover from non-classical to
classical critical behavior is discussed by means of a renormalization group
analysis, and we comment on the width of the critical region.
In Sec. V, we apply the renormalization group
formalism to the low-temperature phase.
The susceptibility, containing the critical dynamical behavior for
the amplitudon and phason modes, is calculated to one-loop order.
Specifically, the influence of the Goldstone mode is investigated.
In the final section we shall discuss our results and give some conclusions.
\section{Model and Dynamical Perturbation Theory}
\subsection{Structurally incommensurate systems}
We want to study second-order phase transitions from a high-temperature
normal ($N$) phase to a structurally incommensurate ($IC$) modulated phase
at the critical temperature $T_I$.
The real-space displacement field corresponding to the one-dimensional
incommensurate modulation can be represented by its normal mode coordinates
$Q({\vc q})$. \cite{Bru78a}
We will treat systems with a star of soft modes \cite{La80}
consisting only of two
wavevectors ${\vc q}_I$ and ${-\vc q}_I$ along one of the principal
directions of the Brillouin zone, e.g. substances of the
$\mbox{A}_{2}\mbox{B}\mbox{X}_{4}$ family. \cite{Bru78a}
Because the incommensurate modulation wave is in most
cases, at least close to $T_I$, a single harmonic function of space,
the primary Fourier components $\langle Q({\vc q}) \rangle
\propto \delta({\vc q} \pm {\vc q}_I)
e^{i \phi_0}$ with the incommensurate wavevectors $\pm {\vc q}_I$
are dominating.
Using $Q({\vc q})$ as a primary order parameter of the
normal-to-incommensurate phase transition in the Landau-Ginzburg-Wilson
free energy functional, diagonalization leads to \cite{Bru78a}
\begin{align}
H[ \{ \psi_\circ^\alpha \} ] = &\frac{1}{2} \sum_{\alpha= \phi,A} \int_{\vc k}
(r_\circ + k^2) \psi_\circ^\alpha ({\vc k}) \psi_\circ^\alpha (-{\vc k})
\\
& + \frac{ u_\circ }{4 !} \sum_{\alpha,\beta = \phi,A}
\int_{{\vc k}_1} \dots \int_{{\vc k}_4} \nonumber \\
& \times \psi_\circ^\alpha ({\vc k}_1) \psi_\circ^\alpha ({\vc k}_2)
\psi_\circ^\beta ({\vc k}_3) \psi_\circ^\beta ({\vc k}_4) \;
\delta(\sum_{l=1}^{4} {\vc k}_l) \ , \nonumber
\end{align}
with new Fourier coordinates
$\psi_\circ^\phi({\vc k})$ and $\psi_\circ^A({\vc
k})$ in the $OP$ space. Here, we have introduced the abbreviations
$\int_{k}^{} ... = \frac{1}{(2 \pi)^d} \int_{}^{} d^d k ... $, and
$\int_{\omega}^{} ... = \frac{1}{2 \pi} \int_{}^{} d \omega ... $ .
Below the phase transition, the
fluctuations of $\psi_\circ^\phi$ and $\psi_\circ^A$ can be
identified with the fluctuations of the phase and the
amplitude of the displacement field,
named {\it phason} and {\it amplitudon}. \cite{Bru78b}
The wavevector ${\vc k}$ measures the deviation from the incommensurate
wavevector ${\vc q_I}$,
\begin{align}
{\vc k} = {\vc q} \mp {\vc q}_I \ .
\end{align}
The parameter $r_\circ$ is proportional to the distance from the mean-field
critical temperature $T_{\circ I}$
\begin{align}
r_\circ \propto T-T_{\circ I}
\end{align}
and the positive coupling $u_\circ$ gives the strength of the isotropic
anharmonic term. Unrenormalized quantities are denoted by the suffix
$\circ$.
The functional $H[{\psi_\circ^\alpha}]$
describes the statics of the normal-to-incommensurate
phase transition.
It represents the $n$-component isotropic Heisenberg model;
in the case $n=2$, it is also referred to as the XY model.
For the sake of generality, the $n$-component order parameter case will be
considered in the theoretical treatment.
\subsection{Critical Dynamics}
The critical dynamics of the system under consideration is characterized
by a set of generalized Langevin equations for the ``slow" variables, which
in our case consist of the non-conserved order parameter fluctuations
(because of the critical slowing-down in the vicinity
of a phase transition). \cite{Hoh77}
The purely relaxational behavior \cite{Zey82} is described by the following
Langevin-type equation of motion
\begin{align}
\label{Lang}
\frac{\partial}{\partial t} \, \psi_\circ^\alpha({\vc k},t) =
- \lambda_\circ \, \frac{\delta H[\{
\psi_\circ^\alpha \}] }{\delta \psi_\circ^\alpha (-{\vc k}, t)} +
\zeta^\alpha({\vc k},t) \ .
\end{align}
The damping is caused by the fast degrees of freedom, which are subsumed in
the fluctuating forces $\zeta^\alpha$.
According to the classification by Halperin and Hohenberg,
we are considering model A. \cite{Hoh77}
The probability distribution for the stochastic forces
$\zeta^\alpha$ is assumed to be Gaussian. Therefore
\begin{align}
\langle \zeta^\alpha({\vc k},t) \rangle & = 0, \\
\langle \zeta^\alpha({\vc k},t) \, \zeta^\beta({\vc k'},t')
\rangle & = 2 \, \lambda_\circ \, \delta({\vc k} - {\vc k'}) \,
\delta(t - t') \, \delta^{\alpha \beta} \ ,
\label{stoch}
\end{align}
where the Einstein relation (\ref{stoch}) guarantees that the
equilibrium probability density is given by
\begin{equation} \label{opvert}
P [ \{\psi_\circ^\alpha \} ] =
\frac{e^{-H[ \{ \psi_\circ^\alpha \} ] }}{
\int {\cal D} [ \{ \psi_\circ^\alpha \} ] e^{-H[ \{ \psi_\circ^\alpha \} ] }} \quad.
\end{equation}
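As an illustration of Eqs. (\ref{Lang})--(\ref{opvert}) (a minimal sketch, not part of the paper: a single non-interacting mode with purely illustrative parameter values), an Euler-Maruyama integration of the relaxational equation of motion reproduces the equilibrium variance $\langle\psi^2\rangle = 1/(r_\circ + k^2)$ dictated by the Gaussian weight $e^{-H}$:

```python
import math, random

random.seed(2)
lam, r, k2, dt = 1.0, 0.5, 1.5, 0.01   # lambda_0, r_0, k^2, time step (illustrative)
kappa = r + k2                          # harmonic "stiffness" of the mode
psi, samples = 0.0, []
for step in range(200000):
    # Model A: d(psi)/dt = -lam * dH/dpsi + noise, with <zeta zeta'> = 2*lam*delta
    psi += -lam * kappa * psi * dt + math.sqrt(2 * lam * dt) * random.gauss(0.0, 1.0)
    if step > 20000:                    # discard the initial transient
        samples.append(psi * psi)
var = sum(samples) / len(samples)
print(var, 1.0 / kappa)  # stationary variance approaches 1/(r_0 + k^2) = 0.5
```

The Einstein relation between damping and noise strength is what guarantees this agreement; with any other noise amplitude the simulated variance would miss the Boltzmann value.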
Following the dynamical perturbation theory
developed by Janssen \cite{Jan76}
and De Dominicis, \cite{Dom76} we want to calculate the dynamical
properties of our system, e.g. the dynamical correlation functions.
First, the stochastic forces are eliminated, using equation
(\ref{Lang}) and the
Gaussian distribution for the stochastic forces $\zeta^\alpha$.
After a Gaussian transformation and the
introduction of auxiliary Martin-Siggia-Rose \cite{Mar73}
fields ${\tilde \psi}_\circ^\alpha $, the non-linearities occurring in the
initial functional are reduced.
A perturbation theory analogous to the static theory can now be implemented
on the basis of the path-integral formulation.
We define the generating functional
\begin{align}
Z[\{ {\tilde h}^\alpha \} &, \{ h^\alpha \}] \propto \int {\cal
D}[\{ i {\tilde \psi}_\circ^\alpha \}] \, {\cal D}[\{ \psi_\circ^\alpha
\}] \, \nonumber \\
& \times e^{J[\{ {\tilde \psi}_\circ^\alpha \} ,
\{ \psi_\circ^\alpha \}] +
\int \! d^dx \! \int \! dt \sum_\alpha ({\tilde h}^\alpha \,
{\tilde \psi}_\circ^\alpha + h^\alpha \, \psi_\circ^\alpha)} \ ,
\end{align}
where the resulting Janssen-De Dominicis functional $J=J_0+J_{int}$ is split into
the harmonic part $J_0$ and the interaction part $J_{int}$,
\begin{align}
\label{dyn_func_harm}
J_0 & [\{ {\tilde \psi}_\circ^\alpha \} , \{ \psi_\circ^\alpha\}]
= \int_k \int_\omega \sum_\alpha \biggl[ \lambda_\circ \,
{\tilde \psi}_\circ^\alpha({\vc k},\omega) \, {\tilde
\psi}_\circ^\alpha(- {\vc k},-\omega) \nonumber \\
& - {\tilde \psi}_\circ^\alpha({\vc k},\omega) \, \Bigl[ i \omega +
\lambda_\circ \, (r_\circ + k^2) \Bigr] \, \psi_\circ^\alpha(- {\vc
k},- \omega) \biggr] \ , \\
J_{int} & [\{ {\tilde \psi}_\circ^\alpha \} , \{
\psi_\circ^\alpha \}]
= \frac{-\lambda_\circ \, u_\circ}{6} \, \int_{q_i}
\int_{\omega_i} \,
\delta ( \sum_i {\vc k}_i ) \, \delta ( \sum_i \omega_i ) \nonumber \\
& \times \, \sum_{\alpha \beta} {\tilde \psi}_\circ^\alpha({\vc
k}_1,\omega_1) \, \psi_\circ^\alpha({\vc k}_2,\omega_2) \,
\psi_\circ^\beta({\vc k}_3,\omega_3) \,
\psi_\circ^\beta({\vc k}_4,\omega_4) \ .
\label{dyn_func}
\end{align}
The $N$-point Green functions
$G_{\circ \, {\tilde \psi}^\alpha_i \psi^\alpha_j}({\vc k},\omega)$
and cumulants $G^c$
can be derived by appropriate derivatives of $Z$ and $\ln Z$
with respect to the sources ${\tilde h}^\alpha$ and $ h^\alpha$ .
Thus the standard scheme of perturbation theory can be applied.
Further details can be found in textbooks (Refs. \onlinecite{Ami84,Zin93})
and in Refs. \onlinecite{Jan76,Tae92}.
In addition we want to list some important relations that will
be useful for the discussion.
The dynamical susceptibility gives meaning to the auxiliary fields by
noting that it can be represented as a correlation function
between an auxiliary field and the order parameter field \cite{Jan76}
\begin{align}
\chi_\circ^{\alpha \beta}({\vc x},t;{\vc x'},t')
& = {\delta \langle \psi_\circ^\alpha({\vc x},t) \rangle \over \delta
{\tilde h}^\beta({\vc x'},t')} \bigg \vert_{{\tilde h}^\beta =0}
\nonumber \\
& = \langle \psi_\circ^\alpha({\vc x},t) \, \lambda_\circ \, {\tilde
\psi}_\circ^\beta({\vc x'},t') \rangle \ .
\end{align}
Its Fourier transform
\begin{align}
\chi_\circ^{\alpha \beta}({\vc k},\omega) = \lambda_\circ \,
G_{\circ \, {\tilde \psi}^\alpha \psi^\beta}({\vc k},\omega) \ .
\label{sus_cor1}
\end{align}
is associated with the Green functions $G_{\circ \, {\tilde \psi}^\alpha
\psi^\beta}$.
The fluctuation-dissipation
theorem relates the correlation function
of the order parameter fields and the imaginary part of the response function
\cite{Jan76}
\begin{align}
\label{fluk_diss}
G_{\circ \, \psi^\alpha \psi^\beta}({\vc k},\omega) =
2 \frac{\Im \chi_\circ^{\alpha \beta}({\vc k},\omega)}{\omega} \ ,
\end{align}
which will enter the calculation of the $NMR$ relaxation rate.
For example, considering only the harmonic part $J_0$ of the dynamical functional and
carrying out the functional integration gives
\begin{align}
G_{\circ \, \psi^\alpha \psi^\beta}({\vc k},\omega) =&
\delta^{\alpha \beta}
\frac{2 \lambda_\circ}{[\lambda_\circ(r_\circ+k^2)]^2 + \omega^2} \ .
\label{G_harm}
\end{align}
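Equation (\ref{G_harm}) together with (\ref{fluk_diss}) can be checked in one line: in the harmonic theory the response is $\chi_\circ(k,\omega)=\lambda_\circ/[\lambda_\circ(r_\circ+k^2)-i\omega]$, and its imaginary part reproduces the correlator (a numerical sketch with arbitrary illustrative parameters, assuming this standard form of the harmonic response):

```python
lam, r, k2, omega = 0.8, 0.3, 1.2, 2.5          # illustrative parameter values
chi = lam / complex(lam * (r + k2), -omega)      # chi = lam / (lam(r + k^2) - i omega)
G = 2 * lam / ((lam * (r + k2))**2 + omega**2)   # harmonic correlator, Eq. (G_harm)
print(2 * chi.imag / omega, G)                   # fluctuation-dissipation: they coincide
```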
Finally we want to introduce the vertex functions $\Gamma_{\circ \, {\tilde
\psi}^\alpha \psi^\beta}$, which are related to the
cumulants through a Legendre transformation.
For example \cite{Jan76,Tae92}
\begin{align}
G_{\circ \, {\tilde \psi}^\alpha \psi^\alpha}^c ({\vc k},\omega) =
\frac{1}{\Gamma_{\circ \, {\tilde \psi}^\alpha \psi^\alpha}(-{\vc k},-\omega)}
\ .
\label{cor_cum1}
\end{align}
The vertex functions enter the explicit
calculation of the $Z$ factors and the susceptibility
in the renormalization-group theory.
The advantage of working with vertex functions is that they are represented by
one-particle irreducible Feynman diagrams only.
\section{$NMR$-Experiments and Spin-Lattice Relaxation}
\label{sec_nmr}
Quadrupolar perturbed nuclear magnetic resonance ($NMR$) can be
used to study the dynamics of phase transitions from an $N$
to an $IC$ modulated phase. \cite{Bli86}
In this method the interaction between the nuclear quadrupole moment
$Q$ and the electric-field gradient
($EFG$) $V$ is the dominant perturbation ${\cal H}_Q$ of the Zeeman
Hamiltonian.
Thus in the corresponding Hamiltonian
\begin{align}
{\cal H} &= {\cal H}_{Z} + {\cal H}_{Q}
\end{align}
next to the dominating Zeeman term ${\cal H}_{Z}$ one has to consider
the quadrupole
interaction ${\cal H}_{Q}=\frac{1}{6} \sum_{j,k}^{} Q_{jk} V_{jk}$ as a
perturbation.
The quadrupole moment operator $Q_{jk}$ is coupled linearly to the $EFG$
tensor $V_{jk}$ at the lattice site. \cite{Bon70,Zum81}
The fluctuations of
$V_{jk}$ can be expressed via order parameter fluctuations, because
of the dominant linear coupling of the $EFG$ to the order parameter
\cite{Per87a,Per87b}
\begin{align}
\delta V_{ij}({\vc x}, t) = A_{1ij} [\delta \psi^A(t) + i
\delta \psi^\phi(t)] e^{ i ({\vc k} \cdot {\vc x} + \Phi_0)} + c.c. \ .
\end{align}
We now briefly sketch how the $OP$ fluctuations determine the
relaxation rate.
The spin-lattice relaxation describes the return of the nuclear spin
magnetization $M$ in the direction of the external field
back to its thermal equilibrium value
following a radio frequency pulse. \cite{Abr86}
During that time the energy of the spin system is transferred to single
modes of the lattice fluctuations. \cite{Wal94}
Because the $EFG$ fluctuations can be written as $OP$ fluctuations, the
spin-lattice relaxation is determined by the spectral density
functions of the
local $OP$ fluctuations at the Larmor frequency $\omega=\omega_L$.
The transition probabilities for nuclei with spin $I=\frac{3}{2}$ in three
dimensions are given by \cite{Abr86,Coh57}
\begin{align}
\frac{1}{T_1} &=
W \left( \pm \frac{3}{2} \leftrightarrow \pm \frac{1}{2} \right)
= \frac{\pi^2}{3}
\left[ J(V_{xy}, \omega_L) + J(V_{yz}, \omega_L) \right] \nonumber \\
& \propto \int_{BZ}^{}
\frac{\Im \chi_\circ^{\alpha \beta} ({\vc k},\omega_{L})}{\omega_{L}} \, d^3k
= \int_{0}^{\Lambda} \frac{1}{2} \ k^2 \ G_{\circ \, \psi^\alpha \psi^\beta}
({\vc k},\omega_L) dk \ ,
\label{relax-time}
\end{align}
with the spectral density of the $EFG$ fluctuations
\begin{align}
J(V_{ij}, \omega) =
\int_{- \infty}^{\infty} \overline{V_{ij}(t) {V_{ij}}^{*}(t+\tau)}
e^{-i \omega \tau} d\tau \ .
\end{align}
Measuring the spin-lattice relaxation thus yields information on
the susceptibility of the local fluctuations of the order parameter.
The spin-lattice relaxation was studied in great detail by means of
echo pulse methods both below and above $T_I$. \cite{Bli86,Wal94}
Below the critical temperature $T_I$, it is possible, for the prototypic
system Rb$_2$ZnCl$_4$, to identify the relaxation rates $1/T^A_1$
and $1/T^\phi_1$, dominated in the plane-wave limit by
the amplitudon and phason fluctuations, respectively. \cite{Wal94}
Therefore, the dynamical properties of the order parameter fluctuations
can be studied below the phase transition as well, and separately
for the two distinct excitations.
\section{High-temperature phase}
In this section, the critical behavior of the incommensurate
phase transition above $T_I$ will be investigated.
On the basis of scaling arguments and the use of critical exponents,
calculated within
the renormalization-group theory for the XY model in three dimensions and
model A, the temperature dependence of the $NMR$ relaxation rate is
analyzed in the first subsection.
Next we study the crossover scenario of the temperature
dependence of the
relaxation rate in the second subsection by means of the
renormalization group theory.
Comparison with experimental data is made, and we comment on the width of
the critical region.
\subsection{Scaling laws for the relaxation time}
Above the phase transition, the thermodynamical average
of the order parameter components is zero.
Because the structure of the correlation function of the order parameter
does not change dramatically above $T_I$ (see section \ref{tgtc_renorm})
and the calculation of the relaxation
rate $1/T_1$ involves the integration of the correlation function over all
wavevectors,
we will derive a form of the correlation function using scaling arguments.
Thus we are able to discuss the universal features of the relaxation rate
behavior when approaching $T_I$ from above.
In the harmonic approximation we immediately obtain the correlation
function, which turns out to be the propagator of
our functional $J_0$ [see Eq. (\ref{G_harm})]
\begin{align}
\langle \psi_\circ^\alpha({\vc k},\omega) \,
\psi_\circ^\beta({\vc k'},\omega') \rangle &= \delta({\vc k} + {\vc k'})
\delta(\omega + \omega') \delta^{\alpha \beta} G_\circ (k,\omega) \ ,\\
G_\circ (k,\omega) &= \frac{2 \lambda_\circ}{[\lambda_\circ(r_\circ + k^2)]^2 +
\omega^2} \ .
\end{align}
The suffix $\circ$ will be omitted in the following discussion of this
subsection, because no renormalization will be considered here.
We want to exploit our knowledge about the critical region. The static scaling
hypothesis for the static response function states
\begin{align}
\chi(k) = A_\chi \cdot \hat \chi (x) \cdot k^{-2+\eta} \
\end{align}
with the scaling function $\hat \chi$, a constant prefactor $A_\chi$,
the scaling variable $x = (k \xi)^{-1}$ ($\xi$ denoting the correlation
length), and the critical exponent $\eta$.
Neglecting a frequency dependence for the
kinetic coefficient (Lorentzian approximation),
the dynamic scaling hypothesis for the characteristic frequency of the $OP$
dynamics states
\begin{align}
\omega_{\varphi}(k) &\equiv \lambda (k)/ \chi(k) \sim k^{z} \ ,
\label{dy_sc_hy}
\end{align}
and we can deduce
\begin{align}
\lambda (k) &= A_\lambda \cdot \hat \lambda (x) \cdot
k^{ z-2 + \eta} \ ,
\end{align}
with $ \hat \lambda (x)$ being the scaling function for the kinetic
coefficient and $A_\lambda$ a constant prefactor.
Notice that for fixed wavevector $k$ Eq. (\ref{dy_sc_hy}) leads to \cite{Hoh77}
\begin{align}
\omega_{\varphi}(k) \sim \xi^{-z} \ .
\end{align}
The correlation function $G(k,\omega)$ can now be rewritten in scaling form
\begin{align}
G(k,\omega)
= \Lambda \cdot \frac{1}{k^{z +2 - \eta}} \cdot \hat{f}(\hat{\omega},x) \ ,
\end{align}
with
\begin{align}
\Lambda &= \frac{2 \ A_\chi^{2}}{A_\lambda}, \ \ \hat{\omega} =
\frac{A_\chi}{A_\lambda} \frac{\omega}{k^{z}}, \ \
x = \frac{1}{k \xi}, \nonumber \\
\hat{f}(\hat{\omega},x) &= \hat \chi(x) \cdot
\frac{\hat \lambda(x)/{\hat \chi(x)}}{[\hat \lambda(x)/{\hat \chi(x)}]^2 +
\hat{\omega}^2} \ ,
\end{align}
where the Lorentzian line shape is retained. Above $T_I$, this is not a very
crucial approximation, because the shape of the correlation function does
not change in a first-order renormalization group analysis,
as we will see in the next section.
To calculate the relaxation rate $1/T_1$, one has to evaluate the integral
[see Eq. (\ref{relax-time})]
\begin{align}
\frac{1}{T_1}
& \propto \int_{0}^{\Lambda} \frac{1}{2} \ k^2 \ G(k,\omega_L) dk \\
& \propto \Lambda \cdot \int_{BZ}^{}k^2 dk \ k^{-z -2 + \eta}
\hat{f}(k \omega_L^{-1/z}, k \xi) \ . \nonumber
\end{align}
With $u = k \omega_L^{-1/z}$ and $v= k \xi$
we introduce new variables
\begin{align}
\label{varrho}
\varrho &= \sqrt{u^2+v^2} = k \sqrt{\omega_L^{-2/z} + \xi^2} \\
\intertext{and}
\label{varphi}
\tan \varphi & = \frac{v}{u} = \frac{\xi}{\omega_L^{-1/z}} \ .
\end{align}
This leads to the relation
\begin{align}
\frac{1}{T_1}
= \Lambda \cdot \left( \sqrt{\omega_L^{-2/z} + \xi^2} \right)^{z -1 -\eta} \
I_{\varrho} \left( \hat f(\varrho,\varphi) \right) \ ,
\end{align}
where the integral $I_{\varrho}$ does not contribute to the leading temperature
dependence.
The temperature dependence of the relaxation rate can now easily be found
in the limits where either the Larmor frequency or the
frequency of the critical fluctuations dominates the integral
and its prefactor.
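As a quick consistency check (a one-line substitution, not part of the original argument): inserting $k = \varrho \big/ \sqrt{\omega_L^{-2/z} + \xi^2}$, so that $dk = d\varrho \big/ \sqrt{\omega_L^{-2/z} + \xi^2}$, into the integral above immediately produces the quoted prefactor,

```latex
\begin{align}
\frac{1}{T_1} \propto \int_0^{\Lambda} k^{-z+\eta} \, \hat f \, dk
= \left( \sqrt{\omega_L^{-2/z} + \xi^2} \right)^{z-1-\eta}
\int \varrho^{-z+\eta} \, \hat f(\varrho,\varphi) \, d\varrho \ ,
\end{align}
```

and the remaining $\varrho$ integral is what is denoted $I_\varrho$ below.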
\subsubsection{Fast-motion limit ($\omega_{L}/\omega_{\varphi} \ll 1 $)}
For temperatures very far above the
critical temperature $T_I$, the characteristic frequency
is larger than the Larmor frequency. Thus the temperature dependence
of the $OP$ fluctuations determines the temperature dependence of the
relaxation rate; the value of the Larmor frequency should not play any
role. \cite{Rig84} Writing the integral as
\begin{align}
\label{results_fml}
\frac{1}{T_1} &=
\Lambda \cdot
\left( \omega_{L}^{-2/z} \left[ 1 + \left(
\omega_{L}/ \xi^{-z} \right)^{2/z}
\right] \right)^{(z - 1 - \eta)/2}
\cdot I_{\varrho} \ ,
\end{align}
we obtain, with $\tan \varphi =$ const. [see Eq. (\ref{varphi})],
\begin{align}
\frac{1}{T_1} &\propto \xi^{z-1 - \eta}
= \left( \frac{T-T_{I}}{T} \right)^{-\nu \cdot (z-1-\eta) }
\ .
\end{align}
Taking the values for the critical exponents from Table \ref{tab1}, we find
\begin{align}
\frac{1}{T_1} \propto \ \left( \frac{T-T_I}{T}
\right)^{-0.663} \ \ .
\end{align}
This can be compared with the experimental results for
Rb$_2$ZnCl$_4$ by Holzer et al. \cite{Hol95},
who found for the leading scaling
behavior of the relaxation rate, following the
temperature-independent region, the exponent $-0.625$.
\subsubsection{Slow-motion limit ($\omega_{L}/ \omega_{\varphi} \gg 1$)}
In the vicinity of the critical temperature $T_I$ critical slowing down
will occur. This means that the characteristic frequency $ \omega_{\varphi}$
is approaching zero and will fall below the value of the Larmor frequency.
\cite{Rig84} Thus, the characteristic
time scale of the $OP$ fluctuations is slower than the
experimental time scale. For the temperature dependence of the
relaxation rate
\begin{align}
\frac{1}{T_1}
& \propto
\Lambda \cdot
\left( \xi^{2} \left[ \left(
\xi^{-z}/\omega_{L} \right)^{2/z} + 1
\right] \right)^{(z - 1- \eta)/2}
\cdot I_{\varrho} \ , \nonumber
\end{align}
we now obtain
\begin{align}
\frac{1}{T_1} &\propto \omega_{L}^{-(z-1 -\eta)/z} \cdot \text{const} \ .
\end{align}
Taking again the values for the critical exponents from Table \ref{tab1}, we obtain
\begin{align}
\frac{1}{T_1} \propto \ \omega_{L}^{-0.49} \ .
\end{align}
This is in good agreement with the experimental result \cite{Hol95} for
Rb$_2$ZnCl$_4$ that the value of
the relaxation rate near $T_I$ scales as $\omega_L^{-0.5}$
for different Larmor frequencies.
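Both quoted exponents follow from the static exponents together with the dynamic exponent $z$; the sketch below reproduces them numerically (the input values are standard literature estimates for the 3d XY model and the model-A relation $z \approx 2 + c\eta$, assumed here for illustration rather than read off Table \ref{tab1}):

```python
# Exponents of 1/T_1 in the fast- and slow-motion limits, assuming
# 3d XY static exponents and the model-A estimate z = 2 + c*eta.
nu = 0.669                 # correlation-length exponent (assumed value)
eta = 0.033                # Fisher exponent (assumed value)
z = 2.0 + 0.726 * eta      # model-A dynamic exponent, c ~ 0.726

fast = nu * (z - 1 - eta)  # 1/T_1 ~ ((T - T_I)/T)^(-fast), fast-motion limit
slow = (z - 1 - eta) / z   # 1/T_1 ~ omega_L^(-slow), slow-motion limit

print(round(fast, 3))      # -> 0.663
print(round(slow, 2))      # -> 0.49
```

With the van Hove value $z = 2$ and $\eta = 0$ the same formulas give the earlier results of Holzer et al.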
We want to stress that the transition from the fast- to the slow-motion limit
is a property of the integral entering the calculation of the
relaxation rate.
Because the susceptibility is evaluated at fixed $\omega_L$, the integral
is bounded from below for $T \approx T_I$.
This means that the
transition from the temperature-dependent to the temperature-independent
behavior near $T_I$ is fixed by the scale $\omega_L$.
It should also be mentioned that our results reproduce those
obtained earlier by Holzer et al. (see Ref. \onlinecite{Hol95}),
when the van Hove approximation for $z$ is used ($z \approx 2$).
\subsection{Renormalization-group analysis above $T_I$}
\label{tgtc_renorm}
Our investigations of the critical behavior above $T_I$ in the last
section led to fair agreement with experimental data.
It was possible to obtain the critical exponents for the
frequency and temperature dependence of the relaxation rate in a
quantitatively accurate way.
Furthermore, we obtained a qualitative understanding of
the transition of the relaxation rate $1/T_1$ from
the slow- to the fast-motion limit.
This transition is caused by the characteristic frequency of the
order-parameter dynamics $\omega_\varphi$ approaching zero, i.e., the
critical slowing down near $T_I$.
This renders the Larmor frequency
$\omega_L$ the dominating time scale, and $1/T_1$ becomes
temperature-independent near $T_I$.
At this point we want to consider
what happens upon leaving the region near the transition
temperature and going to higher temperatures.
Our results (\ref{results_fml}) for the fast-motion
limit are based on the assumption
that fluctuations are very important.
We used non-classical exponents and scaling arguments valid in the critical
region.
The question arises how large the temperature region is in which
the system displays the non-classical behavior calculated in the last
section.
Increasing temperature diminishes the effect of fluctuations.
One would expect that at some temperature Gaussian behavior
should emerge.
We shall apply the renormalization group theory to one-loop
approximation in order to describe the transition from the
fluctuation-dominated behavior near $T_I$ to a temperature region
where the mean-field description should be valid.
It is obvious that the properties of the integral, responsible for the
transition between the slow- and the fast-motion limits, will account for the
crossover of the leading scaling behavior when the temperature is increased.
In order to discuss the crossover within this analysis a modified minimal-subtraction
prescription is employed.
This scheme was first introduced by Amit and
Goldschmidt \cite{Ami78} and subsequently
explored by Lawrie for the study of Goldstone
singularities. \cite{Law81}
It comprises exact statements in a certain limit.
Below $T_I$, this is the regime
dominated by the Goldstone modes alone; in the region above $T_I$, which we
will consider in this section, it is the regime where the mean-field result
holds. In addition, this scheme neatly reproduces
the standard field-theoretical formulation
of the renormalization group.
Following the arguments of Schloms and Dohm \cite{Sch89} and
Ref. \onlinecite{Tae92} we can
refrain from the $\varepsilon$ expansion, with
\begin{align}
\varepsilon = 4-d
\end{align}
defining the deviation from the upper critical dimension of the $\phi^4$
model.
This is motivated by the following.
Above $T_I$ the Gaussian or zero-loop theory
becomes exact in the high-temperature limit.
The critical fixed point, i.e., the Heisenberg fixed point,
dominating the behavior of the system near the critical temperature is
calculated to one-loop order.
The main interest here lies in the crossover behavior between these
two fixed points, which is calculated to one-loop order, too.
Thus no further approximations are necessary to be consistent.
Very close to the critical temperature
$T_I$ an $\varepsilon$-expansion or Borel resummation
\cite{Sch89} would of course be inevitable in order
to obtain better values for the critical exponents.
A description of the generalized minimal subtraction scheme is given, for example,
given in Refs. \onlinecite{Ami78,Law81,Tae92} and \onlinecite{Fre94}.
A crossover into an asymptotically Gaussian theory is described by this method
in Ref. \onlinecite{Tae93}.
\subsubsection{Flow equations}
Our aim is to calculate the wavenumber and frequency dependence of
the susceptibility to one-loop order.
The field renormalization is trivial to one-loop order. Thus
we will not take into account corrections to the static exponent
$\eta$ and corrections to the mean-field value of the dynamic exponent
$z \approx 2$. This leaves $r_\circ$ and $u_\circ$ as the only
quantities to be renormalized. \cite{Tae93}
There is a shift of the critical temperature from the mean-field result
$T_{\circ I}$ to the ``true'' transition temperature $T_I$.
In order to take this shift into account,
a transformation to a new temperature variable,
being zero at the critical temperature $T_I$, is performed.
This new variable will be denoted again as $\tau_\circ$.
The renormalized quantities are then written as
\begin{align}
\label{tau_HT}
\tau &= Z_r^{-1} \tau_\circ \mu^{-2} \\
\label{u_HT}
u &= Z_u^{-1} u_\circ A_d \mu^{-\varepsilon} \ .
\end{align}
Here, the geometric factor $A_d$ is chosen \cite{Sch89} as
\begin{align}
A_d = \frac{\Gamma(3-d/2)}{2^{d-2} \pi^{d/2}(d-2)} \ .
\end{align}
For the non-trivial $Z$ factors one finds in the generalized minimal
subtraction procedure (see App. \ref{app1})
\begin{align}
\label{Z-factors}
Z_u &= 1 + \frac{n+8}{6 \varepsilon} u_\circ A_d \mu^{- \varepsilon}
\frac{1}{(1+\tau_\circ/\mu^2)^{\varepsilon/2}} \ , \\
Z_r &= 1 + \frac{n+2}{6 \varepsilon} u_\circ A_d \mu^{- \varepsilon}
\frac{1}{(1+\tau_\circ/\mu^2)^{\varepsilon/2}} \ .
\label{Z_gt}
\end{align}
For $\tau_\circ=0$, the familiar renormalization constants for the
$n$-component $\phi^4$ model are recovered.
In general here, however,
the $Z$ factors are functions of both $u_\circ$ and $\tau_\circ$. \cite{Ami78}
In the next step the fact that the
unrenormalized $N$-point functions do not depend on the scale $\mu$ is
exploited and the
Callan-Symanzik equations are derived. \cite{Ami84}
The idea behind this is to connect,
via the renormalization-group equations, the uncritical theory, which can be
treated perturbatively, with the critical theory displaying infrared
divergences.
The resulting partial differential equations can be solved with the method of
characteristics ($\mu(l)=\mu \, l$).
With the definition of Wilson's flow functions
\begin{align}
\zeta_\tau (l) &= \left. \mu \frac{\partial}{\partial \mu} \right|_0
\ln \frac{\tau}{\tau_\circ} \ , \\
\beta_u (l) &= \left. \mu \frac{\partial}{\partial \mu} \right|_0 u \ ,
\end{align}
we proceed to the flow-dependent couplings $\tau(l)$ and $u(l)$
[see Eqs. (\ref{tau_HT}) and (\ref{u_HT})]
\begin{align}
l \frac{\partial \tau(l)}{\partial l} &= \tau (l)
\zeta_\tau(l) \ , \\
l \frac{\partial u(l)}{\partial l} &= \beta_u ( l) \
\end{align}
given by the first order ordinary differential equations
\begin{align}
\label{flow_eq1}
l \, \frac{\partial \tau(l)}{\partial l} &= \tau ( l)
\left(
-2 + \frac{n+2}{6} u(l) \frac{1}{[1+\tau(l)]^{1+ \varepsilon/2}}
\right) \ ,\\
\label{flow_eq2}
l \, \frac{\partial u(l)}{\partial l} &= u ( l)
\left(
-\varepsilon + \frac{n+8}{6} u(l) \frac{1}{[1+\tau(l)]^{1+ \varepsilon/2}}
\right) \ ,
\end{align}
and the initial conditions $\tau(1)=\tau$ and $u(1)=u$.
The asymptotic behavior is determined by zeros of the $\beta$ function, giving
the fixed points of the renormalization group.
Here, we find the Gaussian fixed point $u_G^* = 0$ with $\zeta_G^* = -2$ and the
Heisenberg fixed point $u_H^* =\frac{6 \varepsilon}{n + 8}$ with $\zeta_H^* =
-2+\varepsilon $.
These fixed points
are of course well-known, \cite{Ami84} but in the generalized minimal
subtraction scheme it is now possible to describe the crossover between these
two fixed points.
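The crossover between the two fixed points can be made explicit by integrating the flow equations (\ref{flow_eq1}) and (\ref{flow_eq2}) numerically. A minimal sketch (plain Euler stepping in $\ln l$; the choices $n=2$, $\varepsilon=1$, the step size, and the initial coupling $u(1)=0.3$ are illustrative):

```python
# Euler integration of the one-loop flow equations in s = ln(l),
# for n = 2 and eps = 1 (three dimensions).
import math

def flow(tau, u, s_end, ds=-1e-3, n=2, eps=1.0):
    """Integrate tau(l), u(l) from l = 1 (s = 0) down to l = exp(s_end)."""
    s = 0.0
    while s > s_end:
        damp = (1.0 + tau) ** (1.0 + eps / 2.0)
        dtau = tau * (-2.0 + (n + 2) / 6.0 * u / damp)
        du = u * (-eps + (n + 8) / 6.0 * u / damp)
        tau += dtau * ds
        u += du * ds
        s += ds
    return tau, u

# At criticality (tau = 0) the coupling runs into the Heisenberg
# fixed point u* = 6*eps/(n+8) = 0.6 in the infrared limit l -> 0:
tau, u = flow(0.0, 0.3, s_end=math.log(1e-4))
print(round(u, 3))   # -> 0.6
```

Away from criticality, $\tau(l)$ grows as $l$ decreases, the denominator $[1+\tau(l)]^{1+\varepsilon/2}$ suppresses the one-loop terms, and the flow crosses over to the Gaussian behavior described below.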
We are interested in the theory in three dimensions and will henceforth
discuss this case ($\varepsilon =1)$.
First we investigate the crossover of the $\tau(l)$ flow.
It is possible to recover
the universal crossover in the flow by plotting $\tau(l)$ against the scaling
variable (compare Refs. \onlinecite{Sch94,Tae92,Tae93})
\begin{align}
x = \cfrac{l}{\tau(1)^{1/(2-\frac{n+2}{n+8})}} \ \ .
\end{align}
In Fig. \ref{fig1} the effective exponent for the $x$-dependence of
$\tau(l)$ is depicted for ten different values of $\tau(1)$ [with fixed
$u(1)$ and $n=2$], coinciding perfectly.
There is a crossover from the region $l \rightarrow 0$
with the exponent $-2$ to the region $l \rightarrow 1$ with the
exponent $-2 + (n+2)/(n+8)$.
Next we find, with the scaling variable $x \propto (k \xi)^{-1}$,
the effective exponent $\nu_{\text{eff}}$
of the temperature dependence of the
correlation length
\begin{align}
\tau(l) \propto l^{-1/\nu_{\text{eff}}} \ \ \Rightarrow \ \
\frac{1}{\nu_{\text{eff}}} =
\begin{cases}
2 & l \rightarrow 0 \\
2 - \frac{n+2}{n+8} & l \approx 1
\end{cases}
\end{align}
Thus, with the generalized minimal-subtraction scheme we can describe
the crossover from the
non-classical critical behavior to the Gaussian behavior, e.g., as a
function of the temperature variable $\tau$.
\subsubsection{Matching}
To one-loop order,
only the tadpole graph enters in the two-point function
(see $\Gamma_{\circ \tilde \psi \psi}$ in
App. \ref{app1}) shifting the
critical temperature as stated above.
Thus the susceptibility does not change its form and the renormalized version
reads, with Eqs. (\ref{sus_cor1}) and (\ref{cor_cum1}) and
App. \ref{app1}, to one-loop order,
\begin{align}
\chi_R^{-1} ({\vc k},\omega) = k^2 - i \omega/\lambda + \tau(l) \mu^2 l^2 \ .
\end{align}
What we gained in the last subsection is the temperature dependence of the
coupling constants. This temperature dependence has to be taken into
account in order to discuss the changes resulting from the fluctuation
corrections.
One now has to ask the question: how does the flow enter
the physical quantities measured in an experiment?
With the flow dependence of the coupling constants, the relaxation rate to
one-loop order becomes [see Eq. (\ref{relax-time})]
\begin{align}
\frac{1}{T_1} &\propto
\int_{}^{} k^2 dk \frac{\Im \chi_R({\vc k},\omega_L)}{\omega_L}
\nonumber \\
&= \int_{}^{} k^2 dk
\frac{1}{\omega_L^2/\lambda^2 + [k^2 + \tau(l)\mu^2 l^2]^2} \cdot
\frac{1}{\lambda} \nonumber \\
& = \frac{1}{\mu l}\frac{1}{\lambda} \int_{}^{} \tilde k^2 d \tilde k
\frac{1}{\tilde \omega_L^2 + [\tilde k^2 + \tau(l)]^2} \ ,
\label{relax1}
\end{align}
where $\tilde k = k/\mu l$ and $\tilde \omega_L = \omega_L/\lambda \mu^2 l^2$.
Keeping $\tau(l)$ fixed (it is set to 1), the relaxation rate $1/T_1$ is proportional to
$l^{-1}$ for large $l$. When $l$ approaches zero a constant value of
the integral and hence of the relaxation rate $1/T_1$ will be reached,
because of the fixed time scale $1/\tilde \omega_L$.
The physical reason is that in the slow motion limit the
characteristic time scale ($1/\omega_\varphi$)
becomes larger than the experimental time scale $1/\tilde \omega_L$.
In Fig. \ref{fig2} the logarithmic $l$ dependence of the integral
\begin{align}
I_1 \equiv \frac{\partial \log T_1}{\partial \log l} = - \cfrac{\partial \log \left(
\cfrac{1}{\mu l \lambda} \int_{}^{} \tilde k^2 d \tilde k
\cfrac{1}{\tilde \omega_L^2 + (\tilde k^2 + 1)^2} \right)
}{\partial \log l}
\end{align}
is plotted against the scaling variable
\begin{align}
x = l \cdot \frac{\mu \sqrt{\lambda}}{\sqrt{\omega_L}}\ .
\end{align}
We regain the transition from the $l$-independent regime ($l\rightarrow 0$)
(and therefore the temperature-independent regime)
to the regime $1/T_1 \propto l^{-1}$ ($l\rightarrow \infty$).
This corresponds to the transition from the slow-motion to the fast-motion
limit; a change of $l$ is equivalent to a change of $\omega_\varphi$.
However, we are interested in the dependence of the relaxation rate
on the physical temperature $\tau(1)$ rather than on the flow parameter $l$.
This may be obtained as follows.
Knowing the solution of the flow equations $\tau(l)$ and $u(l)$,
we can find an $l_1$ for a given $\tau(1)$ that fulfills the equation
$\tau(l_1)=1$. Inverting this relation, $\tau(1)$ for a given $l_1$ with
$\tau(l_1)=1$ can be found.
It is not possible to write down an analytical expression, but numerically this
relationship is readily obtained.
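This numerical inversion can be sketched as a bisection on the integrated flow (again $n=2$, $\varepsilon=1$; the Euler step and the Gaussian check below are our own illustrative choices, not from the text):

```python
# Bisection for the matching point l_1 with tau(l_1) = 1, using a
# simple Euler integration of the one-loop flow (n = 2, eps = 1).
import math

def tau_of_l(tau1, u1, l, ds=-1e-3, n=2, eps=1.0):
    """Flow tau(l) integrated from l = 1 down to l < 1, in s = ln(l)."""
    s_end = math.log(l)
    tau, u, s = tau1, u1, 0.0
    while s > s_end:
        damp = (1.0 + tau) ** (1.0 + eps / 2.0)
        dtau = tau * (-2.0 + (n + 2) / 6.0 * u / damp)
        du = u * (-eps + (n + 8) / 6.0 * u / damp)
        tau += dtau * ds
        u += du * ds
        s += ds
    return tau

def l1_of_tau(tau1, u1):
    """Find l_1 with tau(l_1) = 1; tau(l) grows monotonically as l -> 0."""
    lo, hi = 1e-6, 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if tau_of_l(tau1, u1, mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Gaussian check (u = 0): tau(l) = tau(1)/l^2 exactly, so l_1 = sqrt(tau(1)).
print(abs(l1_of_tau(0.01, 0.0) - 0.1) < 1e-3)   # True
```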
Thus we are led to $1/T_1(\tau(1)) = 1/T_1(l_1[\tau(1)])$.
To connect the theory in a region where the perturbation expansion is valid
with the region of interest,
we match the temperature variable $\tau(l)$ to 1,
thus imposing the crossover behavior of the flow $\tau(l)$ on the effective
exponent of the relaxation rate $1/T_1$.
In Fig. \ref{fig3} the resulting $T_1(T)$ dependence is used to fit
experimental $NMR$ data for Rb$_2$ZnCl$_4$
(measured points are indicated by circles) from Mischo et al. \cite{Mis97}
Two parameters have to be fixed in the theory. First, the
prefactor relating the relaxation rate and the integral over the imaginary part
of the susceptibility in Eq. (\ref{relax1}) must be determined.
Thus, the value of $1/T_1(T=T_I)$ is set.
The second parameter is the scale of $\omega_L$
compared to the coupling $\lambda \mu$. With this the relative temperature
$\Delta T$, where the transition between slow and fast motion limit takes
place, is adjusted.
The two fit curves presented in Fig. \ref{fig3} show a crossover
to the mean-field regime starting at $\Delta T\approx 10K$ (a)
and at $\Delta T \approx 5K$ (b).
A very good agreement, not only for the transition from the slow to the fast
motion limit, but also for the high-temperature behavior is found in the
second case.
We want to discuss this issue now in more detail.
\subsubsection{Width of the Critical Region}
Some experiments report large regions
in which non-classical exponents for the temperature dependence
of the relaxation rate are observed.
For example, in Ref. \onlinecite{Hol95} the range above $T_I$ where
non-classical exponents are found is $\Delta T \approx 100 K$.
These findings have to be confronted with the
Ginzburg-Levanyuk \cite{Lev59,Gin60} argument,
which states that the non-classical critical exponents should be valid
only close to the critical temperature.
Fluctuations should contribute only near the critical point and change
the mean-field picture there.
The behavior of the integral quantity $1/T_1$ in the region where a
crossover between the non-classical critical exponents and the mean-field
exponents occurs was studied in the last subsection.
We now comment on the four regions that can be identified and are
indicated by numbers $1 \dots 4$ in Fig. \ref{fig4}.
Very close to $T_I$, there is a temperature-independent region (1), because
of the dominating scale $\omega_L$. Here, the probing frequency $\omega_L$ is
too fast to resolve the critical fluctuations.
Upon going to higher temperatures, after a transition region (2),
a temperature dependence with non-classical
critical exponents emerges (3).
For even higher temperatures one finds a crossover to the mean-field
exponents, in regime (4).
In Fig. \ref{fig4} this crossover takes place between $\Delta T \approx 5K$
and $\Delta T \approx 20K$.
From Fig. \ref{fig3}, we find that the crossover at lower temperatures,
here starting at $\Delta T \approx 5K$,
leads to a better fit of the experimental data.
Thus the reported large region in which non-classical exponents are
supposedly found \cite{Hol95} is, in our opinion, not an inescapable
conclusion from the experimental data.
The plausible scenario of an extended crossover regime beyond the truly
asymptotic region of width
$\Delta T \approx 5K$ is in fact in perfect agreement with the data.
As this is not a universal feature, other scenarios are possible. It may
happen that the scale of $\omega_L$ is very large, so that only the Gaussian
exponents are found.
We omitted the contribution of higher Raman processes, as discussed by Holzer
et al., \cite{Hol95} leading to an additional $T^2$ dependence
for the relaxation rate. These would bend the curves downward even more and
explain the deviation present at the highest measured temperatures.
Not taking these additional contributions into consideration,
however, clarifies the crossover aspect.
\section{Low-temperature phase}
This section is devoted to the incommensurate ordered phase
below the critical temperature $T_I$.
In the $O(n)$-symmetrical model a spontaneous breaking of a
global continuous symmetry occurs and the expectation value of the order
parameter becomes nonzero.
Now parallel and perpendicular fluctuations with respect to the nonzero order
parameter have to be distinguished.
As a consequence there appear $n-1$ massless Goldstone modes
which lead to infrared singularities for all temperatures below $T_I$ in certain
correlation functions. \cite{Nel76,Maz76}
We investigate how these Goldstone modes influence
the dynamical properties of the quantities we are interested in, e.g.,
the $NMR$ relaxation rate.
To do so, we first
derive the dynamical functional appropriate below $T_I$. In the following
subsection some comments about the Goldstone anomalies are made. We will then
treat the dynamics of the fluctuations parallel (amplitudons) and perpendicular
(phasons) to the order parameter.
Again a renormalization group calculation to one-loop order is presented. We
will discuss the dynamical susceptibility before evaluating the integrals
leading to the relaxation rate. In the last section, we compare
with experimental data.
We also comment on the existence of a phason gap.
\subsection{Dynamical functional}
Let us assume that the spontaneous symmetry breaking below $T_I$
appears in the $n$th direction of the order parameter space. As usual, new
fields $\pi_\circ^\alpha$, $\alpha = 1,\ldots,n-1$, and
$\sigma_\circ$ are introduced \cite{Law81}
\begin{align}
\binom{{\tilde \psi}_\circ^\alpha}{{\tilde \psi}_\circ^n} = \binom{{\tilde
\pi}_\circ^\alpha }{ {\tilde \sigma}_\circ} \quad , \qquad
\binom{\psi_\circ^\alpha }{ \psi_\circ^n} = \binom{\pi_\circ^\alpha }{ \sigma_\circ
+ {\bar \phi}_\circ} \quad ,
\end{align}
with
\begin{align}
\langle \pi_\circ^\alpha \rangle = \langle \sigma_\circ \rangle = 0 \ .
\label{expec_fluc}
\end{align}
The order parameter is parameterized as
\begin{align}
{\bar \phi}_\circ = \sqrt{\frac{3 }{ u_\circ}} \, m_\circ \ .
\label{OP_tktc}
\end{align}
Thus $\sigma_\circ$ corresponds to the longitudinal,
and $\pi_\circ^\alpha$ to the transverse fluctuations.
Inserting these transformations into the functional (\ref{dyn_func})
leads to a new functional of the form $ J = J_0 + J_{int} + J_1+ const$
with \cite{Tae92}
\begin{align}
J_0 & [\{ {\tilde \pi}_\circ^\alpha \} , {\tilde \sigma}_\circ
, \{ \pi_\circ^\alpha \} , \sigma_\circ] = \nonumber \\
& \int_k \int_\omega \biggl[
\sum_\alpha \lambda_\circ \, \, {\tilde \pi}_\circ^\alpha({\vc
k},\omega) \, {\tilde \pi}_\circ^\alpha(- {\vc k},-\omega) \nonumber
\\
& \qquad \qquad + \lambda_\circ \, {\tilde \sigma}_\circ({\vc k},\omega) \, {\tilde
\sigma}_\circ(- {\vc k},-\omega) \nonumber \\
& - \sum_\alpha {\tilde \pi}_\circ^\alpha({\vc k}, \omega) \, \Bigl[ i
\omega + \lambda_\circ \, \, \Bigl( r_\circ + \frac{m_\circ^2}{ 2} + k^2
\Bigr) \Bigr] \, \pi_\circ^\alpha(- {\vc k},- \omega) \nonumber \\
&- {\tilde \sigma}_\circ({\vc k}, \omega) \, \Bigl[ i \omega +
\lambda_\circ \, \, \Bigl( r_\circ + \frac{3 \, m_\circ^2 }{ 2} + k^2
\Bigr) \Bigr] \, \sigma_\circ(- {\vc k},- \omega) \biggr] \ ,
\end{align}
\begin{align}
& J_{int} [\{ {\tilde \pi}_\circ^\alpha \} , {\tilde
\sigma}_\circ , \{ \pi_\circ^\alpha \} , \sigma_\circ] = \nonumber \\
& - \frac{1 }{ 6} \,
\lambda_\circ \, u_\circ \int_{k_1 k_2 k_3 k_4} \int_{\omega_1 \omega_2
\omega_3 \omega_4} \delta \! \left( \sum_i {\vc k}_i
\right) \, \delta \! \left( \sum_i \omega_i \right) \nonumber \\
& \times \biggl[ \sum_{\alpha \beta} {\tilde
\pi}_\circ^\alpha({\vc k}_1,\omega_1) \, \pi_\circ^\alpha({\vc
k}_2,\omega_2) \, \pi_\circ^\beta({\vc k}_3,\omega_3) \,
\pi_\circ^\beta({\vc k}_4,\omega_4) \nonumber \\
& \quad + \sum_\alpha {\tilde \pi}_\circ^\alpha({\vc
k}_1,\omega_1) \, \pi_\circ^\alpha({\vc k}_2,\omega_2) \,
\sigma_\circ({\vc k}_3,\omega_3) \, \sigma_\circ({\vc k}_4,\omega_4)
\nonumber \\
& \quad + \sum_\alpha {\tilde \sigma}_\circ({\vc
k}_1,\omega_1) \, \pi_\circ^\alpha({\vc k}_2,\omega_2) \,
\pi_\circ^\alpha({\vc k}_3,\omega_3) \, \sigma_\circ({\vc
k}_4,\omega_4) \nonumber \\
&\quad + {\tilde \sigma}_\circ({\vc
k}_1,\omega_1) \, \sigma_\circ({\vc k}_2,\omega_2) \, \sigma_\circ({\vc
k}_3,\omega_3) \, \sigma_\circ({\vc k}_4,\omega_4) \biggr] \nonumber \\
& - \lambda_\circ \, \frac{\sqrt{3 \, u_\circ}}{6} \, m_\circ \int_{k_1 k_2
k_3} \int_{\omega_1 \omega_2 \omega_3} \delta \! \left(
\sum_i {\vc k}_i \right) \, \delta \! \left( \sum_i \omega_i
\right) \nonumber \\
& \times \biggl[ \sum_\alpha 2 \, {\tilde \pi}_\circ^\alpha({\vc
k}_1,\omega_1) \, \pi_\circ^\alpha({\vc k}_2,\omega_2) \,
\sigma_\circ({\vc k}_3,\omega_3) \nonumber \\
& \quad + \sum_\alpha {\tilde \sigma}_\circ({\vc k}_1,\omega_1) \,
\pi_\circ^\alpha({\vc k}_2,\omega_2) \, \pi_\circ^\alpha({\vc k}_3,\omega_3)
\nonumber \\
& \quad + 3 \, {\tilde \sigma}_\circ({\vc k}_1,\omega_1) \,
\sigma_\circ({\vc k}_2,\omega_2) \, \sigma_\circ({\vc k}_3,\omega_3)
\biggr] \ ,
\end{align}
and
\begin{align}
J_1[{\tilde \sigma}_\circ] = &
- \lambda_\circ \, \sqrt{\frac{3}{ u_\circ}} \,
m_\circ \, \left( r_\circ + \frac{m_\circ^2 }{ 2} \right) \, \nonumber \\
& \qquad \times \, \int_k \int_\omega
\, {\tilde \sigma}_\circ(-{\vc k},-\omega) \, \delta({\vc k}) \, \delta(\omega) \ .
\end{align}
Equation (\ref{expec_fluc})
($\langle \sigma_\circ \rangle = 0$) yields a perturbative identity
that gives the relation between $r_\circ$ and $m_\circ$,
reading to one-loop order \cite{Law81}
\begin{align}
r_\circ + \frac{m_\circ^2 }{ 2} = &
- \frac{n - 1}{ 6} \, u_\circ \int_k \frac{1 }{ r_\circ +
\frac{m_\circ^2 }{ 2} + k^2} \nonumber \\
& - \frac{1 }{ 2} \, u_\circ \int_k \frac{1 }{ r_\circ + \frac{3 \,
m_\circ^2 }{ 2} + k^2} \ .
\end{align}
In the following $r_\circ$ is replaced by $m_\circ$. Notice that
by using the variable $m_\circ$, the shift of $T_I$ is already incorporated
[see Eq. (\ref{OP_tktc})].
We can now write down the basic ingredients needed to apply the recipe for the
dynamical perturbation theory below $T_I$. The emerging propagators, vertices,
and counterterms are listed with their graphical
representation in Figs. \ref{fig6}, \ref{fig7} and \ref{fig8} (see Ref.
\onlinecite{Tae92}).
\subsection{Goldstone theorem and coexistence limit}
As mentioned before, the particularity of the $O(n)$-symmetric
functionals below the critical temperature is the occurrence of Goldstone
modes in the entire low temperature phase. \cite{Gol61,Wag66,Nel76}
Because no free energy is required for
an infinitesimal quasistatic rotation of the order parameter, the transverse
correlation length diverges in the limit of zero external field.
The corresponding massless modes are the Goldstone modes, \cite{Gol61}
in this context called phasons.
They are manifest in non-analytic behavior of correlation functions:
for example, the longitudinal static susceptibility diverges, its inverse
changing its leading behavior from $\chi_L^{-1}({\vc k},0) \propto k^{2}$ to
\cite{Nel76,Maz76,Law81,Tae92}
\begin{align}
\chi_L^{-1}({\vc k},0) \propto k^\varepsilon \ .
\end{align}
Before discussing the details of the renormalization theory below $T_I$,
we summarize some important aspects, which explain
why below $T_I$ an $\varepsilon$-expansion can be avoided.
For more details see Ref. \onlinecite{Tae92}.
Leaving the critical temperature region $T \approx T_I$,
which in the unperturbed case is characterized
by $m_\circ =0$, and lowering the temperature means
that the fluctuations of the longitudinal modes (amplitudons)
become negligible, because these modes acquire a mass $m_\circ$.
In contrast
the phasons remain massless and hence their fluctuations will dominate.
Yet a different way of describing the
dominance of the fluctuations of the Goldstone modes is to consider
the spherical model limit \cite{Maz76} $n \rightarrow \infty$.
In this case of ``maximal'' symmetry breaking, the Goldstone modes are weighted
with the factor $n-1 \rightarrow \infty$.
As mentioned above,
below $T_I$ coexistence anomalies are present. They arise from the fact that
the $n-1$ transverse modes are massless. In the limit $k \rightarrow 0$
and $\omega \rightarrow 0$ for $T<T_I$ this manifests itself
in typical infrared
divergences. An important point to remember is that in order
to obtain these coexistence anomalies, one can equivalently study the limit
$m_\circ \rightarrow \infty$.
In the renormalization scheme it is shown that the flow of
the mass parameter $m_\circ$ tends to infinity as the
momentum and frequency tend to zero.
From this it is plausible, and was also proved, \cite{Law81}
that in the coexistence limit the results for the
two-point vertex functions are identical with those arising from the
spherical model limit $n \rightarrow \infty$.
These findings render an $\varepsilon$ expansion unnecessary in the
coexistence limit ($k\rightarrow0$, $\omega \rightarrow 0$ at $m_\circ>0$),
because the
asymptotic theory (the spherical model) is exactly treatable and reduces to the
zero- and one-loop contributions.
Of course, one has to make sure that the
properties of the asymptotic functional will be reproduced in the respective
limit.
Within the generalized minimal subtraction scheme this is possible. As stated
in subsection \ref{tgtc_renorm} Lawrie's method \cite{Law81}
and its dynamical extension
in Ref. \onlinecite{Tae92} lead beyond these limits
and allow for a detailed study of the crossover behavior.
The behavior of the correlation functions in our case is
driven by the crossover between
the three fixed points present below $T_I$.
Besides the Gaussian fixed point one finds the Heisenberg fixed point
\begin{align}
[u_H = 6\varepsilon/(n+8)]
\end{align}
and the coexistence fixed point \cite{Law81}
\begin{align}
[u_C=6\varepsilon/(n-1)] \ .
\end{align}
We will again employ the generalized minimal subtraction scheme to study the
crossover between these fixed points.
\subsection{Renormalization group analysis below \boldmath{$T_I$}}
\subsubsection{Flow equations}
Below $T_I$, to one-loop order the field renormalization again vanishes.
Hence, the only non-trivial $Z$ factors are
the ones for the temperature scale and the coupling constant:
\begin{align}
m^2 &= Z_m^{-1} m_\circ^2 \mu^{-2} \ , \\
u &= Z_u^{-1} u_\circ A_d \mu^{-\varepsilon} \ .
\end{align}
Because we use $m$ instead of $r$, an important relationship can be stated,
which holds independently of the loop order: \cite{Tae92}
\begin{align}
Z_m \cdot Z_\sigma = Z_u \ .
\label{Zum}
\end{align}
To one-loop order ($Z_\sigma = 1$) we find (see App. \ref{app1}) \cite{Tae92}
\begin{align}
Z_u = Z_m =& 1 + \frac{n-1}{6 \varepsilon} u_\circ A_d \mu^{- \varepsilon}
\nonumber \\
& + \frac{3}{2 \varepsilon} u_\circ A_d \mu^{- \varepsilon}
\frac{1}{(1+m_\circ^2/\mu^2)^{\varepsilon/2}} \ .
\label{Z_kt}
\end{align}
Here, the contribution of the transverse loops lead to different divergences
manifest in the change of the $Z$ factors compared to those above $T_I$ [see
Eq. (\ref{Z_gt})].
We recover the familiar renormalization constant in the critical region by
setting $m_\circ=0$. When considering the coexistence limit $m_\circ
\rightarrow \infty$, the weight of the effective critical fluctuations is
reduced from $n+8$ to $n-1$, the number of Goldstone modes.
Asymptotically ($m_\circ \rightarrow \infty$)
the $Z$ factors are exact. In the crossover region they are an approximation to
order $u_\circ^2/(1+m_\circ^2/\mu^2)^{\varepsilon/2}$. \cite{Tae92}
From this we directly derive the flow-dependent couplings
\begin{align}
l \, \frac{\partial m(l)}{\partial l} = & \frac{1}{2} m ( l)
\left( -2 + \frac{n-1}{6} u(l) \right. \nonumber \\
& \left. + \frac{3}{2} \frac{u(l)}{[1+m(l)^2]^{1+ \varepsilon/2}} \right) \ ,
\\
l \, \frac{\partial u(l)}{\partial l} = & u ( l)
\left(
-\varepsilon + \frac{n-1}{6} u(l) \right. \nonumber \\
& \left. + \frac{3}{2}\frac{u(l)}{[1+m(l)^2]^{1+ \varepsilon/2}} \right) \ .
\end{align}
In terms of Wilson's flow functions $\beta_u$ and $\zeta_m$, these flow equations read
\begin{align}
l \frac{\partial m(l)}{\partial l} &= \frac{1}{2} m(l) \zeta_m(l)
\ , \\
l \frac{\partial u(l)}{\partial l} &= \beta_u(l) \ .
\end{align}
Three fixed points have now to be taken into consideration. \cite{Law81,Tae92}
Next to the Gaussian fixed point $u_G^* =0$ with $\zeta_{mG}^* = -2$, we find
in the critical limit ($m_\circ \rightarrow 0$)
the infrared-stable Heisenberg fixed point $u_H^* =6\varepsilon/(n+8)$
with $\zeta_{mH}^* = -2+\varepsilon$. In the coexistence limit $m_\circ
\rightarrow \infty$, we find in addition to the still
ultraviolet-stable Gaussian fixed
point the coexistence fixed point, identified by Lawrie,
\cite{Law81} $u_C^* =6\varepsilon/(n-1)$ with $\zeta_{mC}^* =
-2+\varepsilon$, which is infrared-stable.
Thus $m(l)^2$ diverges asymptotically for $l\rightarrow 0$ as
$l^{-2+\varepsilon}$, if $\varepsilon < 2$. Indeed, the coexistence limit is
described by a divergent mass parameter.
In Figs. \ref{fig10} and \ref{fig11} the flow for $m(l)$ and $u(l)$ is
plotted. We find for the flow $u(l)$
a crossover between the coexistence fixed point, inversely
proportional to the number of Goldstone modes $(n-1)$, and the Heisenberg
fixed point
\begin{align}
u(l) \ \ \Rightarrow \
\begin{cases}
6 \varepsilon/(n-1) & l \rightarrow 0 \\
6\varepsilon /(n+8) & l \approx 1 \ .
\end{cases}
\end{align}
That means that for $m(1) \ll 1 $ the coexistence limit is not
approached directly for
$l\rightarrow 0$, but for a while the flow stays near
the Heisenberg fixed point regime.
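This crossover is easily checked numerically. The following sketch is an
illustration only, not part of the analysis: the starting values $u(1) = 0.6$
and $m(1) = 0.1$, as well as the Euler step, are assumptions chosen for
demonstration. It integrates the one-loop flow equations for $\varepsilon = 1$
and $n = 2$, the case considered below:

```python
import math

def integrate_flow(n=2, eps=1.0, u1=0.6, m1=0.1, s_max=20.0, ds=1e-4):
    """Euler-integrate l du/dl = beta_u and l dm^2/dl = m^2 zeta_m
    toward l -> 0, using s = -ln l (so l = exp(-s_max) at the end).
    Starting values u1 = u(1), m1 = m(1) are illustrative assumptions."""
    u, m2 = u1, m1 ** 2
    for _ in range(int(s_max / ds)):
        # common one-loop combination appearing in both flow equations
        c = (n - 1) / 6.0 * u + 1.5 * u / (1.0 + m2) ** (1.0 + eps / 2.0)
        du_dl = u * (-eps + c)        # beta_u
        dm2_dl = m2 * (-2.0 + c)      # m^2 * zeta_m
        u -= ds * du_dl               # d/ds = -l d/dl
        m2 -= ds * dm2_dl
    return u, m2

u_ir, m2_ir = integrate_flow()
l_final = math.exp(-20.0)  # final value of the flow parameter l
```

Starting at the Heisenberg fixed point $u(1) = u_H^* = 0.6$ with a small
mass $m(1) = 0.1$, the coupling stays near $u_H^*$ for a while and then flows
to the coexistence fixed point $u_C^* = 6\varepsilon/(n-1) = 6$, while
$m(l)^2$ diverges like $l^{-2+\varepsilon}$.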
The scaling variable for $u(l)$ is here \cite{Sch94,Tae92}
\begin{align}
x=\frac{l}{m(1)^{2/(2-\varepsilon)}} \ ,
\end{align}
again leading to perfectly coinciding curves when plotted vs. $x$.
From the relation stated in Eq. (\ref{Zum})
one can deduce the renormalization-group invariant \cite{Tae92}
\begin{align}
\frac{m(l)^2}{u(l)} l^{2-\varepsilon} = \frac{m(1)^2}{u(1)} \ ,
\end{align}
which immediately gives us the scaling of $m(l)$ that can be observed in
Fig. \ref{fig10},
\begin{align}
m(l)^2 \propto l^{-1/\nu_{\text{eff}}} \ \ \text{with} \ \
\frac{1}{\nu_{\text{eff}}} =
\begin{cases}
2-\varepsilon & l \rightarrow 0 \\
2- \frac{n+2}{n+8} & l \approx 1 \ .
\end{cases}
\end{align}
Notice that the value of $1/\nu_{\text{eff}}$
in the first case is the same as $1/\nu$ for the spherical model.
The mass parameter $m$ diverges for $l \rightarrow 0$, $m(l)^2 \propto
l^{-2+\varepsilon}$ with $\varepsilon<2$.
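The first case follows directly from the renormalization-group invariant
stated above: since $u(l) \rightarrow u_C^*$ approaches a constant for
$l \rightarrow 0$,
\begin{align}
m(l)^2 = \frac{m(1)^2}{u(1)} \, u(l) \, l^{-(2-\varepsilon)}
\ \propto \ l^{-(2-\varepsilon)} \ ,
\end{align}
i.e. $1/\nu_{\text{eff}} = 2-\varepsilon$, the spherical-model value.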
From now on we will concentrate on three dimensions ($\varepsilon =1$) and
$n=2$.
\subsubsection{Matching}
In order to discuss the static susceptibility we use the matching
condition $\mu l=k$.
This relation connects the dependence of
the renormalized quantities on the momentum scale $\mu$
with the $k$ dependence, in which we are interested.
For the integrated value, we are interested in the
temperature dependence rather than the dependence
on the flow parameter $l$ or the wavevector $k$.
Thus, after integration we have to again
match the resulting $l$-dependent
relaxation rate with the physical temperature (see section
\ref{tgtc_renorm}).
\subsection{Susceptibility}
In order to determine the renormalized dynamical susceptibility, we evaluate
the one-loop diagrams for the two-point cumulants, which can be easily derived
from the one-loop vertex functions listed in Appendix \ref{app1}. \cite{Tae92}
Below $T_I$, in contrast to the mean-field result, the structure of the
susceptibility already changes at one-loop order.
We write in a general form
\begin{align}
\chi_{\circ \ \perp / \parallel}^{-1}( {\vc k}, \omega)
= \frac{- i \omega}{\lambda_\circ} + k^2 +
f_\circ^{\perp / \parallel} ( {\vc k}, \omega) \ ,
\end{align}
with the self-energy $f_\circ^{\perp / \parallel}$ containing the
contributions of the one-loop diagrams.
The explicit form of $f_\circ^{\perp / \parallel}$ is gained from the
calculation of the two-point vertex function in App. \ref{app1} and Eqs.
(\ref{sus_cor1}) and (\ref{cor_cum1}).
We then obtain the renormalized susceptibility $\chi_{\parallel,\perp}^R$
by inserting the $Z$
factors with the flow dependent coupling constants $u(l)$ and $m(l)$.
Because no field renormalization is present, we can replace $\lambda_\circ$
with $\lambda$ to this order.
This is because via the fluctuation-dissipation theorem
(\ref{fluk_diss}), $Z_\lambda$
and the field renormalizations are connected.
\endmcols
\subsubsection{Amplitudon modes}
In $d=3$ the longitudinal susceptibility characterizing
the amplitudon modes is given by \cite{Tae92}
\begin{align}
\chi_{\parallel}^R ( {\vc k}, \omega) =&
\, \frac{1}{ k^2 - i \,
\omega / \lambda + \mu^2 \, l^2 \, m(l)^2 \, Z_m(l)}
\Biggl(
1 \nonumber
+ \frac{1 }{ (k^2 - i \, \omega / \lambda) / \mu^2 \,l^2
+ m(l)^2} \, \frac{u(l) \, m(l)^2 }{ 2 \, k / \mu
\, l} \nonumber \\
& \times \Biggl[ \frac{n - 1 }{ 3} \left( \frac{\pi
}{ 2} + \arcsin \frac{i \, \omega / \lambda }{ k^2 - i \, \omega
/ \lambda} \right) \nonumber
+ 3 \Biggl( \arcsin \frac{ i \, \omega / \lambda \,
\mu^2 \, l^2 }{ \left[ \left( [k^2 - i \, \omega / \lambda
] / \mu^2 \, l^2 \right)^2 + 4 \, m(l)^2 \, k^2/
\mu^2 \, l^2 \right]^{1/2} } \nonumber \\
& + \arcsin \frac{ (k^2 - i \, \omega / \lambda)/
\mu^2 \, l^2 }{ \left[ \left( [k^2 - i \, \omega / \lambda
] / \mu^2 \, l^2 \right)^2 + 4 \, m(l)^2 \, k^2 / \mu^2 \, l^2
\right]^{1/2} } \Biggr) \Biggr] \Biggr)
\label{chi_long}
\end{align}
\beginmcols
First we want to discuss some limits in order to become acquainted with
this complex form of the susceptibility.
It is important to notice the change of the structure of the $RG$ susceptibility
that results from the one-loop contribution of the perturbation theory.
To clarify this, we state the asymptotic susceptibility, which is
evaluated for non-zero frequency
$({\vc k} \rightarrow 0, \omega > 0)$
\begin{align}
\chi_{\parallel}^{R} = \cfrac{1}{k^2 -i \omega/\lambda +
m(1)^2 \cdot k \cdot \cfrac{1}{1+ a \cdot g(k)}}
\end{align}
with a constant $a$ and a function $g(k)$ that is regular for $k
\rightarrow 0$.
From this limit, it becomes clear that we have to expect changes of
the scaling behavior.
In the coexistence limit $(\omega=0, {\vc k} \rightarrow 0)$
we recover the exact asymptotic result ($d=3$, $\varepsilon=1$)
\begin{align}
\chi_{\parallel}^{R} \propto k^{-1}
\label{coex_trans}
\end{align}
displaying the coexistence anomaly.
When keeping the frequency $\omega > 0$ fixed
$({\vc k} \rightarrow 0, \omega \neq 0)$ the imaginary part of
the susceptibility approaches a constant value
\begin{align}
\chi_{\parallel}^{R} \Rightarrow h(\omega) \ ,
\end{align}
where $h(\omega)$ is a function of $\omega$ only.
We can now turn to the full susceptibility.
The imaginary part of the susceptibility is plotted for
different temperatures in Fig. \ref{fig13}.
The structure of $\Im \chi_{\parallel}^{R}$
changes dramatically as compared to the mean-field result, as is to be expected.
The contributions of the phason and amplitudon loops are given by the terms in
brackets of Eq. (\ref{chi_long}).
They give rise to a qualitatively different behavior of
$\chi_{\parallel}^{R}$.
Different scaling regions can be identified. Expanding the imaginary part of
$\chi_{\parallel}^{R}$ yields analytical expressions for the scaling
regions, as listed in Table \ref{tab2}.
While the $k \rightarrow \infty$ and $k \rightarrow 0$ behavior reproduces
the mean-field result, the correct treatment of the Goldstone anomalies leads to
an additional $k^{-3}$ behavior in the intermediate region
$\sqrt{\omega/\lambda} < k < m(1)$.
A plateau appears for smaller $k$ and temperatures far away from the
critical temperature $T_I$.
The effective exponent $\kappa$ of the $k$ dependence of $\Im
\chi_{\parallel}^{R}$ is plotted in Fig. \ref{fig14}.
From this plot one can easily identify the scaling regions presented
in Table \ref{tab2}.
The influence of the Goldstone modes is therefore to alter the $k$ dependence
of the susceptibility not only in the coexistence limit, but also in
intermediate regions. In order to derive
the temperature dependence of $\chi_\parallel^R$, the flows of $m(l)$ and
$u(l)$ need to be considered in addition.
\endmcols
\subsubsection{Phason modes}
For the transverse susceptibility characterizing the phason modes one finds
\cite{Tae92}
\begin{align}
\chi_{\perp}^R ( {\vc k}, \omega) = &
\frac{1}{ k^2 - i \, \omega / \lambda}
\Biggl( 1 - \frac{u(l) \, m(l) / 6}{(k^2 - i \, \omega / \lambda) /
\mu^2 \, l^2}
\Biggl[ 2 - \frac{m(l)}{k / \mu \, l}
\Biggl( \frac{\pi}{ 2}
- \arcsin \frac{- i \, \omega / \lambda \, \mu^2 l^2 + m(l)^2}{
(k^2 - i \, \omega / \lambda ) / \mu^2 \, l^2 + m(l)^2}
\nonumber \\
&+ \arcsin \frac{i \, \omega / \lambda \, \mu^2 \, l^2 + m(l)^2 }{
\left[ \left(
(k^2 - i \, \omega / \lambda) / \mu^2 \, l^2 - m(l)^2 \right)^2 +
4 \, m(l)^2 \, k^2 / \mu^2 \, l^2 \right]^{1/2} } \nonumber \\
& + \arcsin \frac{(k^2 - i \, \omega / \lambda) / \mu^2 \, l^2
- m(l)^2 }{ \left[ \left(
(k^2 - i \, \omega / \lambda ) / \mu^2 \, l^2 - m(l)^2 \right)^2 +
4 \, m(l)^2 \, k^2 / \mu^2 \, l^2 \right]^{1/2} } \Biggr) \Biggr]
\Biggr) \ .
\label{chi_tran}
\end{align}
\beginmcols
Here the problem lies in the cancellation of terms with
respect to their $k$ dependence, hidden in the complex structure of
Eq. (\ref{chi_tran}).
Hence we start again with considering the coexistence limit
($ {\vc k} \rightarrow 0, \omega \rightarrow 0$):
\begin{align}
\chi_{\perp}^{R} \propto k^{-2} \ ,
\end{align}
which is easily found.
The results for $ {\vc k} \rightarrow 0, \omega \neq 0$ are more difficult to
obtain, because the $\arcsin$-terms cancel
their $k$-dependence against each other. In two limits this can be
done analytically.
For $m \rightarrow 0 $ one gets
\begin{align}
\chi_{\perp}^{R} \rightarrow
\chi_{\circ \perp} = \frac{1}{k^2 -i\omega / \lambda}
\end{align}
reproducing the mean-field susceptibility for the massless transverse modes.
For $m \rightarrow \infty$ the $\arcsin$-terms read as
$\frac{2}{m} + c_1 \frac{1}{m^3} + c_2 \frac{ i \omega}{\lambda k^2}
\cdot \frac{1}{m^3}$ leading to
\begin{align}
\chi_{\perp}^{R} \rightarrow
\cfrac{1}{k^2 -i\omega / \lambda + \cfrac{1}{6} u(l)(c_1 k^2 + c_2
\frac{i\omega}{\lambda})/m(l)} \ .
\end{align}
Here $c_1$ and $c_2$ are constants.
Thus in the two extreme limits the temperature dependence vanishes;
only in between, for $m(l) = {\cal O} (1)$, can we expect slightly
temperature-dependent behavior.
In Figs. \ref{fig15} and \ref{fig16} the imaginary part of the
transverse susceptibility and its effective exponent with respect to $k$ are
plotted for different temperatures.
Notice that leaving the
critical temperature leads to a temperature dependence.
This is caused by the coupling of the amplitudon and phason modes.
Yet as the temperature is further reduced,
the temperature dependence disappears again.
\subsection{Relaxation rate}
In this subsection we study consequences for the relaxation arising
from the Goldstone anomalies present in the susceptibility.
As mentioned in Sec. \ref{sec_nmr}, in order to gain the relaxation rate
we have to integrate over the imaginary part of the susceptibility.
Because the transverse susceptibility is temperature-dependent, the
relaxation rate connected with the phasons will be temperature-dependent as
well. This is of course not the case in the mean-field analysis.
As discussed in the last section, for $T \rightarrow T_I$
the susceptibility approaches the mean-field
result, and thus the relaxation rate at the critical temperature is
unaltered.
For the relaxation rate connected with the amplitudons the changes are more
subtle.
Therefore, we collect all the contributions from the one-loop diagrams in a
function $f({\vc k},\omega)$, which can be interpreted as a
${\vc k}$- and $\omega$-dependent dimensionless self-energy.
The susceptibility now has the following
structure
\begin{align}
(\chi_\parallel^R)^{-1} = k^2 - i\omega/\lambda + f({\vc k}, \omega) \mu^2 l^2
\ .
\end{align}
The dependence of $f$ on $k$ and
$\omega$ is plotted in Fig. \ref{fig17}.
We see that the real part of the effective mass $f$ is decreasing for
$k \rightarrow 0$. Thus, the
Goldstone anomalies lead to a reduction of the real part of
$f$. In the coexistence limit, $\omega \rightarrow 0$ and small $k$,
the real part of $f$ tends to 0 linearly and
relation (\ref{coex_trans}) is recovered.
The imaginary part is only $k$-dependent for very small $k$.
That means, when we integrate over all $k$ the influence of the Goldstone
anomalies can be interpreted as follows.
The effective Larmor frequency is raised and the mass
is lowered for small $k$ as compared to the mean-field description.
This can be derived easily from the longitudinal
relaxation rate with $f({\vc k},\omega)$ taken into consideration
\begin{align}
\frac{1}{T_1^\parallel} \propto &
\int_{}^{} k^2 dk \frac{\Im \chi_\parallel^R ({\vc k},\omega_L)}{\omega_L}
\nonumber \\
= & \int_{}^{} k^2 dk
\frac{1}{\mu^2 l^2 \omega_L} \nonumber \\
& \times \, \frac{\tilde \omega - \Im
f^\parallel(\tilde {\vc k},\tilde \omega)}
{[\tilde \omega -
\Im f^\parallel(\tilde {\vc k},\tilde \omega)]^2 + [\tilde
k^2 +
\Re f^\parallel(\tilde {\vc k},\tilde \omega)]^2}
\nonumber \\
= & \frac{1}{\mu l}\frac{1}{\lambda} \int_{}^{} \tilde k^2 d \tilde k
\nonumber \\
& \times \, \frac{1- \Im f^\parallel(\tilde {\vc k},\tilde \omega)/{\tilde
\omega}}{[\tilde \omega - \Im f^\parallel(\tilde {\vc
k},\tilde \omega) ]^2 + [\tilde k^2 + \Re f^\parallel(\tilde
{\vc k},\tilde \omega)]^2} \ ,
\end{align}
where again $\tilde k = k/\mu l$ and $\tilde \omega = \omega_L/\lambda \mu^2
l^2$. When we compare this result to the mean-field result, the interpretation
given above becomes clear. The relaxation rate is raised through the
influence of the Goldstone modes. Both the transverse and longitudinal
relaxation times are plotted in Fig. \ref{fig18}.
Again we have compared our findings with experimental data,
taken from Ref. \onlinecite{Mis97}.
In the low-temperature phase we have less freedom of choice in our theory,
as the scale $T_1(T=T_I)$ and the parameters are already
fixed by their high-temperature values. Thus only one parameter is left to
be adjusted.
In the vicinity of $T_I$ we find a temperature-independent region,
because both the
transverse and the longitudinal susceptibility become temperature-independent
and approach their mean-field values.
The transverse relaxation time shows a slight temperature dependence for
temperatures further away from $T_I$. If we use the identical choice of
parameters as for the high-temperature phase,
we find good agreement in the low-temperature phase as well.
The temperature where the maximum value of the transverse
relaxation time in our theory is reached is identified with the corresponding
temperature in the experiment.
This temperature dependence is due to the coupling between the phason and
amplitudon modes. We want to emphasize that, in agreement with the analysis of
ultrasonic attenuation experiments, \cite{Sch92,Sch94} no phason gap has to be
introduced to explain the experimental data for Rb$_2$ZnCl$_4$.
However, it is important to treat the influence of the Goldstone modes
beyond the mean-field approach.
For the longitudinal relaxation time, the crossover temperature represents
an additional important scale.
We used the same range $\Delta T \approx 5K$ as in the
high-temperature phase for the plot in Fig. \ref{fig18}.
Again good agreement between experiment and theory is observed.
Both theory and experiment show two scaling regions, one above
$\Delta T \approx 5K$ and one below.
The qualitative behavior is correctly reproduced,
but the quantitative agreement for the
longitudinal relaxation rate is not as good as in the high-temperature
phase.
A possible reason may be the following.
We calculated the coupling of the transverse and longitudinal
order parameter fields to one-loop order.
Below $T_I$, the coupling of the order parameter modes changes the structure
of the susceptibility, whereas above $T_I$ nothing dramatic happens.
Of course, below $T_I$ this is only the first step
beyond mean-field theory, and the two-loop corrections might lead to
quantitative modifications in the crossover region.
A comparison of the calculated
transverse and longitudinal relaxation rates below $T_I$
with the experimental data is consistent with this.
The slight temperature dependence is not as sensitive to such corrections as
the scaling behavior of the longitudinal relaxation time, which again shows
two regimes due to the crossover scenario.
As the characteristic features are reproduced correctly, we may say that
we understand the complex temperature dependence below $T_I$ in the context of
the coupled order parameter modes and a careful treatment of the Goldstone
modes.
Upon introducing
the $k$- and $\omega$-dependent self-energy $f$ we could in addition provide
a physical interpretation of the changes of the longitudinal relaxation time,
as compared to the mean-field analysis.
\section{Conclusions}
In this paper we have presented a comprehensive description of the critical
dynamics at structurally incommensurate phase transitions.
Our starting point was the time-dependent, relaxational Ginzburg-Landau model with
$O (2)$ symmetry. To be more general, we discussed the $O
(n)$-symmetric functional. Hence, we were able to study the influence of the
$n-1$ Goldstone modes accurately.
We used the renormalization group theory in order to compute the dynamical
susceptibility below and above the critical temperature $T_I$
to one-loop order. Thus we could venture beyond the usual
mean-field description.
As we calculated the renormalization factors in
the generalized minimal subtraction scheme,\cite{Ami78,Law81}
we could deal with the interesting crossover scenarios carefully. \cite{Tae92}
Our findings were used to interpret experimental data from $NMR$ experiments,
measuring the relaxation rate. The relaxation rate is connected with the
calculated susceptibility via an integral over the wavevector, at fixed
frequency.
Above the critical temperature $T_I$,
we showed how scaling arguments lead to an identification of the
dynamical critical exponent for the relaxation rate and provide
a qualitative understanding of its temperature dependence.
Then we described
the crossover from the critical region to a high-temperature region, where
fluctuations should not change the classical critical exponents.
Excellent agreement for
both the critical exponents resulting from the scaling arguments and the
description of the crossover regions with the experimental data was found.
This led
us to the conclusion that the experimental data should probably not
be interpreted by identifying a critical region of supposed width of $100K$,
but rather through a crossover between
the non-classical critical exponents and the mean-field exponents, taking place
at a temperature approximately equal to $T_I+5K$. This conjecture
yields a considerably more reasonable width of the critical region.
Below the critical temperature, we analyzed the
dynamical susceptibility calculated to one-loop order in the
renormalization group theory in considerable detail.
The coupling of the $OP$ modes was considered explicitly.
We thus gained new insight into the influence of
Goldstone modes on the structure of the susceptibility and its temperature
dependence.
As a result we found that the relaxation rate of the phason fluctuations
becomes temperature-dependent.
This temperature dependence disappears in the two limits when either the
temperature approaches the critical temperature $T_I$,
or the temperature is very low.
For the amplitudon modes the influence of the Goldstone modes is more subtle.
We summarized the effect in a wavevector- and frequency-dependent ``mass'' and
showed that this can be interpreted as a bending-down of the
temperature-dependent relaxation time as compared to a hypothetical situation
where no Goldstone modes are present.
All experimental findings are well understood by treating the $OP$ modes
beyond their mean-field description.
As reported from the analysis of ultrasonic attenuation experiments
for Rb$_2$ZnCl$_4$ before, \cite{Sch92,Sch94} no phason gap had to be
introduced.
Recently, however, the direct observation of a ``phason gap'' has been reported
for a molecular compound (BCPS). \cite{Oll98}
This ``phason gap'' was observed in inelastic neutron scattering
experiments for very high frequencies.
Again, the low frequency dynamics probed by $NMR$
did not reveal any gap.\cite{Sou91}
Thus, an interesting application
of the $O(2)$-symmetric model is presented
here, in terms of a crossover description and a discussion of the full $k$ and
$\omega$ dependence of the susceptibility calculated to one-loop order.
We found very good agreement with experimental data.
Besides the precise calculation of critical
exponents, a strength of the renormalization group theory is that it also
permits a detailed analysis of crossover scenarios and of the effect of the
anharmonic coupling of modes.
We want to stress how successfully the results of the renormalization group theory
can be applied to specific experimental findings.
In addition, we emphasize that the choice of two fit parameters in the
phase above $T_I$ already essentially determined the curves in the
incommensurate phase.
The theory presented here is formulated in a general way.
Therefore it could be readily used to analyze further experiments,
especially below and near $T_I$.
\acknowledgments
We benefited from discussions with E. Frey, J. Petersson,
and D. Michel. B.A.K. and F.S. acknowledge support from the
Deutsche Forschungsgemeinschaft under contract No. Schw. 348/6-1,2.
U.C.T. acknowledges support also from the
Deutsche Forschungsgemeinschaft through a habilitation fellowship
DFG-Gz. Ta 177 / 2-1,2.
\endmcols
\section{Final state interactions in the $pp\,\pi^0$ and $pp\,\eta$--system}
The energy dependence of the cross section of $pp\rightarrow pp\,\phi$
($\phi\in\{\pi^0,\eta\}$) at threshold is determined by final-state-modified
phase-space integrals$\;$\cite{kle1}.
A simple example is the shape independent effective range expansion
in the $pp$--final state
($s_1 := (p^\prime_1 + p^\prime_2)^2$,
$\kappa:= \sqrt{\lambda(s_1,m^2_p,m^2_p)}/(4m_p)$, $s=P^2$):
\begin{eqnarray}
\lefteqn{
R^{\; \scriptsize \mbox{FSI}}_3 (s \; ; \; m^2_p, m^2_p, m^2_{\phi} )
\approx
\int \! \frac{d^{\,3} p_{1^\prime}}{
2 \, \omega_{p} (|\vec{p}_{1^\prime}|)} \;
\frac{d^{\,3} p_{2^\prime}}{
2 \, \omega_{p} (|\vec{p}_{2^\prime}|)} \;
\frac{d^{\,3} k_\phi}{
2 \, \omega_{\,\phi} (|\vec{k}_\phi|)} } \cdot \nonumber \\
& \cdot & \delta^4 (p_{1^\prime} + p_{2^\prime} + k_\phi - P\, )
\left|
- \, \frac{\mbox{N}_\phi}{a_{pp}}
\frac{1}{(- \, 1/a_{pp} \; + \; r_{pp}
\, \kappa^2/2
- i \kappa )}
\, \right|^2 \; \alpha \;\; \eta^4_\phi \qquad
\end{eqnarray}
The dimensionless normalization $\mbox{N}_\phi$ is an open question in nearly
all theoretical models in the literature, which prevents them from
being quantitative.
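To get a feeling for the strength of this final-state distortion, the
effective-range factor $|-1/a_{pp} + r_{pp}\,\kappa^2/2 - i\kappa|^{-2}$
appearing in the integrand above can be sketched numerically. The values
$a_{pp} \approx -7.8\,$fm and $r_{pp} \approx 2.8\,$fm used below are
illustrative effective-range parameters, not fitted values:

```python
def fsi_factor(kappa, a_pp=-7.8, r_pp=2.8):
    """Effective-range pp final-state factor |(-1/a + r k^2/2 - i k)|^(-2).

    kappa is the relative momentum in fm^-1; a_pp and r_pp (in fm) are
    illustrative assumptions, not fitted values.
    """
    denom = complex(-1.0 / a_pp + 0.5 * r_pp * kappa ** 2, -kappa)
    return abs(1.0 / denom) ** 2

# The factor is strongly peaked at small relative momenta and falls off
# quickly, which is what reshapes the near-threshold phase space.
peak, tail = fsi_factor(0.05), fsi_factor(0.5)
```

Folding such a factor into the three-body phase-space integral is what
modifies the near-threshold energy dependence discussed above.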
\section{The reaction $pp\rightarrow pp\,\pi^0$ at threshold}
Applying a nonrelativistic meson-exchange model$\;$\cite{kle1,kle2} to
$pp\rightarrow pp\,\pi^0$ at threshold, we are able to describe the total
cross section qualitatively up to $\eta_\pi\approx 0.3$. The model contains
contributions from the impulse approximation (IA), off-shell rescattering,
heavy-meson exchanges, resonant S-- and P--waves,
recoil currents, and $\pi\pi$--box diagrams.
\section{The reaction $pp\rightarrow pp\,\eta$ at threshold}
The total cross section of $pp\rightarrow pp\,\eta$ at threshold has been
calculated within
a nonrelativistic and a relativistic meson--exchange model$\;$\cite{kle1,kle2}
(exciting the $S^-_{11}(1535)$) which is based on the following interaction Lagrangian:
\begin{eqnarray} \lefteqn{{\cal L}_{int} (x) =
- \, g_{\delta NN}
\bar{N} \vec{\tau} \cdot {\vec{\phi}}_{\delta} \, N
- \, g_{\sigma NN}
\bar{N} \phi_{\sigma} \, N
- i \, g_{\pi NN}
\bar{N} \gamma_5 \, \vec{\tau} \cdot {\vec{\phi}}_{\pi} \, N \, -} \nonumber \\
& - & i \, g_{\eta NN}
\bar{N} \gamma_5 \, \phi_{\eta} \, N
- \, g_{\rho NN} \,
\bar{N} \, \vec{\tau} \cdot [ \gamma_\mu \, {\vec{\phi}}^{\,\mu}_{\rho}
+ \frac{K_\rho}{2 m_N} \,
\sigma_{\mu\nu} \, \partial^\mu {\vec{\phi}}^{\,\nu}_{\rho} ]
\, N \, - \nonumber \\
& - &
g_{\omega NN} \,
\bar{N} \, [ \gamma_\mu \, \phi^{\,\mu}_{\omega}
+ \frac{K_\omega}{2 m_N} \,
\sigma_{\mu\nu} \, \partial^\mu \phi^{\,\nu}_{\omega} ]
\, N
+ \Big[ \,
i \, g_{\delta NS_{11}} \;
\bar{N}_{S_{11}} \gamma_5 \, \vec{\tau} \cdot {\vec{\phi}}_{\delta} \; N \, +
\nonumber \\
& + &
i \, g_{\sigma NS_{11}} \;
\bar{N}_{S_{11}} \gamma_5 \, \phi_{\sigma} \; N
- g_{\pi NS_{11}} \;
\bar{N}_{S_{11}} \vec{\tau} \cdot {\vec{\phi}}_{\pi} \; N
- g_{\eta NS_{11}} \;
\bar{N}_{S_{11}} \phi_{\eta} \; N \, +
\nonumber \\
& + &
i \, g_{\rho NS_{11}} \,\bar{N}_{S_{11}} \gamma_\mu \gamma_5
\, \vec{\tau} \cdot {\vec{\phi}}^{\,\mu}_{\rho} N
+ i \, g_{\omega NS_{11}} \,\bar{N}_{S_{11}} \gamma_\mu \gamma_5
\, \phi^{\,\mu}_{\omega} N
+ h.c. \Big]
\end{eqnarray}
Following Watson and Migdal, we multiplicatively separate the long-ranged
final state interactions from the T--matrix. The short-ranged (approximately
constant) production amplitude
is set to its threshold value. The resulting cross section of the relativistic
model is ($m_N \simeq (m_p+m_n)/2$):
\begin{eqnarray}
\lefteqn{
\sigma_{pp\rightarrow pp\,\eta} (s) \quad \simeq
\quad
\frac{1}{2\,!} \; \frac{1}{(2\pi )^5} \;\;
\frac{R^{\, \mbox{\scriptsize FSI}}_3 (s\; ; \; m^2_p, m^2_p,m^2_\eta )}{
2\; \sqrt{\lambda (s\; ; \; m^2_p, m^2_p )}}
\;\;
m_\eta \; (m_\eta + 4\, m_{\scriptscriptstyle N} \,) \; \cdot} \nonumber \\
& \cdot & \Big| \;
2\, m_{\scriptscriptstyle N} \,
\; [ \,
(\, X_{\,\delta } + X_{\,\sigma } \, ) \, (m_{\scriptscriptstyle N} + m_\eta ) -
(\, X_{\,\pi } + X_{\,\eta } \, ) \, (m_{\scriptscriptstyle N} - m_\eta ) \, +
\nonumber \\
& & + \,
(\, Y_{\,\delta } + Y_{\,\sigma } - Y_{\,\pi } - Y_{\,\eta } \, ) \, (m_{\scriptscriptstyle N} + m_\eta ) +
M_\delta + M_\sigma - M_\pi - M_\eta
\, ]
\, - \nonumber \\
& & - \, X_\rho \; m_{\scriptscriptstyle N} \;
[ \,
4 \, (m_\eta - 2\, m_{\scriptscriptstyle N}) +
K_\rho \, (5 m_\eta - 4\, m_{\scriptscriptstyle N})
\, ] \; + \nonumber \\
& & + \,
[ \,
Y_{\,\rho } \, (m_{\scriptscriptstyle N} + m_\eta ) +
\tilde{M}_\rho
\, ]
\; [\, K_{\rho}
\; (m_\eta - 4\, m_{\scriptscriptstyle N})
- 8\, m_{\scriptscriptstyle N} \, ] - \nonumber \\
& & - \, X_\omega \; m_{\scriptscriptstyle N} \;
\left[ \,
4 \, (m_\eta - 2\, m_{\scriptscriptstyle N}) +
K_\omega \, (5 m_\eta - 4\, m_{\scriptscriptstyle N})
\, \right] \; + \nonumber \\
& & + \,
[ \,
Y_{\,\omega } \, (m_{\scriptscriptstyle N} + m_\eta ) +
\tilde{M}_\omega
\, ]
\; [\, K_{\omega}
\; (m_\eta - 4\, m_{\scriptscriptstyle N})
- 8\, m_{\scriptscriptstyle N} \, ] \;
\Big|^{\, 2}
\end{eqnarray}
with$\;$\cite{kle4} (where $\phi\in \{\delta,\sigma,\pi,\eta,\rho,\omega\}$,
$M_{S_{11}}:=m_{S_{11}}- i \,\Gamma_{S_{11}}/2$, $D_\phi (q^2):=(q^2-m_\phi^2)^{-1}$,
$q^2:=- m_p m_\eta$, $p^2:=m_p \; (m_p-2 m_\eta)$,
$P^2:=(m_p+m_\eta)^2$):
\begin{eqnarray}
& & X_{\,\phi }
:=
D_\phi (q^2) \; g_{\, \phi NN} (q^2) \;
D_{S_{11}} (p^2) \; g^{\, \ast}_{\phi NS_{11}} (q^2) \;
g_{\eta NS_{11}} (m^2_{\, \eta}) \nonumber \\
& & Y_{\,\phi }
:=
D_\phi (q^2) \; g_{\, \phi NN} (q^2) \;
D^R_{S_{11}} (P^2)
\;
g^{\, \ast}_{\eta NS_{11}^R} (m^2_\eta) \;
g_{\phi NS_{11}^L} (q^2 ) \nonumber \\
& & M_\phi := X_\phi \, m_{S_{11}} + Y_\phi \, M_{S_{11}} \quad , \quad
\tilde{M}_\phi := - X_\phi \, m_{S_{11}} + Y_\phi \, M_{S_{11}} \nonumber \\
& &
D^R_{S_{11}} (P^2):=(P^2-M^2_{S_{11}})^{-1} \; , \quad
D_{S_{11}} (p^2):=(p^2-m^2_{S_{11}})^{-1}
\end{eqnarray}
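In schematic form, the Watson--Migdal prescription used above reads (our own shorthand; $A_{\rm thr}$ denotes the short-ranged production amplitude frozen at its threshold value):
$$
T_{fi}(s) \; \simeq \; T^{\,\mbox{\scriptsize FSI}}_{pp}(s_{pp}) \cdot A_{\rm thr}
\, , \qquad
\sigma (s) \; \propto \; R^{\,\mbox{\scriptsize FSI}}_3 (s\; ; \; m^2_p, m^2_p, m^2_\eta )
\; {\left| \, A_{\rm thr} \, \right|}^2 \, ,
$$
so that the entire energy dependence near threshold is carried by phase space and by the long-ranged $pp$ final state interaction, which is encoded in the enhancement factor $R^{\,\mbox{\scriptsize FSI}}_3$ appearing in the cross section formula above.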
\section{The reaction $pp\rightarrow p\Lambda\,K^+$ at threshold}
To open the door for future investigations we sketch some details of our
covariant quark--gluon Bethe--Salpeter (BS) model to describe $pp\rightarrow p\Lambda K^+$ at
threshold$\,$\cite{kle3}.
The total cross section of $pp\rightarrow p\Lambda K^+$ is given by:
\begin{eqnarray} \lefteqn{\sigma_{pp\rightarrow p\Lambda K} (s) \quad =
} \nonumber \\
& = &
\frac{1 }{2 \, \sqrt{\lambda (s; m^2_p, m^2_p)}}
\int\!\frac{d^{\,3}p_{1^\prime}}{(2\pi )^3\, 2\,\omega_p
(|\vec{p}_{1^\prime}|)}
\frac{d^{\,3}p_{2^\prime}}{(2\pi )^3\, 2\,\omega_\Lambda
(|\vec{p}_{2^\prime}|)}
\frac{d^{\,3}k}{(2\pi )^3\, 2\,\omega_K (|\vec{k} |)}
\nonumber \\
& & \nonumber \\
& &
(2\pi )^4 \; \delta^4 (p_{1^\prime} + p_{2^\prime} + k - p_1 - p_2) \;
\frac{1}{4} \; \sum\limits_{s_1,s_2} \;
\sum\limits_{s_{1^\prime},s_{2^\prime}}
\; {\left| \, T_{fi} \, \right| }^2 \label{xlb1}
\end{eqnarray}
\begin{eqnarray} i \, T_{fi}
& = & <p\,(\vec{p}^{\;\prime}_1,s^\prime_1)
\, \Lambda^0 (\vec{p}^{\;\prime}_2,s^\prime_2) \, K^+ (\vec{k})| \,T \,\Big[
\frac{i}{1!} \; {\hat{{\cal L}}}_{int} (0) +
\frac{i^2}{2!} \; {\hat{{\cal L}}}_{int} (0) \, {\hat{{\cal S}}}_{int}
+
\nonumber \\
& + &
\frac{i^3}{3!} \; {\hat{{\cal L}}}_{int} (0)
\, {\hat{{\cal S}}}_{int}
\, {\hat{{\cal S}}}_{int}
+
\ldots \Big] \, |p\,(\vec{p}_1,s_1)\,p(\vec{p}_2,s_2)>_{c} \label{xlb2}
\end{eqnarray}
\begin{equation}
{\hat{{\cal L}}}_{int} (x) \simeq
\sqrt{4\pi\alpha_s} \;\; \bar{\psi} (x) \;
\frac{\lambda_a}{2} \, \slsha{A}_a (x) \; \psi (x) \; , \;
\alpha_s (Q^2) \simeq 4\pi (\beta_0)^{-1} /
\ln \left(1+\frac{Q^2}{\Lambda_{QCD}^2}\right)
\end{equation}
(${\hat{{\cal S}}}_{int}:= \int d^{\,4}\!x\; {\hat{{\cal L}}}_{int} (x)$). The in- and outgoing protons are considered to be bare three--quark objects
dressed by scalar $q\bar{q}$--pairs. In our model we use properly normalised momentum
space quark and antiquark creation ($b^+,d^+$) and
annihilation ($b,d$) operators
($\{ b(\vec{p},\alpha), b^+(\vec{p}^{\;\prime},\alpha^\prime)\}=
\delta_{\alpha\alpha^\prime} \; (2\pi)^3 $ $2\,\omega(|\vec{p}\,|) \; \delta^3
(\vec{p}-\vec{p}^{\;\prime})$, \ldots) and Dirac
spinors ($\bar{u}(\vec{p},s) \, u(\vec{p},s^\prime)=2m \;\delta_{ss^\prime}$,
$\bar{v}(\vec{p},s)$ $v(\vec{p},s^\prime)=-2m \;\delta_{ss^\prime}$)
to express the asymptotic in- and outgoing
proton state vectors and the outgoing Kaon state vector
(up to renormalisation constants) in terms of
corresponding BS--amplitudes
($s_i,t_i,c_i$ denote spin, flavour and colour):
\begin{eqnarray} \lefteqn{|K^+ (P) > \quad \simeq \quad - \;
\sum\limits_{s_1,s_2} \;
\sum\limits_{t_1,t_2} \;
\sum\limits_{c_1,c_2}
\int \frac{d^{\, 3}p_1}{(2\pi )^3\, 2 m_1}
\; \frac{d^{\, 3}p_2}{(2\pi )^3\, 2 m_2} \;
\int d^{\, 3}x_1 \, d^{\, 3}x_2
} \nonumber \\
& \cdot &
\exp ( \displaystyle -i\, (\vec{p}_1 \cdot \vec{x}_1 +
\vec{p}_2 \cdot \vec{x}_2 )) \; \cdot \nonumber \\
& \cdot &
\bar{u}^{\,(1)} (\vec{p}_1,s_1,t_1,c_1) \;\;
< 0 | \, T \left( \, \psi^{\,(1)} (x_1) \, \bar{\psi}^{\,(2)} (x_2) \, \right) \, |P ,K^+>
\cdot \nonumber \\
& \cdot &
v^{\,(2)} (\vec{p}_2,s_2,t_2,c_2) \;
d^+ (\vec{p}_2,s_2,t_2,c_2) \;
b^+ (\vec{p}_1,s_1,t_1,c_1) \;
|0>
{\Big|}_{x^0_1\;=\;x^0_2\;=\;0} \qquad
\end{eqnarray}
\begin{eqnarray} \lefteqn{|p^+ (P) > \quad =
\sum\limits_{s_1,s_2,s_3} \;
\sum\limits_{t_1,t_2,t_3} \;
\sum\limits_{c_1,c_2,c_3}
\int \frac{d^{\, 3}p_1}{(2\pi )^3 \; 2 \,m_1}
\; \frac{d^{\, 3}p_2}{(2\pi )^3 \; 2 \,m_2}
\; \frac{d^{\, 3}p_3}{(2\pi )^3 \; 2 \,m_3}
} \nonumber \\
& & \nonumber \\
& &
\int d^{\, 3}x_1 \; d^{\, 3}x_2 \; d^{\, 3}x_3 \;
\exp[ -i\, (\vec{p}_1 \cdot \vec{x}_1 + \vec{p}_2 \cdot \vec{x}_2 +
\vec{p}_3 \cdot \vec{x}_3 )] \, \cdot
\nonumber \\
& &
\Big\{ \;
\bar{u}^{(1)} (\vec{p}_1,s_1,t_1,c_1) \,
\bar{u}^{(2)} (\vec{p}_2,s_2,t_2,c_2) \,
\bar{u}^{(3)} (\vec{p}_3,s_3,t_3,c_3) \nonumber \\
& &
< 0 | \, T \left( \, \psi^{\,(1)} (x_1) \, \psi^{\,(2)} (x_2) \, \psi^{\,(3)}
(x_3) \, \right) \, |P,p^+>
\nonumber \\
& &
b^+ (\vec{p}_3,s_3,t_3,c_3) \;
b^+ (\vec{p}_2,s_2,t_2,c_2) \;
b^+ (\vec{p}_1,s_1,t_1,c_1) \;
|0> \; - \;
\sum\limits_{s_4,s_5} \;
\sum\limits_{t_4,t_5} \;
\sum\limits_{c_4,c_5}
\nonumber \\
& &
\int
\frac{d^{\, 3}p_4}{(2\,\pi )^3 \; 2\, m_4} \;
\frac{d^{\, 3}p_5}{(2\,\pi )^3 \; 2\, m_5} \;
\int
d^{\, 3}x_4 \;
d^{\, 3}x_5 \;
\exp[ -i\, (
\vec{p}_4 \cdot \vec{x}_4
+ \vec{p}_5 \cdot \vec{x}_5 )] \cdot
\nonumber \\
& & \nonumber \\
& &
\bar{u}^{(1)} (\vec{p}_1,s_1,t_1,c_1) \,
\bar{u}^{(2)} (\vec{p}_2,s_2,t_2,c_2) \,
\bar{u}^{(3)} (\vec{p}_3,s_3,t_3,c_3) \,
\bar{u}^{(4)} (\vec{p}_4,s_4,t_4,c_4) \nonumber \\
& &
\quad < 0 | \, T \left( \, \psi^{\, (1)} (x_1) \, \psi^{\, (2)} (x_2)
\, \psi^{\, (3)} (x_3) \, \psi^{\, (4)} (x_4)
\,\bar{\psi}^{\, (5)} (x_5) \, \right) \, |P,p^+>
\cdot \nonumber \\
& &
\bar{v}^{(5)} (\vec{p}_5,s_5,t_5,c_5)
\;
d^+ (\vec{p}_5,s_5,t_5,c_5) \;
b^+ (\vec{p}_4,s_4,t_4,c_4) \;
b^+ (\vec{p}_3,s_3,t_3,c_3) \nonumber \\
& &
b^+ (\vec{p}_2,s_2,t_2,c_2) \;
b^+ (\vec{p}_1,s_1,t_1,c_1) \;
|0> +
\ldots \quad \Big\} {\Big|}_{x^0_1\;=\;x^0_2 \;=\;\ldots \;=\;0}
\end{eqnarray}
\begin{eqnarray}<K^+(\vec{P})|K^+(\vec{P}^{\;\prime})> \; = &
<P,K^+|P^\prime,K^+> & = \; (2\pi )^3 \; 2 \,
\omega_K (|\vec{P}\;|) \;
\delta^3 (\vec{P}-\vec{P}^{\;\prime}) \nonumber \\
<p^+(\vec{P})|\,p^+(\vec{P}^{\;\prime})> \; = &
<P,p^+|P^\prime,p^+> & = \; (2\pi )^3 \; \; 2 \,
\omega_p (|\vec{P}\;|) \; \delta^3 (\vec{P}-\vec{P}^{\;\prime}) \nonumber
\end{eqnarray}
A similar expression holds for the outgoing $\Lambda$--Baryon state
$|\Lambda (P) >$. Both the proton and the $\Lambda$ are considered to be
quark--diquark objects. Obviously the five quark BS--amplitude above
is connected to the ``strangeness content'' of the proton.
As for the baryons $p^+$ and $\Lambda$, the BS--amplitude of the $K^+$ is calculated via a separable
BS--kernel. The further evaluation of eqs.\ (\ref{xlb1}) and (\ref{xlb2}) proceeds via Wick's
theorem. Final integrations are performed in light-cone variables. A similar approach to $pp\rightarrow pp\,\phi$ (quark--gluon picture)
and $pp\rightarrow d\pi^+$ (nucleon--meson picture) at threshold is under way.
\section*{References}
\section{Introduction}
This paper is a sequel to the work we had started, with Dennis
Sullivan, in our earlier publication \cite{BNS}. In that work
the {\it universal commensurability mapping
class group}, $MC_{\infty}(X)$, was introduced, and it
was shown that this group acts by biholomorphic
{\it modular transformations} on the universal
direct limit, ${\cal T}_{\infty}(X)$, of Teichm\"uller spaces of
compact surfaces. This space ${\cal T}_{\infty}(X)$ was named the universal
commensurability Teichm\"uller space.
Let $X$ be a compact connected oriented surface of genus at
least two. We recall that the elements of the universal
commensurability mapping class group, $MC_{\infty}(X)$,
arise from closed circuits, starting and terminating at $X$, in
the graph of all {\it topological} coverings of $X$ by other
compact connected oriented topological surfaces. The edges of
the circuit represent covering morphisms, and the vertices
represent the corresponding covering surfaces. The group
$MC_{\infty}(X)$
is naturally isomorphic to the group of virtual automorphisms
of the fundamental group ${\pi}_1(X)$ \cite{BN}. A virtual
automorphism is an isomorphism between two finite index
subgroups of ${\pi}_1(X)$; two such isomorphisms are
identified if they agree on some finite index subgroup.
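For example, inner conjugation already provides virtual automorphisms: every $g \in {\pi}_1(X)$ determines the class of
$$
i_g \, : \, {\pi}_1(X) \, \longrightarrow \, {\pi}_1(X) \, , \qquad
i_g(h) \, = \, g\,h\,g^{-1} \, ,
$$
and two virtual automorphisms $\phi : H_1 \longrightarrow H_2$ and $\psi : K_1 \longrightarrow K_2$ are composed, as usual for partially defined maps, on the finite index subgroup ${\phi}^{-1}(H_2 \cap K_1)$.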
The Teichm\"uller space for $X$, denoted ${\cal T}(X)$, is
the space of all conformal structures on $X$ quotiented by
the group of diffeomorphisms of $X$ isotopic to the
identity.
Any unramified covering $p : Y \longrightarrow X$ induces an
embedding ${\cal T}(p) : {\cal T}(X) \longrightarrow {\cal T}(Y)$,
defined by sending a complex structure on $X$ to its pull back
on $Y$ using $p$. The complex analytic ``ind-space'',
${\cal T}_{\infty}(X)$,
is the direct limit of the finite dimensional Teichm\"uller
spaces of connected coverings of $X$, the connecting maps of
the direct system being the maps ${\cal T}(p)$. (This inductive system
of Teichm\"uller spaces is built over the directed set of all
unramified covering surfaces of $X$. The precise definitions
are in the pointed category; see \cite{BNS} and Section 2 below.)
As stated earlier, there is a natural action of $MC_{\infty}(X)$ on
${\cal T}_{\infty}(X)$. For exact definitions we refer
to section III.1 below. Let $CM_{\infty}(X)$ denote the
image of $MC_{\infty}(X)$, arising via this action, in the
holomorphic automorphism group of ${\cal T}_{\infty}(X)$.
The group $CM_{\infty}(X)$ was called the {\it universal
commensurability modular group} in \cite{BNS}.
We prove in Theorem 3.14 of this paper that the action of
$MC_{\infty}(X)$ on ${\cal T}_{\infty}(X)$ is {\it effective}. In other
words, the projection of $MC_{\infty}(X)$ onto $CM_{\infty}(X)$
is an isomorphism.
As noted in \cite{BNS}, the direct limit space ${\cal T}_{\infty}(X)$ is the
universal parameter space of {\it compact} Riemann surfaces, and
it can be interpreted as the space of transversely locally
constant complex structures on the universal hyperbolic solenoid
$$
{H}_{\infty}(X) ~ := ~
{\widetilde{X}}{\times_{{\pi}_1(X)}}{\widehat{{\pi}_1(X)}} \, .
$$
Here $\widetilde{X}$ is the universal cover of $X$ and
$\widehat{{\pi}_1(X)}$ is the profinite completion of
${\pi}_1(X)$. The transverse direction mentioned above refers
to the fiber direction for the natural projection of ${H}_{\infty}(X)$
onto $X$.
In this article, to any compact connected {\it Riemann surface}
$X$, we associate a subgroup of $MC_{\infty}(X)$, which we
denote by $\mbox{\rm ComAut}(X)$, that may be called the {\it
commensurability automorphism group} of $X$. The members of
$\mbox{\rm ComAut}(X)$ arise from closed circuits, again
starting and ending at $X$, whose edges represent {\it
holomorphic} coverings amongst compact connected {\it
Riemann} surfaces. Indeed, this group $\mbox{\rm ComAut}(X)$,
turns out to be precisely the {\it stabilizer}, of the
(arbitrary) point $[X] \in {\cal T}_{\infty}(X)$, for the action of the
universal commensurability modular group $CM_{\infty}(X)$.
As we mentioned earlier, each point of ${\cal T}_{\infty}(X)$ represents a
complex structure on the universal hyperbolic solenoid,
${H}_{\infty}(X)$. The base leaf in ${H}_{\infty}(X)$ is the path connected subset
$\widetilde{X}\times_{{\pi}_1(X)} {\pi}_1(X)$. We
show that $\mbox{\rm ComAut}(X)$ acts by {\it holomorphic}
automorphisms on this complex analytic solenoid, preserving the
base leaf. In fact, we demonstrate that $\mbox{\rm ComAut}(X)$ is the full
group of {\it base leaf preserving holomorphic automorphisms of}
${H}_{\infty}(X)$.
The study of the isotropy group associated to any point of
${\cal T}_{\infty}(X)$ makes direct connection with the well-known theory of
{\it commensurators} of the corresponding uniformizing Fuchsian
groups. The commensurator, denoted by $\mbox{\rm Comm}(G)$, of a
Fuchsian group, $G \subset PSL(2, {\Bbb R})$, is the group consisting
of those M\"obius transformations in $PSL(2, {\Bbb R})$ that
conjugate $G$ onto a group that is ``commensurable'' ($\equiv$
finite-index comparable) with $G$ itself. In other words,
$g \in \mbox{\rm Comm}(G)$ if and only if $G \cap gGg^{-1}$
is of finite index in both $G$ and $gGg^{-1}$.
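A classical example, recorded here only for orientation, is the (non-co-compact, arithmetic) modular group: for $G = PSL(2, {\Bbb Z})$ the commensurator consists of the M\"obius transformations with rational matrix entries,
$$
\mbox{\rm Comm}(PSL(2, {\Bbb Z})) \; = \; PGL(2, {\Bbb Q})^{+} \; \subset \; PSL(2, {\Bbb R}) \, ,
$$
which is visibly dense in $PSL(2, {\Bbb R})$, in accordance with the arithmeticity criterion of Margulis discussed below.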
Let $X = {\Delta}/G$, where $\Delta$ is the unit disk, and $G$
is a torsion free co-compact Fuchsian group. We will demonstrate
in Section 4 that $\mbox{\rm ComAut}(X)$ is {\it canonically isomorphic} to
$\mbox{\rm Comm}(G)$.
If the genus of $X$ is at least three, then the subgroup of
$MC_{\infty}(X)$ that fixes the stratum ${\cal T}(X) \ (\subset
{\cal T}_{\infty}(X))$ pointwise is shown to be precisely a copy
of ${\pi}_1(X)$, [Proposition 4.4]. The group ${\pi}_1(X)$
is realized as a subgroup of the group of virtual automorphisms
of ${\pi}_1(X)$ by the inner conjugation action.
Now, the commensurator of the Fuchsian group, $G$, associated to a
generic compact Riemann surface (of genus at least three), is
known to be simply $G$ itself \cite{Gr1}, \cite{Sun}. On the
other hand, it is a deep result, following from the work of
G.A. Margulis \cite{Mar}, that $\mbox{\rm Comm}(G)$ (for $G$
any finite co-volume Fuchsian group) is {\it dense} in
$PSL(2, {\Bbb R})$ if and only if $G$ is an {\it arithmetic}
subgroup. We explain these matters in Section 4.
The countable family of co-compact arithmetic Fuchsian groups
plays a central r\^ole in one result of this paper, which
we would like to highlight. In Theorem 5.1 we assert
that the biholomorphic action of $\mbox{\rm ComAut}(X)$
on the complex solenoid ${H}_{\infty}(X)$ turns out to be {\it ergodic}
precisely when the Fuchsian group uniformizing $X$ is
{\it arithmetic}. The proof of this theorem
utilizes strongly the result of Margulis quoted above. Here the
ergodicity is with respect to a natural measure that exists on
each of these complex analytic solenoids ${H}_{\infty}(X)$. In fact, the
product measure on $\widetilde{X}\times\widehat{{\pi}_1(X)}$,
arising from the Poincar\'e measure on $\widetilde{X}$ and the
Haar measure on $\widehat{{\pi}_1(X)}$, is actually invariant
under the action of ${\pi}_1(X)$. Consequently it induces the
relevant natural measure on ${H}_{\infty}(X)$. (There are also elegant
alternative ways to construct this measure; see Section 5.)
Another aspect regarding the applications of the group $MC_{\infty}(X)$,
as well as of its isotropy subgroups, arises in lifting these
actions to vector bundles over ${\cal T}_{\infty}(X)$. In fact, the space ${\cal T}_{\infty}(X)$
supports certain natural holomorphic vector bundles, where each
fiber can be interpreted as the space of holomorphic $i$-forms
on the corresponding complex solenoid. The action of the
modular group $MC_{\infty}(X)$ does in fact lift canonically to
these bundles, and the action on the relevant fiber of the isotropy
subgroup, $\mbox{\rm ComAut}(X)$, is studied. The basic question of whether
or not the action of the commensurability automorphism group
is {\it effective} on the corresponding infinite dimensional fiber
is settled in the affirmative [Theorem 6.5]. It is also shown,
[section VI.3], that the action of the commensurability modular
group preserves a natural {\it Hermitian} structure on the bundles.
Some of the results presented here were announced in \cite{BNcras}.
\medskip
\noindent
{\it Acknowledgments:}\, As we have already said at the outset,
the present work is a continuation of the joint work, \cite{BNS},
with Dennis Sullivan. It is a pleasure for us to record our
gratitude to him.
We are grateful to the following mathematicians for many helpful
and interesting discussions~: S.G. Dani, S. Kesavan, D.S. Nagaraj,
C. Odden, M.S. Raghunathan, P. Sankaran, V.S. Sunder and
T.N. Venkataramana. We are especially grateful to S.G. Dani for
rendering us a lot of help regarding Section 5, and to T.N.
Venkataramana for pointing out a very useful reference.
\bigskip
\section {The universal limit objects ${H}_{\infty}$ and ${\cal T}_{\infty}$}
The Teichm\"uller space ${\cal T}(X)$ parametrizes isotopy classes of
complex structures on any compact, connected, oriented
topological surface, $X$. We recall that if $\mbox{Conf}(X)$ is
the space of all complex structures over $X$ compatible with the
orientation, and $\mbox{Diff}_0(X)$ is the group of all
diffeomorphisms of $X$ homotopic to the identity map, then
${\cal T}(X) \, = \, \mbox{Conf}(X)/\mbox{Diff}_0(X)$.
Fix a compact connected oriented surface $X$. Consider any
orientation preserving unbranched covering over $X$~:
$$
p : \, Y \, \longrightarrow \, X \, ,
\leqno(2.1)
$$
where $Y$ is allowed to be an arbitrary compact connected
oriented surface. Associated to $p$
is a proper injective holomorphic immersion
$$
{\cal T}(p) : \, {\cal T}(X) \, \longrightarrow \, {\cal T}(Y) \, ,
\leqno(2.2)
$$
which is defined by mapping (the isotopy class of) any complex
structure on $X$ to its pull back by $p$. It is easy to check
that the injective map
$$
p^* \, : \, \mbox{Conf}(X) ~ \longrightarrow ~
\mbox{Conf}(Y) \, ,
$$
obtained using pull back of complex structures
by $p$, actually descends to a map ${\cal T} (p)$ between the
Teichm\"uller spaces. The map ${\cal T}(p)$ respects the Teichm\"uller
metrics; the Teichm\"uller metric
determines the quasiconformal-distortion. The
association of ${\cal T}(p)$ to $p$ is a contravariant functor from
the category of surfaces and covering maps to the category
of complex manifolds and holomorphic maps.
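For a quantitative feeling of how the strata grow along the tower, recall the standard facts (not needed later) that ${\cal T}(X)$ is a complex manifold of dimension $3g-3$ when $X$ has genus $g \geq 2$, while the genus $g_Y$ of an unramified degree $N$ cover $Y$ of a genus $g_X$ surface $X$ is governed by the Riemann--Hurwitz formula:
$$
\dim_{\Bbb C} {\cal T}(X) \, = \, 3g - 3 \, , \qquad
2g_Y - 2 \, = \, N \, (2g_X - 2) \, .
$$
Thus ${\cal T}(p)$ embeds the $(3g_X-3)$-dimensional space ${\cal T}(X)$ into the much larger $(3g_Y-3)$-dimensional space ${\cal T}(Y)$.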
We will now recall some basic constructions from \cite{BNS} and
\cite{BN}. For our present purposes we first need to carefully
explain the various related directed sets over which our
infinite limit constructions will proceed.
\bigskip
\noindent
{\bf II.1. Directed sets and the solenoid ${H}_{\infty}$~:}
Henceforth assume that $X$ has genus greater than one,
and also fix a {\it base point} $x \in X$.
Fix a {\it pointed universal covering} of $X$~:
$$
u\, :\, ({\widetilde X}, \star) \, \longrightarrow\, (X, x) \, ,
\leqno(2.3)
$$
and canonically identify $G:={\pi}_1(X,x)$ as the
group of deck transformations of the covering map
$u$. Note that any two choices
of the pointed universal covering are canonically isomorphic.
Let ${\cal I}(X)$ denote the directed set consisting of all
unbranched {\it pointed} coverings $p: (Y,y) \longrightarrow (X,x)$ over
$X$. Note that if $p$ is an (unpointed) covering of degree $N$,
as in (2.1), then there are $N$ distinct members of ${\cal I}(X)$
corresponding to it, each
being a copy of $p$ but with the $N$ distinct choices of a base
point $y \in p^{-1}(x)$ on $Y$. The partial order in ${\cal I}(X)$ is
determined in the obvious way by base-point preserving factoring
of covering maps. More precisely, given another pointed covering
$q : (Z,z) \longrightarrow (X,x)$ in ${\cal I}(X)$, we say $q \geq p$
if and only if there is a pointed covering map
$$
r ~ : ~(Z,z) ~ \longrightarrow ~(Y,y)
$$
such that $p\circ r = q$. It is important to note that a factoring
map, when it exists, is {\it uniquely} determined, because we work
in the pointed category.
Let ${\rm Sub}(G)$ denote the directed set of all finite index
subgroups in $G$, ordered by reverse inclusion. In other words,
for two subgroups $G_1, G_2 \subset G$, we say $G_1 \geq G_2$ if
and only if $G_1 \subseteq G_2$.
There are
order-preserving canonical maps each way~:
$$
A\, : \, {\cal I}(X) \, \longrightarrow \, {\rm Sub}(G) ~~~\mbox{and}~~~ B\, : \,
{\rm Sub}(G) \, \longrightarrow \, {\cal I}(X) \leqno(2.4)
$$
The map $A$ associates to any $p \in {\cal I} (X)$, as above, the
image of the monomorphism $p_* : {\pi}_1(Y,y) \longrightarrow
{\pi}_1(X,x)$. The latter map, $B$, sends the subgroup $H \in
{\rm Sub}(G)$ to the pointed covering $({{\widetilde X}}/H, \star) \longrightarrow
(X,x)$, with $\star$ denoting, of course, the $H$-orbit of the
base point in the universal cover ${\widetilde X}$. These covers
arising from quotienting ${\widetilde X}$
by subgroups of $G$, provide canonical
models, up to isomorphism, for arbitrary members of ${\cal I}(X)$.
Notice that the composition $A \circ B$ is the identity map on
${\rm Sub}(G)$. Consequently, $A$ is surjective, and $B$ maps
${\rm Sub}(G)$ injectively onto a cofinal subset in ${\cal I}(X)$.
As mentioned, this cofinal subset contains a representative
for every isomorphism class of pointed covering of $X$.
It is also convenient to introduce the directed cofinal subset,
${\cal I}_{\rm gal}(X)$ comprising only the normal (Galois) coverings
in ${\cal I}(X)$. The corresponding cofinal subset in ${\rm Sub}(G)$
is denoted ${\rm Sub}_{\rm nor}(G)$, and it consists of all the
normal subgroups in $G$ of finite index.
\medskip
\noindent
{\it Remark 2.5~:} For the construction of the projective and
inductive limit objects, ${H}_{\infty}$ and ${\cal T}_{\infty}$ mentioned in the
Introduction, we note that we can work with any of the directed
sets (${\cal I}(X)$, ${\cal I}_{\rm gal}(X)$, ${\rm Sub}(G)$, or ${\rm
Sub}_{\rm nor}(G)$) as introduced above; by utilizing the
relationships described above, it follows that the actual limit
objects will not be affected by which directed set we happen to
use. It is, however, rather remarkable that, in order to define
the commensurability groups of automorphisms on these very limit
objects (see Section 3 below), one is forced to work with the
sets like ${\cal I}(X)$, or, equivalently, with the actual
monomorphisms between surface groups; in other words, working
with just their image subgroups in $G$ does not suffice.
\medskip
Denote by $H_{\infty}(X)$ the inverse limit, $\limproj
X_{\alpha}$, where ${\alpha}$ runs through the index set
${\cal I}(X)$, and where $X_{\alpha}$ denotes the covering surface
(the domain of the map $\alpha$). Introduced in \cite{Su}, the
space $H_{\infty}(X)$ is known as the {\it universal hyperbolic
solenoid}. The ``universality'' of this object resides in the
evident but crucial fact that these spaces ${H}_{\infty}(X)$, as well as
their Teichm\"uller spaces $\THinX$, do {\it not} really depend
on the choice of the base surface $X$. If we were to start with
a pointed surface $X'$ of different genus (both genera being greater
than one), we could pass to a common covering surface of $X$ and
$X'$ (always available), and hence the limit spaces we construct
would be isomorphic. More precisely, there is a natural
isomorphism between ${H}_{\infty}(X)$ and $H_{\infty}(X')$ whenever we fix
a surface $(Y,y)$ together with pointed covering maps of it
onto $X$ and $X'$.
We are therefore justified in suppressing $X$ in our
notation and referring to ${H}_{\infty}(X)$ as simply ${H}_{\infty}$. For
each surface $X$ there is a natural projection
$$
p_{\infty}\, : \, {H}_{\infty}(X) \longrightarrow X \,
\leqno(2.6)
$$
induced by coordinate projection from $\prod X_{\alpha}$ onto
$X$. Each fiber of $p_{\infty}$ is a perfect, compact and totally
disconnected space --- homeomorphic to the Cantor set.
The space ${H}_{\infty}(X)$ itself is compact and connected, but not path
connected. The path components of $H_{\infty}(X)$ are christened
``leaves". Each leaf, equipped with the ``leaf-topology" (which
is strictly finer than the subspace topology it inherits from
${H}_{\infty}(X)$), is a simply connected two-manifold; when restricted
to any leaf, the map $p_{\infty}$ is an universal covering
projection on $X$. There are uncountably many leaves in ${H}_{\infty}(X)$,
and each is dense in ${H}_{\infty}(X)$. The base point of $\widetilde X$
determines a base point in $H_{\infty}(X)$. The {\it base leaf}
is, by definition, the one containing the base point.
An alternative construction of $H_{\infty}(X)$ that we will be
using repeatedly is as follows.
First let us recall the definition of the profinite
completion of any group $G$. For us $G$ will be
$\pi_1(X)$.
The {\it profinite completion} of $G$ is the projective limit
$$
\widehat{G} \, = \, \limproj (G/H) \, ,
\leqno(2.7)
$$
where the limit is taken over all $H \in {\rm Sub}_{\rm
nor}(G)$. For $G$ a surface group, note that each $G/H$ can be
identified with the deck transformation group of the finite
Galois cover corresponding to $H$. This group $\widehat{G}$,
endowed with the inverse limit topology (each finite
group $G/H$ being discrete), is homeomorphic to the Cantor set. There is a
natural homomorphism from $G$ into $\widehat{G}$ induced by the
projections of $G$ onto $G/H$. Since $G$ is residually finite,
one sees that this homomorphism of $G$ into $\widehat{G}$ is
injective.
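A familiar example may serve as orientation (of course ${\Bbb Z}$ is not a surface group): for $G = {\Bbb Z}$, whose finite index subgroups are exactly the $n{\Bbb Z}$, one obtains
$$
\widehat{{\Bbb Z}} \; = \; \limproj \; {\Bbb Z}/n{\Bbb Z}
\; \cong \; \prod_{p \; {\rm prime}} {\Bbb Z}_p \, ,
$$
the limit being taken over all $n$; the product decomposition follows from the Chinese remainder theorem, and the diagonally embedded copy of ${\Bbb Z}$ is a dense subgroup of this Cantor group.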
We will also require another useful description
of the profinite completion group $\widehat{G}$.
Consider the Cartesian product
$$
\prod_{H \in {\rm Sub}_{\rm nor}(G)} G/H \, ,
$$
which, using Tychonoff's theorem, is a compact topological
space. There is a natural homomorphism of $G$ into it; the
injectivity of this homomorphism is equivalent to the assertion
that $G$ is residually finite. The {\it closure} of $G$ in this
product space is the profinite completion $\widehat G$.
Denoting by $G$ the fundamental group, ${\pi}_1(X,x)$, of the
pointed surface $(X,x)$, the universal cover, $u: {\widetilde X} \rightarrow X$,
has the structure of a principal $G$-bundle over $X$. It is not
difficult to see that the solenoid $H_{\infty}(X)$ can also be
defined as the principal $\widehat{G}$-bundle
$$
p_{\infty} \, : \, {H}_{\infty}(X) \, \longrightarrow \, X \, ,
\leqno(2.8)
$$
obtained by extending the structure group of the principal
$G$-bundle, defined by the
universal covering, using the natural inclusion homomorphism
of $G$ into its profinite completion $\widehat{G}$. The typical
fiber of the
projection $p_{\infty}$ is the Cantor group $\widehat{G}$.
In other words, the solenoid ${H}_{\infty}(X)$ is identified with
the quotient of the product ${\widetilde X} \times {\widehat{G}}$
by a natural properly discontinuous and free $G$-action~:
$$
{H}_{\infty}(X) \, \equiv \, {{\widetilde X} \times_G \widehat{G}}
\leqno(2.9)
$$
Here $G$ acts on ${\widetilde X}$ by the deck transformations, and it acts
by left translations on $\widehat{G}$.
One further notes that the projection~:
$$
P_G ~:~{{\widetilde X} \times {\widehat{G}}}~\longrightarrow ~{\widetilde X} \times_G \widehat{G}
\leqno(2.10)
$$
enjoys the property that its restriction to any slice of the
form ${{\widetilde X}} \times \{\hat{\gamma}\}$, for arbitrarily fixed
member ${\hat{\gamma}} \in \widehat{G}$, is a homeomorphism onto
a path connected component (i.e., a ``leaf'') of ${\widetilde X}\times_G
\widehat{G}$. It will be useful to remark that a point
$(z,\hat{\gamma}) \in {{\widetilde X} \times \widehat{G}}$ maps by $P_G$
into the {\it base leaf}, if and only if there exists a fixed $g
\in G$ such that, for every $H \in {\rm Sub}_{\rm nor}(G)$,
the coset in $G/H$ listed as the $H$-coordinate of
$\hat{\gamma}$ is $gH$.
We leave it to the reader to check these elementary matters, as
well as to trace through the canonical identifications between
the various descriptions of the solenoid that we have offered
above.
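The reader may also keep in mind the classical genus one analogue of this construction, which lies outside our hyperbolic setting but illustrates all of the structure above: taking the circle $S^1$, with ${\pi}_1(S^1) = {\Bbb Z}$ and universal cover ${\Bbb R}$, the inverse limit of its finite self-coverings is the classical compact solenoid
$$
H_{\infty}(S^1) \; = \; \limproj \, S^1 \; \cong \; {\Bbb R} \times_{\Bbb Z} \widehat{{\Bbb Z}} \, ,
$$
fibering over $S^1$ with Cantor fiber $\widehat{{\Bbb Z}}$; its leaves are the homeomorphic images of the slices ${\Bbb R} \times \{\hat{\gamma}\}$, and each leaf is dense.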
\bigskip
\noindent
{\bf II.2. The Teichm\"uller functor and ${\cal T}_{\infty}$~:} We now apply
the ``Teichm\"uller functor'' to the tower of coverings
parametrized by ${\cal I} (X)$; that produces an inductive system of
Teichm\"uller spaces, and we set~:
$$
{\cal T}_{\infty} \, \equiv \, {\cal T}_{\infty}(X) \, =\, \limind{\cal T} (X_{\alpha})
\leqno(2.11)
$$
This is the direct limit of finite dimensional Teichm\"uller
spaces, where ${\alpha}$ runs through the same directed set
${\cal I}(X)$. The connecting morphisms in the direct system are the
immersions defined in (2.2), and the limit object is called the
{\it universal commensurability Teichm\"uller space}.
This space ${\cal T}_{\infty}$ is a universal parameter space for compact
Riemann surfaces of genus at least two. It is a metric space
with a well-defined Teichm\"uller metric. Indeed, ${\cal T}_{\infty}(X)$ also
carries a natural Weil-Petersson K\"ahler
structure obtained from scaling the
Weil-Petersson pairing on each finite
dimensional stratum ${\cal T} (X_{\alpha})$. (See \cite{BNS},
\cite{BN}, for details.) We will now interpret ${\cal T}_{\infty}(X)$ as the
space of a certain class of complex structures on the universal
hyperbolic solenoid.
Local (topological) charts for the solenoid, ${H}_{\infty}(X)$, are
``lamination charts'', namely homeomorphisms of open subsets of
the solenoid to $\{{\rm disc}\} \times \{{\rm transversal}\}$. Now, by
definition, a {\it complex structure on the solenoid} is the
assignment of a complex structure on each leaf, such that the
structure changes continuously in the transverse (Cantor fiber)
direction. Details may be found in \cite{Su}. When the solenoid
is equipped with a complex structure we say that we have a {\it
complex analytic solenoid}, alternatively called a {\it Riemann
surface lamination}. The space of leaf-preserving isotopy
classes of complex structures on ${H}_{\infty}$ constitutes a complex
Banach manifold --- {\it the Teichm\"uller space of the
hyperbolic solenoid} --- and it is denoted by $\THin = \THinX$
(\cite {Su}, \cite{NS}). In fact, this Banach manifold contains
${\cal T}_{\infty}(X)$, as we explain next, and is actually the Teichm\"uller
metric completion of the direct limit object ${\cal T}_{\infty}(X)$.
(Note \cite{NS}.)
Each point of ${\cal T}_{\infty}(X)$ corresponds to a
well-defined Riemann surface lamination. Indeed, fix a
complex structure on $X_{\alpha}$, with ${\alpha}:X_{\alpha}
\longrightarrow X$ being any member of ${\cal I}(X)$. Then
for every $\beta\in {\cal I}(X)$, with $\beta \geq \alpha$,
we obtain a complex structure on $X_{\beta}$
by pulling back the complex structure on $X_{\alpha}$,
using the pointed (factorizing) covering map
$\sigma: X_{\beta} \longrightarrow X_{\alpha}$.
(The factorization $\beta = \alpha \circ \sigma$, in the pointed
category, uniquely determines $\sigma$.) It can be now verified
that there is a unique complex structure on ${H}_{\infty}(X)$ enjoying
the property that the
natural projection of ${H}_{\infty}(X)$ onto any $X_{\beta}$, where
$\beta \geq \alpha$, is {\it holomorphic} with respect to the
complex structure on $X_{\beta}$ just constructed.
The complex structures so obtained on ${H}_{\infty}(X)$ are more than just
continuous in the transversal direction. They are, in fact,
precisely those complex structures that are transversely
locally constant. This demonstrates that ${\cal T}_{\infty}(X)$ is naturally
a subset of $\THinX$.
\bigskip
\section{The commensurability modular action on ${\cal T}_{\infty}$}
The group of topological self correspondences of $X$, which
arise from undirected cycles of finite pointed coverings
starting at and returning to $X$, gave rise to
the {\it universal commensurability mapping class group} $\MCinX$.
This group acts by holomorphic automorphisms on
${\cal T}_{\infty}(X)$ --- and that is one of the main
themes of this paper.
\bigskip
\noindent
{\bf III.1. The ${H}_{\infty}$ and ${\cal T}_{\infty}$ functors on finite covers~:}\,
We proceed to recall in some detail a chief construction
introduced in \cite{BNS}, and followed up in \cite{BN},
\cite{BNcras}, \cite{BNmrl}. Indeed, a quite remarkable
yet evident fact about the construction of the genus-independent
limit object ${H}_{\infty}$, is that every member of ${\cal I}(X)$, namely
every pointed covering, $p:Y \longrightarrow X$, induces
a natural {\it homeomorphism, mapping the base leaf to the
base leaf}, between the two copies of the universal solenoid
obtained from the two different choices of base surface~:
$$
{H}_{\infty}(p) ~ : ~ {H}_{\infty}(X) ~ \longrightarrow ~ {H}_{\infty}(Y) \, .
\leqno{(3.1)}
$$
In fact, any compatible string of points, one from each covering
surface, representing a point of ${H}_{\infty}(X)$, becomes just such a
compatible string representing an element of ${H}_{\infty}(Y)$
--- simply by discarding the coordinates in all the strata
that did not factor through $p$.
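For orientation, we recall the standard Euler characteristic
count (not needed for the construction itself): if
$p:Y \longrightarrow X$ is an unbranched covering of degree $n$, then
$$
2g_{Y} - 2 \, = \, n\,(2g_{X} - 2) \, ,
$$
so the genus grows strictly along any nontrivial tower of covers;
for instance, an unbranched double cover of a genus two surface
has genus three.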
As for ${\cal T}_{\infty}$, the same cover $p$ also induces a bijective
identification between the two corresponding models of ${\cal T}_{\infty}$
built with these two different choices of base surface~:
$$
{\cal T}_{\infty}(p) ~ : ~ {\cal T}_{\infty}(Y) ~ \longrightarrow ~ {\cal T}_{\infty}(X)
\leqno{(3.2)}
$$
The mapping above corresponds to the obvious order
preserving map of directed sets, ${\cal I}(Y)$ to ${\cal I}(X)$,
defined by $\theta \mapsto p \circ \theta$. The image of ${\cal I}(Y)$
is {\it cofinal} in ${\cal I}(X)$. That induces a natural morphism
between the direct systems that define ${\cal T}_{\infty}(Y)$ and ${\cal T}_{\infty}(X)$,
respectively; the limit map is the desired ${\cal T}_{\infty}(p)$.
That the map ${\cal T}_{\infty}(p)$ is {\it invertible} follows simply
because the pointed coverings with target $Y$ are cofinal
with those having target $X$.
The bijection ${\cal T}_{\infty}(p)$ is easily seen to be a {\it Teichm\"uller
metric preserving biholomorphism}.
Since both the maps ${H}_{\infty}(p)$ as well as ${\cal T}_{\infty}(p)$ are
invertible, it immediately follows that every (undirected) cycle
of pointed coverings starting and ending at $X$ produces~:
(i) a {\it self-homeomorphism}, which preserves the base leaf (as a
set), of ${H}_{\infty}(X)$ on itself;
(ii) a biholomorphic {\it automorphism} of ${\cal T}_{\infty}(X)$.
The above observation, \cite[Section 5]{BNS}, leads one to
define the {\it universal commensurability {\underline{mapping
class}} group of $X$}, denoted $\MCinX$, as the group of
equivalence classes of such undirected cycles of topological
coverings starting and terminating at $X$. Notice that $\MCinX$
is a purely {\it topological} construct, whose definition has
nothing to do with the theory of Teichm\"uller spaces. The
equivalence relation among undirected polygons of pointed
coverings is obtained, as explained both in \cite{BNS} and in
\cite{BN}, by replacing any edge (i.e., a covering) by any
factorization of it, thus allowing us to {\it reroute} through
fiber product diagrams.
Indeed, by repeatedly using appropriate fiber product diagrams,
we know from the papers cited above that any cycle (with
arbitrarily many edges) is equivalent to just a two-edge cycle.
Thus every element of $\MCinX$ arises from a finite topological
``self correspondence'' (two-arrow diagram) on $X$.
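Schematically, the rerouting rests on the fiber product of two
coverings $p_1 : Y_1 \longrightarrow X$ and $p_2 : Y_2 \longrightarrow X$; we record
this standard construction for the reader's convenience~:
$$
W \, = \, \{(y_1 , y_2) \in Y_1 \times Y_2 \, : \,
p_1(y_1) = p_2(y_2)\} \, .
$$
The connected component of $W$ through the base point covers both
$Y_1$ and $Y_2$ via the coordinate projections, so any pair of
edges meeting at $X$ may be replaced by a pair of edges meeting
at that component.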
Fix any such self correspondence given by an arbitrary pair of
pointed, orientation preserving, topological coverings of $X$,
say :
$$
p\, : \, Y \, \longrightarrow \, X
\hspace{.5in} {\rm and} \hspace{.5in}
q\, : \, Y \, \longrightarrow \, X
\leqno{(3.3)}
$$
We have the following induced {\it automorphism} :
$$
A_{(p,q)} \, = \,
{\cal T}_{\infty}({q})\circ {\cal T}_{\infty}(p)^{-1}
\leqno{(3.4)}
$$
of ${\cal T}_{\infty}(X)$. The set of all automorphisms of
${\cal T}_{\infty}(X)$ arising this way constitutes a group of
biholomorphic automorphisms of ${\cal T}_{\infty}(X)$, which is called the
{\it universal commensurability {\underline {modular}} group}
$\CMinX$, acting on ${\cal T}_{\infty}(X)$ as well as on its Banach completion
$\THinX$.
In Theorem 3.14 below we will prove that this natural map from
$\MCinX $ to $\CMinX$ is an {\it isomorphism} of groups.
\bigskip
\noindent
{\bf III.2. The virtual automorphism group of $\pi_{1}(X)$~:}\,
The group of {\it virtual automorphisms} of any group $G$,
${{\rm Vaut}(G)}$, comprises equivalence classes of isomorphisms
between arbitrary finite index subgroups of $G$. To be explicit,
an element of ${{\rm Vaut}(G)}$ is represented by an isomorphism
$a: G_1 \rightarrow G_2$, where $G_1$ and $G_2$ are finite index
subgroups of $G$; another such isomorphism
$b: G_3 \longrightarrow G_4$ is identified with $a$ if and only
if there is a subgroup $G' \subset G_1\cap G_3$ of finite index
in $G$, such that $a$ and $b$ coincide on $G'$.
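As a toy illustration of the definition (for a group very
different from a surface group), consider $G = {\Bbb Z}$. Every
finite index subgroup has the form $n{\Bbb Z}$, and an isomorphism
$n{\Bbb Z} \rightarrow m{\Bbb Z}$ is multiplication by $\pm m/n$; two such
isomorphisms agree on a common finite index subgroup exactly when
these ratios coincide. Hence
$$
{\rm Vaut}({\Bbb Z}) \, \cong \, {\Bbb Q}^{*} \, ,
$$
with composition of virtual automorphisms corresponding to
multiplication of the ratios.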
For us $G$ will always be the fundamental group, $\pi_{1}(X,x)$,
of a closed oriented surface.
Let $\mbox{Vaut}^{+}(G) \subset \mbox{Vaut}(G)$ denote the
subgroup of index two that consists of the orientation
preserving elements. Given a virtual automorphism of the surface
group $G$, it is possible to check whether it is in
$\mbox{Vaut}^{+}(G)$ by looking at the action on the second
(group) cohomology level. We will be dealing only with
the subgroup $\mbox{Vaut}^{+}(G)$.
We recall a proposition from \cite{BN}. For any pointed covering
$p : (Y,y) \rightarrow (X,x)$, the induced monomorphism
${\pi}_1(Y,y) \rightarrow {\pi}_1(X,x)$ will be denoted
by ${\pi}_1(p)$.
\medskip
\noindent
{\bf Proposition 3.5.} \cite[Proposition 2.10]{BN} \,
{\it The group ${\rm Vaut}^{+}({\pi}_{1}(X))$ is naturally
isomorphic to ${MC}_{\infty}(X)$. The element of $\MCinX$ determined by
the pair of covers $(p,q)$ as in (3.3), corresponds to the virtual
automorphism represented by the isomorphism: ${\pi_{1}(q)} \circ
{\pi_{1}(p)}^{-1}: H \longrightarrow K$, where $H={\rm Image}({\pi_1(p)}),
~ K= {\rm Image}({\pi_1(q)})$.}
\medskip
Let $\Mob$ denote the group of (orientation preserving)
diffeomorphisms of $S^1$ defined by the M\"obius transformations
that map $\Delta $ onto itself. Recall that for any Fuchsian
group $F$ (of the first kind), the Teichm\"uller space ${\cal T}(F)$
consists of equivalence classes of all monomorphisms
$\alpha : F \longrightarrow \Mob$ with discrete image. Two monomorphisms,
say $\alpha$ and $\beta$, are in the same equivalence class if
$\beta = {\rm Conj}(A) \circ \alpha$, where ${\rm Conj}(A)$
denotes the inner automorphism of $\Mob$ achieved by an arbitrary
M\"obius transformation $A$ in that group. In fact, the
monomorphism $\alpha : F \longrightarrow \Mob$, representing any point of
${\cal T}(F)$ can be chosen to be the unique ``Fricke-normalized''
one in its class --- see \cite{Abik}, \cite{N}, and III.5 below.
We will need to describe explicitly the action of
${MC}_{\infty}$ on ${\cal T}_{\infty}$, when we model ${\cal T}_{\infty}$ via any
co-compact, torsion free, {\it Fuchsian group} $\Gamma $. Namely :
$$
{\cal T}_{\infty}(\Gamma) \, = \, \limind {{\cal T}(H)}
\leqno(3.6)
$$
the inductive limit being over the directed set ${\rm Sub}(\Gamma )$.
The connecting maps, ${\cal T}(H_1) \rightarrow {\cal T}(H_2)$, whenever
$H_2 \subset H_1 (\subset \Gamma )$, are obvious.
Fix an element $[\lambda ] \in {\rm Vaut}^{+}(\Gamma)$ represented by the
isomorphism $\lambda :H \longrightarrow K$, as in the setting above.
The present aim is to describe the automorphism~:
$$
[\lambda ]_* \, : \, {\cal T}_{\infty}(\Gamma) \, \longrightarrow \, {\cal T}_{\infty}(\Gamma)
$$
Now, any isomorphism $\lambda $ of a Fuchsian group $H$ onto
a Fuchsian group $K$ determines the following natural
``allowable isomorphism'' between their Teichm\"uller spaces~:
$$
{\cal T}(\lambda ) \, : \, {\cal T}(K) \, \longrightarrow \, {\cal T}(H)
\leqno(3.7)
$$
defined by precomposition of monomorphisms by $\lambda $. See
\cite[Section 2.3.12]{N} for the details.
Evidently, $\lambda $ induces an order-preserving isomorphism between
the directed sets ${\rm Sub}(H)$ and ${\rm Sub}(K)$. The
definitions of ${\cal T}_{\infty}(H)$ and ${\cal T}_{\infty}(K)$ as inductive limits
proceed over these two directed sets, respectively.
Moreover, if ${\cal T}(Z)$ is the Teichm\"uller space of any
such group $Z$, where $Z \in {\rm Sub}(H)$, then there is
the corresponding allowable isomorphism, induced by $\lambda $,
between the following two Teichm\"uller spaces~:
$$
\tau_{\lambda }^{Z}\, : \, {\cal T}(Z) \, \longrightarrow \, {\cal T}(\lambda (Z))
\leqno(3.8)
$$
The collection of all these allowable isomorphisms, (as $Z$ runs
through ${\rm Sub}(H)$), defines {\it a morphism of direct
systems}, thus resulting in a map, say ${\cal T}_{\infty}(\lambda )$, mapping
isomorphically ${\cal T}_{\infty}(H)$ onto ${\cal T}_{\infty}(K)$.
But for any finite index subgroup $G \subset \Gamma$, the cofinality
of ${\rm Sub}(G)$ in ${\rm Sub}(\Gamma)$ certainly gives us
an isomorphism of the corresponding limit Teichm\"uller spaces~:
$$
I_{G \subset \Gamma }: {\cal T}_{\infty}(G) \longrightarrow {\cal T}_{\infty}(\Gamma )
$$
It follows, by tracing through the definitions, that the
given $[\lambda ] \in {\rm Vaut}^{+}(\Gamma)$ acts on ${\cal T}_{\infty}(\Gamma)$ by the
commensurability modular automorphism~:
$$
[\lambda ]_* \, = \, {I_{K \subset \Gamma }} \circ {{\cal T}_{\infty}(\lambda )} \circ
I^{-1}_{H \subset \Gamma }
\leqno(3.9)
$$
\bigskip
\noindent
{\bf III.3. Representation of $\mbox{${{\rm Vaut}^{+}(\pi_1(X))}$}$ within $\Hqs$~:}\, We
will utilize the result, \cite{BN}, that ${\rm
Vaut}(\pi_{1}(X))$ admits certain natural representations in the
homeomorphism group of the unit circle $S^1$, by the theory of
{\it boundary homeomorphisms}. The general theory of
quasisymmetric boundary homeomorphisms that arise in
Teichm\"uller theory can be found, for example, in \cite[Chapter
2]{N}.
Let us fix any Riemann surface structure on $X$.
Then the universal covering ${\widetilde X}$ can be conformally
identified as the unit disc $\Delta \subset {\Bbb C}$,
with the base point being mapped to $0 \in \Delta$; the
group of covering transformations, $G$, then becomes a co-compact
torsion free Fuchsian group, say $\Gamma$:
$$
\Gamma \, \subset \, \Mob \, \equiv \, PSU(1,1) \, \equiv \,
{\mbox{Aut}}(\Delta)
$$
By $\Mob$ we simply mean the restrictions of the
holomorphic automorphisms of the unit disc ($PSU(1,1)$)
to the boundary circle.
Let $[\rho ] \in {\rm Vaut}^{+}(\Gamma)$ be represented by the group isomorphism
$\rho :H \longrightarrow K$, where $H$ and $K$ are Fuchsian subgroups of finite
index within $\Gamma$. A description of the boundary
homeomorphism associated to this virtual automorphism
is as follows~: Consider the natural map,
${\sigma }_{\rho }$, that $\rho $ defines from the orbit of the origin (= $0
\in \Delta $) under $H$ to the orbit of $0$ under $K$. In other words,
the map
$$
\sigma _\rho ~ : ~ H(0) ~ \longrightarrow ~ K(0)
$$
is defined by $h(0) \longmapsto \rho (h)(0)$. But each orbit under
these co-compact Fuchsian groups $H$ and $K$ accumulates
everywhere on the boundary $S^1$. Therefore, it follows that the
map $\sigma _\rho $ extends by continuity to define a homeomorphism of
$S^1$. That homeomorphism is quasisymmetric, and it is the one
that we naturally associated to the element $[\rho ]$ of
${\rm Vaut}^{+}(\Gamma)$ --- see \cite{BN}.
Thus, we have a faithful representation
$$
\Sigma \, : \, \mbox{${{\rm Vaut}^{+}(\pi_1(X))}$} \, \longrightarrow \, \Hqs \, .
$$
Here $\Hqs$ denotes the group of orientation preserving
quasisymmetric homeomorphisms of the circle. The image of
$\Sigma$ is exactly the {\it group of virtual normalizers of
$\Gamma $ amongst quasisymmetric homeomorphisms}. By this we mean
$$
{\rm Vnorm}_{\rm q.s.}(\Gamma ) \,=\, \{f \in \Hqs: f~
\hbox{conjugates some finite index}
\leqno(3.10)
$$
$$
~~~~~~~~~~~~~~~~~~~~~~\hbox{subgroup of} ~\Gamma ~
\hbox{to another such subgroup of} ~\Gamma \}
$$
See \cite{BN} for details.
\smallskip
\noindent
{\it Remark :} \, This faithful copy, (3.10), of $\MCinX$
demonstrates that the normalizer in $\Hqs$ of every
finite index subgroup of $\Gamma $ sits naturally embedded in
${\rm Vnorm}_{\rm q.s.}(\Gamma ) \cong \MCinX$. Any such
normalizer, say $N_{q.s.}(H)$, for $H \in {\mbox{Sub}}(\Gamma )$,
is precisely the ``extended modular group" for the Fuchsian
group $H$, as defined by Bers \cite{B2}. As $H$ ranges over all
the finite index subgroups of $\Gamma$, these extended modular
groups sweep through the ``mapping class like"
(\cite{BN}, \cite{Od}) elements of $\MCinX$.
\bigskip
\noindent
{\bf III.4. ${MC}_{\infty}$ as subgroup of Bers' universal modular
group~:} The representation of $\mbox{${{\rm Vaut}^{+}(\pi_1(X))}$}$ above allows us to
consider the action of ${MC}_{\infty}$ on ${\cal T}_{\infty}$ via the usual type of
right translations by quasisymmetric homeomorphisms, as is
standard for the classical action of the universal modular group
on the universal Teichm\"uller space.
Recall that the {\it Universal Teichm\"uller space} of
Ahlfors-Bers, ${\cal T}(\Delta)$, is the homogeneous space of right cosets
(i.e., $\Mob$ acts by post-composition):
$$
{\cal T}(\Delta) \, := \, {\Mob}\backslash {\Hqs}
\leqno(3.11)
$$
The coset of $\phi \in \Hqs$, viz. ${\Mob}\,\phi$, will be denoted
by $[\phi]$. There is a natural base point $[Id] \in {\cal T}(\Delta)$ given
by the coset of the identity homeomorphism $Id :S^1 \rightarrow S^1$.
The Teichm\"uller space of an arbitrary Fuchsian group $G$
embeds naturally in ${\cal T}(\Delta)$ as the cosets of those quasisymmetric
homeomorphisms that are compatible with $G$. Compatibility
(\cite{B2}) of $\phi$ with $G$ means that
${\phi G {\phi}^{-1}} \subset \Mob$.
Since ${\cal T}(\Delta)$ is a homogeneous space for $\Hqs$, the group $\Hqs$
acts (in fact, by biholomorphic automorphisms) on this complex
Banach manifold, ${\cal T}(\Delta)$. The action is by right translation
(i.e., by precomposition). In other words, each $f \in \Hqs$
induces the automorphism~:
$$
f_{*}:{\cal T}(\Delta) \, \longrightarrow \, {\cal T}(\Delta)\,~;~~~f_{*}([\phi])~=~[\phi \circ f]
\leqno(3.12)
$$
This action on ${\cal T}(\Delta)$ is classically called the universal modular
group action (see \cite{B2}, \cite{B1}, or \cite[Chapter 2]{N}).
Let us note here that {\it every} non-trivial element of $\Hqs$,
including all the non-identity elements of the conformal group
$\Mob$, acts {\it non-trivially} on the homogeneous space ${\cal T}(\Delta)$.
Of course, the set of universal modular transformations that
keep the base point fixed are precisely those that arise from
$\Mob$.
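Indeed, at the base point the verification is immediate~: for
$f \in \Hqs$ one has
$$
f_{*}([Id]) \, = \, [Id \circ f] \, = \, [f] \, ,
$$
and $[f] = [Id]$ precisely when $f$ lies in the coset $\Mob$,
i.e., when $f$ is itself a M\"obius transformation.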
Having fixed the Fuchsian group $\Gamma$ uniformizing the
reference compact Riemann surface $X$, we see that a copy of
the universal commensurability Teichm\"uller space, ${\cal T}_{\infty}(X)$,
appears embedded in ${\cal T}(\Delta)$ as follows~:
$$
{\cal T}_{\infty}(X)~~\cong~~{\cal T}_{\infty}(\Gamma ) = \{[\phi] \in {\cal T}(\Delta): \phi \in \Hqs
~\hbox{is compatible}
\leqno(3.13)
$$
$$
~~~~~~~~~~~~~~~~~~~~~~\hbox{with some finite index subgroup of}
~\Gamma \}
$$
Indeed, one notes that ${\cal T}_{\infty}(\Gamma)$ is precisely the union, in ${\cal T}(\Delta)$,
of the Teichm\"uller spaces of all the finite index subgroups
of $\Gamma $. The reader will observe that (3.13) is simply (3.6)
--- but now embedded within the ambient space ${\cal T}(\Delta)$.
This embedded copy of ${\cal T}_{\infty}$ (see \cite{NS}) was called the
``$\Gamma$-tagged'' copy. In connection with the discussion (see
II.2 above) of the full space of complex structures on the
solenoid, it is relevant to point out that the topological
{\it closure} in ${\cal T}(\Delta)$ of any such $\Gamma$-tagged copy of
${\cal T}_{\infty}$ is a model of $\THin$.
Finally, we will need the important fact (we refer again to
\cite{BN}), that the action of $\MCinX$ on ${\cal T}_{\infty}(X)$ {\it coincides}
with the action, by right translations, of the subgroup of the
universal modular group corresponding to
${\rm Vnorm}_{\rm q.s.}(\Gamma ) \subset \Hqs$. In fact, the
universal modular transformations of ${\cal T}(\Delta)$ induced by the members
of ${\rm Vnorm}_{\rm q.s.}(\Gamma )$
preserve the subset ${\cal T}_{\infty}(\Gamma)$, and, under
the canonical identification of ${\cal T}_{\infty}(X)$ with ${\cal T}_{\infty}(\Gamma)$, these
transformations on ${\cal T}_{\infty}(\Gamma)$ correspond to the universal
commensurability modular transformations acting on ${\cal T}_{\infty}(X)$.
\bigskip
\noindent
{\bf III.5. ${MC}_{\infty}$ acts effectively on ${\cal T}_{\infty}(X)$~:}\, We are ready
to prove a very basic fact that will be important in what follows~:
\medskip
\noindent
{\bf Theorem 3.14.}\,
{\it $\mbox{${{\rm Vaut}^{+}(\pi_1(X))}$}$ acts effectively on ${\cal T}_{\infty}(X)$. In other words, the
natural homomorphism $\MCinX \longrightarrow \CMinX$ has trivial kernel.}
\medskip
To prove the theorem we need the following lemma~:
\medskip
\noindent
{\bf Lemma 3.15.}\, {\it Assume that the genus of $X$ is at
least three. Suppose that $p,q : (Y,y) \longrightarrow (X,x)$ are
any two pointed unbranched finite coverings such that the induced
monomorphisms ${\pi}_{1}(p)$ and $\pi_{1}(q)$ are unequal,
and further that they remain unequal even after any inner
conjugation in either the domain or the target group is applied.
Then the corresponding induced embeddings ${\cal T}(p)$ and ${\cal T}(q)$,
(of ${\cal T}(X)$ into ${\cal T}(Y)$), must be unequal.}
\noindent
{\it Proof of Lemma 3.15~:} Let $X$ have genus $g$, $g \geq 3$.
Fix any complex structure on $X$, and let $X={\Delta}/{\Gamma }$
where $\Gamma $ is the uniformizing Fuchsian group (isomorphic to
$\pi_1(X,x)$). Let $H$ and $K$ denote the images of the
monomorphisms $\pi_{1}(p)$ and $\pi_{1}(q)$, respectively.
Denote by $\lambda $:
$$
\lambda \, := \, {\pi_{1}(q)} \circ {\pi_{1}(p)}^{-1}\, : \, H \,
\longrightarrow \, K \leqno(3.16)
$$
the non-trivial isomorphism of $H$ onto $K$ that is given to us.
By assumption it represents some non-identity element of ${\rm Vaut}^{+}(\Gamma)$.
By Nielsen's theorem, \cite{Macb}, there exists a based
diffeomorphism, say $\Theta$, of $\Delta /H$ onto $\Delta /K$
whose action on $\pi_1$ is given by $\lambda $.
Lift this diffeomorphism to the universal covering, $\Delta$,
and consider the homeomorphism, say $\theta:S^1 \rightarrow S^1$ defined
by the boundary action of the lift. The map $\theta$ is exactly the
quasisymmetric boundary homeomorphism that we associated
--- see III.3 --- to the given
element $[\lambda ] \in {\rm Vaut}^{+}(\Gamma)$. In fact, the isomorphism $\lambda $ is
realized as conjugation by $\theta$ of members of $H$:
$$
\lambda (h) \, = \, \theta \circ h \circ {\theta}^{-1}\,,~~~h \in H \,.
\leqno(3.17)
$$
Note that if we extend $\theta$ as a quasiconformal
homeomorphism of $\Delta$ by using the conformally natural
Douady-Earle extension operator, then the extension (we will
still call it $\theta$) will satisfy the above equation not only
on the boundary circle but throughout the unit disc.
We recall that there is a unique ``Fricke-normalized''
monomorphism $\alpha : F \longrightarrow \Mob$, representing any point
$[\alpha]$ of the Teichm\"uller space ${\cal T}(F)$, (see section
III.2 above), once we have chosen standard generators
$\{A_1, B_1, A_2, B_2, \cdots , A_g, B_g\}$
for the genus $g$ Fuchsian group $F$. More
precisely, we want to normalize the positions of three of the
four fixed points (on the unit circle) for the hyperbolic
M\"obius transformations that get assigned by the monomorphism
to $B_g$ and $A_g$. For exact details see \cite[Section
2.5.2]{N} or \cite[Chapter II]{Abik}. Such a normalization
eliminates the $3$-parameter M\"obius ambiguity in identifying
Teichm\"uller equivalent monomorphisms.
Now, since $H$ and $K$ are subgroups of $\Gamma$, we
have natural embeddings between the Teichm\"uller
spaces given by restricting the monomorphisms of
$\Gamma $ into $\Mob$ to the respective subgroups~:
$$
E_H: {\cal T}(\Gamma ) \longrightarrow {\cal T}(H) ~~~~ \mbox{and} ~~~~
E_K: {\cal T}(\Gamma ) \longrightarrow {\cal T}(K)\, .
\leqno(3.18)
$$
Let us identify the surface $Y$ with $\Delta /H$; then the
embeddings between Teichm\"uller spaces ${\cal T}(\Gamma ) \rightarrow {\cal T}(Y)$
that we desire to compare are given by:
$$
{\cal T}(p) = E_H:{\cal T}(\Gamma ) \longrightarrow {\cal T}(H), ~~~~
{\cal T}(q) = {\cal T}(\lambda ) \circ E_K:{\cal T}(\Gamma ) \longrightarrow {\cal T}(H)
\leqno(3.19)
$$
where ${\cal T}(\lambda )$ is the allowable isomorphism between Teichm\"uller
spaces described in (3.7). The lemma will be demonstrated by
proving that the two mappings above are unequal.
By assumption, the map $\lambda $ which acts on $H$ as follows~:
$h \mapsto {\theta} \circ {h} \circ {\theta}^{-1}$, is not the
identity map on $H$. Therefore, let $h_1$ be a primitive element
of $H$ which is not equal to $\lambda (h_1)$. Let us choose a set of
$2g$ generators $\{A_1, B_1, A_2, B_2, \cdots , A_g, B_g\}$ for
$\Gamma$ such that $A_1=h_1$, satisfying the standard single
relation $\Pi{[A_j,B_j]} =1$.
Now let the Fricke-normalized monomorphism
$\sigma : \Gamma \longrightarrow \Mob$ represent an arbitrary point of ${\cal T}(\Gamma )$.
Suppose, to obtain a contradiction, that for
{\it every} such $\sigma $ we have:
$$
\sigma ({\theta} \circ {h} \circ {\theta}^{-1}) \, = \, \sigma (h)
$$
for all $h \in H$. But, in order to produce a normalized
monomorphism $\sigma $, we can (essentially arbitrarily) assign
hyperbolic elements of $\Mob$ to the first $2g-3$ generators of
$\Gamma$, and then fill in for the last three generators
judiciously in order to maintain the relation in the group, and
to keep valid the Fricke normalization. (Vide \cite[p. 134
ff]{N}.) Since $\Gamma $ is a surface group of genus at least three,
(so that it has six or more generators in its standard
presentation), it is easy to produce some (in fact, infinitely
many) normalized monomorphisms $\sigma $ so that $\sigma (h_1) \neq \sigma
(\lambda (h_1))$. This completes the proof of the lemma.
$\hfill{\Box}$
\medskip
\noindent
{\it Remark~:} We avoid genus $2$, because if we take $p$ and
$q$ to be the hyperelliptic involution and the identity
homeomorphism, respectively, on $Y=X$, a surface of genus two,
then, in fact, ${\cal T}(p) = {\cal T}(q)$. This is the
well-known non-effectiveness of the action of the mapping class
group in genus $2$ (vide \cite[Section 2.3.7]{N}). But, in the
context of compact hyperbolic Riemann surfaces, that is the one
and only case when a nontrivial mapping class group element
induces the identity on the Teichm\"uller space.
\medskip
\noindent
{\it Proof of Theorem 3.14~:} First let us note the crucial fact
that the limit constructions we are pursuing are independent of
the genus of the base surface. In fact, if $\alpha : X_{\alpha}
\longrightarrow X$ is a covering in ${\cal I}(X)$, then $\alpha$ sets
up a natural isomorphism between the pairs~:
$$
({\cal T}_{\infty}(X), MC_{\infty}(X)) ~~~ \mbox{and} ~~~
({{\cal T}}_{\infty}(X_{\alpha}), MC_{\infty}(X_{\alpha}))
\leqno(3.20)
$$
Therefore, to understand the action of the universal
commensurability mapping class group, we may, and do,
take $X$ to be of genus greater than or equal to three.
In view of the description of $\MCinX$
as the group ${\rm Vaut}^{+}(\Gamma)$ given in Proposition 3.5, a copy of the
group $\Gamma $ itself sits embedded inside $\MCinX$. Indeed, each
element of $\Gamma $ determines a virtual automorphism of $\Gamma $ by
inner conjugation. Let us first take care of these elements of
${\rm Vaut}^{+}(\Gamma)$, which play a rather special r\^ole.
Given any non-identity element $\gamma \in \Gamma $, we utilize the
residual finiteness of the surface group $\Gamma $ to find a finite
index subgroup $H \subset \Gamma $ so that $\gamma$ is {\it not} in
$H$. Then in the direct limit construction of ${\cal T}_{\infty}(\Gamma)$, it
follows easily that the automorphism of ${\cal T}_{\infty}(\Gamma)$ arising from
$\gamma$ will already act nontrivially on the stratum ${\cal T}(H)$.
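Concretely, residual finiteness provides a surjection
$q: \Gamma \longrightarrow F$ onto a finite group with $q(\gamma ) \neq 1$;
one may then take, as in the standard construction,
$$
H \, = \, \ker (q) \, , \qquad [\Gamma : H] \, = \, |F| \, < \, \infty \, ,
\qquad \gamma \notin H \, .
$$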
It is also clear by a similar argument that every
non-identity {\it mapping class like} element of $\MCinX$,
(see the Remark following (3.10), and \cite{BN}),
will act non-trivially on ${\cal T}_{\infty}(X)$. We have already
disposed of members of $\Gamma$ itself, therefore, let
the element under scrutiny be given by
$\sigma \in {{\rm Vnorm}_{\rm q.s.}(\Gamma ) \backslash \Gamma}$.
By assumption $\sigma $ is mapping class like --- one
therefore sees easily that it must preserve some appropriate
stratum ${\cal T}(X_{\alpha})$ (as a set), and will act as the
standard modular transformation on that Teichm\"uller space.
But the classical genus $g$ mapping class group, say $MC_g$,
is known to act effectively, (see \cite{B2},
\cite[Chapter 2.3]{N}), on the genus $g$ Teichm\"uller
space ${\cal T}_g$, for every $g \geq 3$. That takes care of
$\sigma $. Actually, by essentially the above argument, we
can see that every member of
${\rm Vnorm}_{\rm q.s.}(\Gamma ) \cap \Mob$ acts non-trivially
on ${\cal T}_{\infty}(X)$.
We now come to the interesting case when the element of
$\MCinX$ being investigated is {\it not} of the above types.
Take therefore a nontrivial element of
$\MCinX$ determined by a self correspondence $(p,q)$, namely
by the two coverings $p$ and $q$ from $(Y,*)$ onto $(X,*)$,
as in (3.3). The condition on the element of $\MCinX$ so
determined implies that the hypothesis of Lemma 3.15 may
be assumed satisfied.
Let $t \in {\cal T}_{\infty}(X)$ be a point that is represented as a Riemann
surface $X_\mu$, ($\mu$ being a complex structure on $X$), in
the base stratum ${\cal T}(X)$. Remember that in the direct limit
construction of ${\cal T}_{\infty}(X)$ (over the directed set ${\cal I}(X)$), there
are different strata, each corresponding to a copy of ${\cal T}(Y)$, but
tagged by every distinct choice of finite pointed covering map
$Y \longrightarrow X$. Let us agree to denote the stratum ${\cal T}(Z)$
corresponding to any such pointed covering $r:Z \longrightarrow X$, by
${\cal T}(Z)_r$.
Thus the point $t = [X_\mu]$ may be represented in the stratum
${\cal T}(Y)_p$ as $Y_{p^{*}{\mu}} \in {\cal T}(Y)_p$, (and, of course, also as
$Y_{q^{*}{\mu}} \in {\cal T}(Y)_q$). Here, in self-evident notation, we
are writing $p^{*}{\mu}$ for the complex structure on $Y$
obtained as the pull back of $\mu$ via $p$.
Now from the work in sections III.1 and III.2 we note that
the automorphism $A_{(p,q)}$ of ${\cal T}_{\infty}(X)$ determined by the
self correspondence $(p,q)$ on $X$ acts as follows~:
take any point $t = [Y_{q^{*}{\mu}} \in {\cal T}(Y)_q]$; then
$$
A_{(p,q)}(t) \, = \, [Y_{p^{*}{\mu}} \in {\cal T}(Y)_q]
\leqno(3.21)
$$
is valid.
The point of the equation (3.21) is that we have arranged both
the element $t$, {\it as well as} its image under the
$\MCinX$-automorphism, $A_{(p,q)}$, to be represented in one and
the same stratum. We deduce immediately that if $t= A_{(p,q)}(t)$,
for each $t$ coming from the base stratum ${\cal T}(X) (\subset {\cal T}_{\infty}(X))$,
then the mappings ${\cal T}(p)$ and ${\cal T}(q)$ coincide (as embeddings of
${\cal T}(X)$ into ${\cal T}(Y)$). Thus Lemma 3.15 is contradicted, and we are
through. Notice that we have actually proved that each
$A_{(p,q)}$ of this type is {\it already non-trivial when
restricted to the base stratum}.
$\hfill{\Box}$
\medskip
\noindent
{\it Remark 3.22~:} In the context of the model ${\cal T}_{\infty}(\Gamma)$ of
${\cal T}_{\infty}(X)$, which we exhibited as an embedded complex analytic
``ind-space'' within the Ahlfors-Bers universal Teichm\"uller
space ${\cal T}(\Delta)$, the result of Theorem 3.14 asserts that each one of
the universal modular transformations of ${\cal T}(\Delta)$ arising from any
non-trivial member of ${\rm Vnorm}_{\rm q.s.}(\Gamma )$ must move
some point of ${\cal T}_{\infty}(\Gamma)$. A little reflection shows that
although it is easy to create some arbitrary quasisymmetric
homeomorphism $\phi$ such that $[\phi]$ is actually moved (by
any given universal modular transformation), it is not quite
trivial to produce such a $\phi$ with the extra property that it
is compatible with the Fuchsian group $\Gamma $ [in the sense that it
conjugates some finite index subgroup of $\Gamma $ to again such a
subgroup].
\bigskip
\section{Isotropy subgroups of ${MC}_{\infty}$}
Fix an arbitrary point $t \in {{\cal T}}_{\infty}(X)$. We want to
study the {\it stabilizer subgroup} at this point of the
universal commensurability modular action.
Utilizing again the observation, (3.20), above regarding the
natural isomorphism of pairs, it is evident that we lose no
generality by assuming that the point $t$ is already represented
in the base stratum ${\cal T}(X)$.
Therefore, in this section, $X = X_{\mu}$ will be a {\it Riemann
surface}, with the complex structure $\mu$. The universal
covering ${\widetilde X}$ is then identified biholomorphically with $\Delta $,
and we let $G$ denote the Fuchsian group uniformizing $X$.
\bigskip
\noindent
{\bf IV.1. Commensurable subgroups of $\Mob$~:}\, Two subgroups,
say $H$ and $K$, of $\mbox{Aut}(\Delta) \equiv \Mob$ are called
{\it commensurable} if $H\cap K$ is of finite index in both $H$
and $K$. We define the {\it commensurability automorphism
group} for the Riemann surface $X = \Delta/G$, denoted as
${\rm ComAut}(X) \equiv {\rm ComAut}(\Delta/G)$ by
setting~:
$$
{\rm ComAut}(X) \, \equiv \,
\{g\in {\mbox{Aut}}(\Delta) \vert ~~ g{G}g^{-1} ~~
\mbox{and} ~~ G ~~ \mbox{are commensurable} \}
\leqno{(4.1)}
$$
This group, which is the commensurator of the Fuchsian group
$G$, will be identified by us as arising from the finite {\it
holomorphic} self correspondences of the Riemann surface $X$.
Namely, the members of $\mbox{\rm ComAut}(X)$ appear from undirected cycles of
{\it holomorphic} covering maps that start and end at $X$.
\bigskip
\noindent
{\bf IV.2. Isotropy in ${MC}_{\infty}$ and commensurators~:}\,
Let the point of ${{\cal T}}_{\infty}(X)$ represented by the Riemann
surface $X = X_{\mu}$ be denoted by $[X]$.
\medskip
\noindent
{\bf Theorem 4.2.~(a)}\, {\it The subgroup ${\rm ComAut}(X)$ of
${\rm Aut}(\Delta)$ is the virtual normalizer of $G$
among the M\"obius transformations of $\Delta $. Namely~:}
$$
{\rm ComAut}(X)~
=~{\mbox{Vnorm}}_{{\mbox{Aut}}(\Delta )}(G)
=~{\mbox{Aut}}(\Delta ) {\bigcap} {\mbox{Vnorm}}_{\rm q.s.}(G)
$$
\noindent
{\bf (b)}\, {\it The group ${\mbox{\rm ComAut}}(X)$
is naturally isomorphic to the isotropy subgroup at $[X]$
for the action of $\MCinX$ on ${\cal T}_{\infty}(X)$.}
\medskip
\noindent
{\it Proof.} Part (a)~: Suppose $\gamma \in \mbox{\rm ComAut}(X)$. Let
$C_{\gamma }: \Mob \rightarrow \Mob$ denote the inner conjugation in
$\Mob$ given by $\gamma $, i.e.,
$C_{\gamma }(A) = {\gamma } \circ {A} \circ {\gamma }^{-1}$. Set
$H = G \cap {C_{\gamma }}^{-1}(G)$
and
$K = G \cap {C_{\gamma }}(G)$. It is evident that $\gamma $
will conjugate $H$ onto $K$, and it is easily proved that
both $H$ and $K$ are finite index subgroups of $G$. This shows
that $\mbox{\rm ComAut}(X)$ lies in the virtual normalizer of $G$ in the
group $\Mob$.
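To spell out the finite index claim: applying $C_{\gamma}$ to
$H$ gives $C_{\gamma }(H) = {\gamma}{G}{\gamma}^{-1} \cap G = K$,
so that
$$
[G : H] \, = \, [{\gamma}{G}{\gamma}^{-1} : K] \, < \, \infty \, ,
$$
while $[G : K] < \infty$ holds directly; both finiteness statements
are exactly the commensurability of ${\gamma}{G}{\gamma}^{-1}$
and $G$.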
Conversely, if a finite index subgroup $H \subset G$ is carried,
under conjugation by some M\"obius transformation
$\rho $, to another finite index subgroup $K \subset G$,
then it is an easy exercise to demonstrate that
${\rho }{G}{\rho }^{-1}$ and $G$ are commensurable. $\Box$
\medskip
\noindent
Part (b): To prove part (b) we choose to model
the universal commensurability Teichm\"uller space, ${\cal T}_{\infty}(X)$,
as the subset ${\cal T}_{\infty}(G)$ of the universal Teichm\"uller space ${\cal T}(\Delta)$,
(we explained this in section III.4 above). Now, the
identifying isomorphism ${\cal T}_{\infty}(X) \rightarrow {\cal T}_{\infty}(G)$ maps the point
$[X] \in {\cal T}_{\infty}(X)$ to the base point $[1] \in {\cal T}_{\infty}(G)$. Thus the
problem of identifying the stabilizer of the point $[X]$ in
the commensurability modular action $\CMinX$ is {\it
equivalent} to the problem of identifying the subgroup
of ${\rm Vnorm}_{\rm q.s.}(G)$ that fixes, (in its action by
universal modular transformation on ${\cal T}(\Delta)$), the base
point $[1]$. But, as explained in III.4, the only universal
modular transformations that keep $[1]$ fixed arise from $\Mob$.
Consequently, the stabilizer subgroup in $\CMinX$ of the point
$[X]$ is canonically identified with the intersection of
${\rm Vnorm}_{\rm q.s.}(G)$ with $\Mob$. By part (a) above, this
intersection is exactly the commensurator of the Fuchsian group
$G$. The proof of the theorem is finished.
$\hfill{\Box}$
\medskip
\noindent
{\bf $\mbox{\rm ComAut}(X)$ and holomorphic circuits of coverings ~:}
We will now delineate the crucial point that we have
mentioned already in the Introduction; namely, that
the isotropy subgroup $\mbox{\rm ComAut}(X)$ arises from undirected cycles
of {\it holomorphic} covers that start and end at $X$.
We retain the notations of part (a) of the proof of
Theorem 4.2. Thus choose any M\"obius transformation
$\gamma: \Delta \rightarrow \Delta $ that is a member of $\mbox{\rm ComAut}(\Delta /G)$.
We know that there exist two finite index subgroups $H$ and $K$
in $G$ such that the conjugation map $C_{\gamma}$,
carries $H$ isomorphically onto $K$. It follows
that $\gamma$ descends to a {\it biholomorphic isomorphism}, say
$$
\gamma_{\star} : Y \longrightarrow Z
\leqno(4.3)
$$
between the compact Riemann surfaces $Y = \Delta /H$ and $Z = \Delta /K$.
Let $\alpha : Y \rightarrow X$ and $\beta : Z \rightarrow X$ denote the
holomorphic finite covers corresponding to the group
inclusions $H \subset G$ and $K \subset G$. Then the
chosen element $\gamma$ of $\mbox{\rm ComAut}(X)$ corresponds to the
{\it circuit of holomorphic covering morphisms} given
by $\alpha $ (with arrow reversed), followed by $\gamma_{\star}$,
followed by $\beta $.
Thus, $\gamma$ in $\mbox{\rm ComAut}(X)$ is represented by the {\it holomorphic
two-arrow diagram} arising from the two holomorphic coverings
$p = \alpha $ and $q = \beta \circ {\gamma}_{\star}$ from the Riemann
surface $Y$ onto the Riemann surface $X$. This should
be carefully compared with (3.3) of Section 3.
It is interesting to note from the above, that the element
of the stabilizer subgroup in $\MCinX$, arising from any such
finite circuit of holomorphic and unramified coverings,
is well-defined without reference to base points on the
Riemann surfaces involved. A little reflection shows that
this phenomenon is due to the well-known {\it rigidity} of
holomorphic covering maps between compact hyperbolic Riemann
surfaces.
\bigskip
\noindent
{\bf IV.3. The commensurator of a generic Fuchsian group ~:}
We have now described the isotropy subgroup of
the commensurability modular action at every
point $[Y] \in {{\cal T}}_{\infty}(X)$, where $Y$ is any
compact hyperbolic Riemann surface, as precisely the
commensurator of the Fuchsian group, $G$, uniformizing $Y$.
Note therefore that this isotropy is always {\it infinite} ---
it always contains a copy of the fundamental group
$\pi_{1}(Y)$. In fact, $G$ is contained in its normalizer,
$N(G)$ (in the M\"obius group), while $N(G)$ in its turn
is contained in the virtual normalizer of $G$ --- namely :
$$
G \subset N(G) \subset {\mbox{\rm Comm}}(G)
$$
Clearly, ${\mbox{\rm Comm}}(G)$ contains the normalizer
subgroup in the M\"obius group, say $N(H)$, for any finite index
subgroup $H \subset G$. The union of these $N(H)$, over all
subgroups $H \in {\mbox{Sub}}(G)$, constitutes the {\it
mapping class like} members of ${\mbox {\rm Comm}}(G) = \mbox{\rm ComAut}(Y)$.
Now, $G$ is of course normal in $N(G)$, and it is well-known
(as well as rather easy to see), that the quotient
$N(G)/G = {\mbox{\rm HolAut}}(X)$ is the group of
usual holomorphic automorphisms of $X$. This quotient is
always a finite group for any compact hyperbolic Riemann surface;
in fact, by the Hurwitz bound, $\mbox{order}(N(G)/G) \le 84(g-1)$. Indeed,
${\mbox{\rm HolAut}}(X)$
is non-trivial only on some union of lower dimensional
subvarieties in each moduli space ${\cal M}_g$, for $g \ge 3$.
The {\it isotropy subgroup} at any $[X] \in {\cal T}_g$, of the
action of the classical Teichm\"uller modular group $MC_g$ on
the $(3g-3)$ dimensional Teichm\"uller space ${\cal T}_g$,
is identifiable as the group ${\mbox{\rm HolAut}}(X)$.
See \cite{B1} and \cite{N}, and references therein, for details.
In our infinite limit situation we therefore note the
following interesting parallel with the classical
theory above. Indeed, we are asserting that the {\it isotropy
subgroup} at each $[Y] \in {\cal T}_{\infty}$, of the action of the
{\it universal commensurability modular group} ${MC}_{\infty}$, is
canonically identified with the new group $\mbox{\rm ComAut}(Y)$. This group,
as we said, is always infinite, and, as we saw in Section 3,
the action of ${MC}_{\infty}$ is always effective. We note that $G$
need not be normal in $\mbox{\rm ComAut}(\Delta/G) = {\mbox {\rm Comm}}(G)$
--- so that a quotient group (as in the case of $N(G)/G$)
cannot be defined in general.
\smallskip
\noindent
{\it Generically $\mbox{\rm ComAut}(\Delta/G) = {\mbox {\rm Comm}}(G) = G$~:}
By \cite{Gr1}, a co-compact torsion free Fuchsian group
representing a compact Riemann surface from the moduli space
${\cal M}_g$, $g\ge 3$, is actually {\it maximal} amongst discrete
subgroups of $\Mob$, provided we discard the groups that lie on
certain lower dimensional subvarieties in ${\cal M}_g$. See also the
interesting discussion of this point in \cite{Sun}.
For $g=2$, recall that every member of ${\cal M}_2$ is
hyperelliptic, so that the holomorphic automorphism group
contains at least a ${\Bbb Z}_{2}$. Generically again, the group
${\mbox{\rm HolAut}}(X)$ is just ${\Bbb Z}_{2}$ in this genus.
It follows that, on an open dense subset of points of
${\cal M}_2$ the commensurator of the corresponding
Fuchsian group, $G$, is simply a degree two extension of $G$.
\bigskip
\noindent
{\bf IV.4. Compact Riemann surfaces possessing large $\mbox{\rm ComAut}(X)$ ~:}
At the other end of the spectrum from the generic
considerations explained just above, we want to now
explore the possibility of creating interesting elements of
the commensurability automorphism group of certain
special Riemann surfaces. We first explain a method we
have devised of finding non-trivial elements in
$\mbox{\rm ComAut}(X) \backslash G$ for certain Riemann surfaces $X = \Delta /G$,
that admit large automorphism groups. The method is to utilize
certain finite quotients of $X$.
Let us point out first the evident, yet important, fact
that the two commensurability automorphism groups
$\mbox{\rm ComAut}(X)$ and $\mbox{\rm ComAut}(Y)$ are isomorphic
whenever there is a holomorphic unramified finite covering
map from $X$ onto $Y$, (or vice versa). That is evident since
the commensurator of a Fuchsian group is completely insensitive
to either extending or contracting the group, up to finite index.
Suppose therefore that we start with some compact Riemann
surface, $X~=~\Delta /G$, of genus $g \ge 2$, with
${\mbox{\rm HolAut}}(X)$
being its (finite) group of holomorphic automorphisms. Suppose
that ${\mbox{\rm HolAut}}(X)$ contains a subgroup, $P$, such that:
\noindent
(i) every non-identity member of $P$ acts fixed point
freely on $X$;
\noindent
(ii) the subgroup $P$ is {\it not} a normal subgroup
of ${\mbox{\rm HolAut}}(X)$.
By condition (ii) there exists $\alpha \in {\mbox{\rm HolAut}}(X)$ such that
the subgroup of ${\mbox{\rm HolAut}}(X)$ given by
$Q := {\alpha} P {\alpha}^{-1}$ is {\it not} equal to $P$, and
of course, every $q \in Q$ also acts fixed point freely on $X$
(since the members of $P$ acted in that fashion).
Consider then the two quotient Riemann surfaces:
$Y := X/P$ with the finite unbranched (normal) holomorphic
covering projection: $f_P: X \longrightarrow Y$, and, correspondingly,
$Z := X/Q$ with holomorphic covering projection:
$f_Q: X \longrightarrow Z$.
But, since $\alpha$ conjugates $P$ onto $Q$, it descends to
a biholomorphic isomorphism ${\alpha}_{\star} : Y \longrightarrow Z$.
We thus have a diagram of unramified {\it holomorphic }
finite coverings:
$$
\matrix{
{X}
&\longrightarrow {}
&{X}
\cr
\mapdown{f_P}
&
&\mapdown{f_Q}
\cr
{Y}
&{\stackrel {{\alpha}_{\star}} {\longrightarrow }}
&{Z}
\cr}
$$
If we put the conjugating automorphism $\alpha$ itself as the
map on the horizontal top arrow, then this diagram will
{\it commute} and it therefore produces no interesting
element of the commensurability automorphism groups (of any of the
Riemann surfaces in sight). But, and here is a crucial
point, if we use the {\underline{\it identity}} map as the
top horizontal arrow, then the diagram will {\it not}
commute --- and we have thus found, by circuiting through this
diagram, an interesting element of $\mbox{\rm ComAut}(Y) \cong \mbox{\rm ComAut}(Z)$
(and therefore also of $\mbox{\rm ComAut}(X)$). We note that the
description of this commensurability automorphism necessitates
the utilization of non-trivial holomorphic coverings
(i.e., of covering degree at least two).
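\smallskip
\noindent
{\it An illustrative instance~:}\, As a schematic example (assuming,
as can be arranged, a surface $X$ on which $S_3$ acts holomorphically
and fixed point freely, with $S_3 \subset {\mbox{\rm HolAut}}(X)$),
take $P = \langle (1\,2) \rangle$, which is not normal in $S_3$, and
$\alpha = (2\,3)$, so that
$$
Q \, = \, {\alpha} P {\alpha}^{-1} \, = \, \langle (1\,3) \rangle
\, \ne \, P \, .
$$
The two degree two quotients $Y = X/P$ and $Z = X/Q$, together with
the identity map on $X$ at the top of the diagram above, then
already produce a non-trivial element of $\mbox{\rm ComAut}(Y)$.
\smallskip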
Of course, we must find a supply of Riemann surfaces $X$ allowing
enough holomorphic automorphisms so as to be able to carry through
the above construction. But here is a well-known result~:~
Given {\it any} finite group $F$, there exists some
compact hyperbolic Riemann surface, $X$, with ${\mbox{\rm HolAut}}(X) \cong F$,
\cite{Gr2}. (The proof of this actually uses the theory of
Teichm\"uller spaces.)
We also refer the reader to the article \cite{K}, and
the references quoted therein, for some explicit examples of
compact Riemann surfaces $X$ which have large automorphism
groups, so that the above construction can be carried
through explicitly and easily in many instances.
\medskip
\noindent
{\bf Arithmetic Fuchsian groups~:}\,
A deeper way to pinpoint Fuchsian groups with large
commensurators is to invoke number theory.
In the following sections we will rely heavily on
Margulis' well-known result \cite{Mar}, namely that
the commensurator of $G$ actually becomes a {\it dense}
subset of $\Mob \cong PSL(2, {\Bbb R})$ precisely when the Fuchsian
group $G$ is an {\underline{\it arithmetic}} subgroup of the
ambient Lie group $PSL(2,{\Bbb R})$. That situation happens for only
countably many compact Riemann surfaces in each genus. Consequently
there are only countably many hyperbolic compact Riemann surfaces
with arithmetic Fuchsian groups, even when counting over all
the genera greater than one.
\noindent
{\it Definition of arithmeticity for Fuchsian lattices :}
It may be convenient to recall here the definition of
when a finite co-volume Fuchsian group $G$ in $PSL(2,{\Bbb R})$
is called {\underline{arithmetic}}.
The requirement is that, (after conjugating $G$ in $PSL(2,{\Bbb R})$,
if necessary), $G$ is commensurable with the group of matrices
whose entries are from the integers of some (arbitrary) number
field. Of course, the standard example is the subgroup
$PSL(2, {\Bbb Z})$. (This example is neither co-compact, nor
torsion-free.) Arithmetic Fuchsian groups will be at the very
center of our work in Section 5.
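\smallskip
\noindent
{\it Example~:}\, For the standard example $G = PSL(2, {\Bbb Z})$
the size of the commensurator can be seen by direct computation:
given $g \in PSL(2, {\Bbb Q})$, choose $N$ so that both $Ng$ and
$Ng^{-1}$ have integral entries; writing $h = I + N^{2}m$, with $m$
integral, for $h$ in the principal congruence subgroup
$\Gamma (N^{2})$, one computes
$$
ghg^{-1} \, = \, I \, + \, (Ng)\, m \,(Ng^{-1}) \, ,
$$
which is integral. Thus $g {\Gamma (N^{2})} g^{-1}$ is a finite index
subgroup of $gGg^{-1}$, and also (by comparing co-volumes) of finite
index in $G$; hence $gGg^{-1}$ and $G$ are commensurable. In fact
the commensurator of $PSL(2,{\Bbb Z})$ consists precisely of the
M\"obius transformations representable by matrices with rational
entries --- a dense subgroup of $PSL(2,{\Bbb R})$, in accordance
with Margulis' theorem quoted above.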
\bigskip
\noindent
{\bf IV.5. Subgroup of $\MCinX$ acting trivially on the base stratum~:}
The following result is obtained by considering an appropriate
intersection of the isotropy subgroups we described.
\smallskip
\noindent {\bf Proposition\, 4.4.}\,
{\it Let $X$ have genus at least three.
The Fuchsian group $G$, considered as a subgroup of
${\rm Vaut}^{+}(G) = MC_{\infty}(X)$ using
inner conjugation, coincides
with the subgroup of $MC_{\infty}(X)$ that fixes pointwise
the stratum ${{\cal T}}(X)$ of the inductive limit space
${{\cal T}}_{\infty}(X)$.}
\medskip
\noindent
{\it Proof :} A member of ${\rm Vaut}^{+}(G)$, considered as a
quasisymmetric homeomorphism
$f \in {\rm Vnorm}_{\rm q.s.}(G)$,
will act as the identity on the base stratum if and
only if, for every quasisymmetric homeomorphism
$\phi$ that is compatible with $G$, it is true that~:
$$
\phi \circ f \circ {\phi}^{-1} ~~\mbox{is in}~~ \Mob
\leqno(4.5)
$$
Condition (4.5) is checked by tracing through the
various canonical identifications that we have explained
amongst the models for the action of $\MCinX$ on ${\cal T}_{\infty}(X)$.
Consequently, for $f \in G$, it follows, from the
definition of $\phi$ being compatible with $G$, that
(4.5) is satisfied. This part is true even if $X$ has genus two.
Conversely, the set of transformations of $\MCinX$ holding
${\cal T}(X)$ pointwise fixed must, of course, lie in
$\mbox{\rm ComAut}(X)$. But note that we are free to choose {\it any}
co-compact torsion free Fuchsian group $G$, since the base
surface $X$ is at our disposal to fix. If we choose any $G$
so that $\mbox{Comm}(G) = G$, we are through. But with the genus
of $X$ greater than two, as we said, such a choice of $G$ is in fact
generic. In other words, an open dense set in the moduli space
${\cal M}_g$ ($g \ge 3$) corresponds to Fuchsian groups whose
commensurators are no larger than themselves. That completes the
proof.
$\hfill{\Box}$
\bigskip
\noindent
{\bf IV.6. Biholomorphic identification of solenoids~:}
Let $G \subset \mbox{Aut}(\Delta )$ be, as before, the torsion-free
co-compact Fuchsian group under study, and $X~=~\Delta /G$.
Let $H \, \subset \, G$ be any subgroup of finite index.
The inclusion homomorphism, $i$, of $H$ into $G$ induces an
injective homomorphism between the profinite completions:
$$
{\hat{i}} \, : \, {\widehat H} \, \longrightarrow \, {\widehat G}
\leqno(4.6)
$$
Now, the map
$$
Id \times {\hat{i}} \, : \,
{\Delta \times \widehat{H}} \,\longrightarrow\,{\Delta \times \widehat{G}}
\leqno(4.7)
$$
induces a natural map
$$
Q_H \,:\, {\Delta }\times_H {\widehat H} \, \longrightarrow \,
{\Delta } \times_G {\widehat G}
\leqno(4.8)
$$
between the above two copies of the universal solenoid.
(Recall the discussion in section II.1.) The action of $G$ on
$\Delta$ in (4.8) is the tautological action of $\mbox{Aut}(\Delta )$
on $\Delta$.
Note that both ${\Delta }\times_H {\widehat H}$ and
${\Delta } \times_G {\widehat G}$ carry complex structures. The
following lemma says that $Q_H$ is a biholomorphism with
respect to these complex structures.
\medskip
\noindent {\bf Lemma\, 4.9. }\,
{\it The map $Q_H$ is a base leaf preserving
biholomorphic homeomorphism.}
\medskip
\noindent {\it Proof :} The continuity of the map $Id \times
{\hat{i}}$ defined in (4.7) implies that the map $Q_H$ is
continuous.
Since the subset ${\Delta }{\times_H}{H}$ (respectively,
${\Delta }{\times_G}G$) is the base leaf in ${\Delta }{\times_H}{\widehat H}$
(respectively, in ${\Delta }{\times_G}{\widehat G}$), it is
immediate that the map $Q_H$ sends the base leaf into
the base leaf.
The chief issue is to show that $Q_H$ is a bijection. Let
$$
\overline{i} \, : \, H \backslash \widehat{H} \,
\longrightarrow \, G \backslash \widehat{G}
\leqno(4.10)
$$
be the map induced by ${\hat{i}}$ (see (4.6)) between
the coset spaces. Note that, from the remarks following
equation (2.10), it follows that these two coset spaces are
precisely the parameter spaces of leaves of the respective
associated solenoids. The lemma will be proved by showing that
$\overline{i}$ is a bijection.
Let us denote the image of the injection $\hat{i}$
also as $\widehat{H}$. Since $\widehat{H}$ is an open
subgroup of $\widehat{G}$, and $G$ is a dense subgroup of
$\widehat{G}$, we have $G \cdot \widehat{H} = \widehat{G}$.
Therefore, to prove that $\overline{i}$ is a bijection
it suffices to show that $\widehat{H} \cap G = H$.
But the projection $G \rightarrow G/H$ extends to a
continuous map from $\widehat{G}$ to $G/H$. Since the
inverse image of the identity coset contains $\widehat{H}$, we
deduce $\widehat{H} \cap G = H$.
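Spelled out, the bijectivity of $\overline{i}$ now follows: the map
sends the coset $Hx$ to $Gx$, for $x \in \widehat{H}$; it is
surjective because $G \cdot \widehat{H} = \widehat{G}$, and
injective because
$$
Gx \, = \, Gy ~~(x, y \in \widehat{H})
~~\Longrightarrow~~ x{y}^{-1} \, \in \, G \cap \widehat{H} \, = \, H
~~\Longrightarrow~~ Hx \, = \, Hy \, .
$$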
Consider next the natural projection of
${\Delta }\times_G {\widehat G}$, (respectively,
${\Delta }\times_H {\widehat H}$), onto $G \backslash \widehat{G}$,
(respectively, $H \backslash \widehat{H}$).
These projections fit into the following commutative diagram :
$$
\matrix{
{\Delta }\times_H {\widehat H}&{\stackrel{Q_H}{\longrightarrow }}&
{\Delta }\times_G {\widehat G}
\cr
\mapdown{} && \mapdown{}
\cr
H \backslash \widehat{H}& {\stackrel{\overline{i}}{\longrightarrow }} &
G \backslash \widehat{G}
\cr}
\leqno(4.11)
$$
But every fiber of the two vertical projections may be identified
with $\Delta $ (after choosing an element to represent the corresponding
coset). Since $\overline{i}$ is bijective, it follows immediately
that $Q_H$ is also a bijection.
We discuss now the holomorphy of $Q_H$. Consider the laminated
surfaces $\Delta \times \widehat{H}$ and $\Delta \times \widehat{G}$.
The complex structure of $\Delta $ induces a natural complex
structure on each of them which is actually constant in the
transverse direction. (Recall the definition of a complex
structure on a laminated surface given in Section II.2.) The map
$Id \times {\hat{i}}$ is evidently holomorphic from $\Delta \times
\widehat{H}$ to $\Delta \times
\widehat{G}$, with the above complex structures.
The complex structure on ${\Delta }\times_H {\widehat H}$
(respectively, ${\Delta }\times_G {\widehat G}$) is induced by
descending the complex structure on $\Delta \times \widehat{H}$
(respectively, $\Delta \times \widehat{G}$) using the complex
structure preserving action of $H$ (respectively, $G$). It is
easy to see that this descended complex structure coincides with
the complex structure on a solenoid constructed in
Section II.2 from a point of ${\cal T}_{\infty}(X)$.
Since the map $Q_H$ is obtained by descending $Id \times
{\hat{i}}$ using the action of $G$, the holomorphicity of the
map $Id \times {\hat{i}}$ immediately implies the holomorphicity
of $Q_H$. This completes the proof of the lemma.
$\hfill{\Box}$
\medskip
For an unramified pointed covering $p: Y \rightarrow X$,
if we set $H = {\pi}_1(Y) \subset {\pi}_1(X)$, then the
homeomorphism $Q_H$ obtained in (4.8) can be seen to
{\it coincide} with the inverse of the homeomorphism
$H_{\infty}(p) : {H}_{\infty}(X) \rightarrow {H}_{\infty}(Y)$ that was constructed in (3.1).
Now, the fact that ${H}_{\infty}(p)$ is a biholomorphic
homeomorphism, when $X$ and $Y$ are Riemann surfaces
with $p$ being a holomorphic covering space, follows
directly from the very definitions of ${H}_{\infty}(X)$ and ${H}_{\infty}(Y)$
as inverse limits over towers of Riemann surfaces.
However, for our work in Section 5 below, the above
construction and analysis of $Q_H$ will be very useful.
\bigskip
\noindent
{\bf IV.7. Holomorphic action of $\mbox{\rm ComAut}(X)$ on ${H}_{\infty}(X)$~:}
We would like to bring out now a point that is crucial to our work.
As we explained with the equations (3.1) and (3.2), the
commensurability mapping class group, ${MC}_{\infty}$, acts
by self-bijections of the appropriate type
on the (genus-independent) limit objects like ${H}_{\infty}$ and ${\cal T}_{\infty}$,
--- simply because both of those constructions proceed
over the tower ${\cal I}(X)$ of {\it topological} finite covers
of the base surface.
By the same token then, the isotropy group $\mbox{\rm ComAut}(X)$, which
arises for us from the circuits of {\it holomorphic} finite
covers over $X$, will operate as automorphisms (for
the same purely set-theoretic reasons)
on any limit object that is created over the directed tower,
say ${\cal I}_{hol}(X)$, comprising only the {\it holomorphic}
coverings of the given Riemann surface $X$. A first
application of this principle is seen in Proposition 4.12
below, which describes the base leaf preserving holomorphic
automorphisms of any complex analytic solenoid. Further
applications are manifest in the work of Section 6 below.
\medskip
Already in the topological category, every element of
$\mbox{Vaut}^{+}(G)$ acts on the solenoid
$H_{\infty}(X) = {\Delta } \times_G {\widehat G}$,
by a self homeomorphism that preserves the base leaf.
(See the note (i) following equations (3.1) and (3.2).)
\medskip
\noindent {\bf Proposition\, 4.12.}\,
{\it Let $X$ be any compact pointed hyperbolic Riemann surface.
The full group of holomorphic self-homeomorphisms that preserve
the base leaf of the corresponding complex analytic solenoid,
$H_{\infty}(X)$, coincides with ${\rm ComAut}(X)$.}
\medskip
\noindent {\it Proof~:}
Let $G$ be the Fuchsian group uniformizing $X$.
Take any M\"obius transformation $\gamma: \Delta \rightarrow \Delta $
that is a member of $\mbox{\rm ComAut}(\Delta/G) \equiv \mbox{\rm ComAut}(X)$.
We must first show how $\gamma$ induces the desired
kind of biholomorphic automorphism of ${H}_{\infty}(X)$.
Maintain the notations as in the proof of part (a) of
Theorem 4.2, and note the remarks following the proof of
that Theorem. There are two finite index subgroups, $H$ and $K$,
in $G$ such that the conjugation by $\gamma$
carries $H$ isomorphically onto $K$. As in (4.3) consider
the biholomorphism $\gamma_{\star}: {\Delta /H} \rightarrow {\Delta /K}$.
Applying the ${H}_{\infty}$ functor, defined in (3.1), to
$\gamma_{\star}$, we obtain a complex analytic isomorphism:
$$
{H}_{\infty}(\gamma_{\star}) \, : \, {H}_{\infty}(Z) \, \longrightarrow \, {H}_{\infty}(Y)
\leqno(4.13)
$$
Now, as explained in section II.1, we may always identify
the solenoids by their group theoretic models:
$$
{H}_{\infty}(X) \, \equiv \, \DinG,~~~
{H}_{\infty}(Y) \, \equiv \, \DinH ~~~{\mbox{and}}~~~
{H}_{\infty}(Z) \, \equiv \, \DinK
\leqno(4.14)
$$
Therefore apply the natural biholomorphic homeomorphisms:
$$
Q_H \, : \, \DinH \, \longrightarrow \, \DinG ~~~ \mbox{and}~~~ Q_K \, :
\, \DinK \, \longrightarrow \, \DinG \, ,
$$
as defined in (4.8), between the above solenoids. We thus obtain
the natural biholomorphic self homeomorphism :
$$
{H}_{\infty}(\gamma)~=~{Q_K} \circ {{H}_{\infty}(\gamma_{\star})}^{-1}
\circ {Q_H}^{-1}
\leqno(4.15)
$$
This map, ${H}_{\infty}(\gamma)$, is the holomorphic automorphism of
${H}_{\infty}(X)$ that corresponds to the chosen $\gamma \in \mbox{\rm ComAut}(X)$.
Since each factor in (4.15) is holomorphic and
carries base leaf to base leaf, it is true that ${H}_{\infty}(\gamma)$
is a holomorphic automorphism preserving the base leaf of
${H}_{\infty}(X)$. That is as desired. Note that this part of the
proposition holds for general torsion-free Fuchsian groups
$G$. Co-compactness plays no r\^ole in showing that
$\mbox{\rm ComAut}(X)$ acts biholomorphically on the inverse limit
complex solenoid.
To complete the proof of the proposition we must prove
that {\it every} base leaf preserving holomorphic automorphism
of ${H}_{\infty}(\Delta/G)$ comes from $\mbox{\rm ComAut}(\Delta/G)$, when $G$
uniformizes a {\it compact} Riemann surface.
The proof of this will be given at the end of this section. We
will need a couple of lemmata to lead up to that proof.
It is crucial at this stage to point out the {\it purely
topological version} of the above fact that $\mbox{\rm ComAut}(X)$ acts by
base leaf preserving automorphisms of ${H}_{\infty}(X)$.
\medskip
\noindent
{\it Self homeomorphisms of ${H}_{\infty}(X)$ preserving the base leaf~:}\,
Let $G$ be any discrete group, acting as a group of
self homeomorphisms, on a connected and simply connected space
${\widetilde X}$, the action being properly discontinuous and fixed point
free. Let the quotient space be denoted $X$, ($\pi_1(X) \cong G$),
and $u: {\widetilde X} \rightarrow X$ be the universal covering projection
that so transpires.
Assume that $G$ is residually finite.
Setting $\Ghat$ to be the profinite completion of $G$,
we can create just as in Section II.1, --- see (2.8)
and (2.9) --- the inverse limit solenoid built with
base $X$, namely $ {H}_{\infty}(X) \equiv {{\widetilde X} \times_G \widehat{G}}$.
The base leaf is the projection by $P_G$ (2.10) of the
slice ${{\widetilde X} \times {1}}$. The base leaf is thus canonically
identified with ${\widetilde X}$.
\medskip
\noindent {\bf Lemma 4.17 :} {\it
Let $\phi : {\widetilde X} \longrightarrow {\widetilde X}$ be any homeomorphism that
virtually normalizes $G$. Namely, there exist two finite
index subgroups $H$ and $K$ of $G$ such that
${\phi} H {\phi}^{-1} = K$. Then there is a unique self
homeomorphism
$$
\Phi \, \, : \,\, \XinG \, \, \longrightarrow \, \, \XinG
$$
which preserves the base leaf, and so that the restriction
of $\Phi$ to the base leaf coincides with the given
homeomorphism $\phi$.}
\medskip
\noindent {\it Proof of Lemma 4.17~:} The uniqueness of the
extension of $\phi$ from the
base leaf to the entire solenoid is automatic because the
residual finiteness of $G$ implies that the base leaf is a dense
subset of $\XinG$.
As regards the existence, we will exhibit a formula for $\Phi$.
Construct the solenoids $\XinH$ and $\XinK$ determined,
respectively, by the two subgroups $H$ and $K$ of
$G$. Define a map $\Sigma :
{{{\widetilde X}}{\times}{\widehat H}} \rightarrow {{{\widetilde X}}{\times}{\widehat K}}$
by
$$
(z,\langle \gamma^{F} \rangle) \, \longmapsto \,
(\phi(z), \langle \phi{\gamma^{F}}{\phi}^{-1} \rangle)
\leqno(4.18)
$$
where $F$ runs through all finite index normal
subgroups in $H$, and, of course, the $\gamma^{F}$
is a compatible string of cosets from the quotient
groups $H/F$.
It is easily checked that $\Sigma$ descends to a homeomorphism
$\Psi: \XinH \longrightarrow \XinK$ mapping base leaf to base leaf.
Therefore,
$$
\Phi = ~{Q_K} \circ \Psi \circ {Q_H}^{-1}
\leqno(4.19)
$$
(Compare with (4.15).) This defines the required self
homeomorphism of $\XinG$ with all the properties we want.
$\hfill{\Box}$
\medskip
The following {\it topological} lemma, which provides a
suitable {\underline{converse}} to Lemma 4.17, will be needed.
For this Lemma 4.20 we are strongly indebted to C. Odden's
thesis \cite{Od}. The logical organization of this paper,
is, however, actually {\it independent of} Lemma 4.20,
as well as of the remainder of the present Section 4.
\medskip
\noindent
{\bf Lemma 4.20 :} {\it In the topological set up
as above, suppose moreover that the group of deck
transformations $G$ (on ${\widetilde X}$) is a finitely generated
group. Let
$$
\Phi \, \, : \, \, \XinG \, \, \longrightarrow \, \, \XinG
$$
be any homeomorphism mapping the base leaf on itself.
Assume further that $\Phi$ is actually uniformly continuous
(in a natural uniform structure on the solenoid),
and that $X$ has positive injectivity radius. (To be
explained in a moment --- see below.)
Then the restriction of $\Phi$ to the base leaf,
say $\phi : {\widetilde X} \rightarrow {\widetilde X}$, {\underline{must}} virtually
normalize the group $G$.}
\medskip
\noindent
{\it Proof of 4.20 :}
As we said, up to simple modifications, Lemma 4.20 can be
found in \cite{Od}. For the purpose of being
reasonably self-contained we outline the ideas.
To explain the uniform structure on $\XinG$
it is easiest to work with certain metric structures on
the relevant spaces. Assume that ${\widetilde X}$ carries a
metric, say $\rho $, for which the action of $G$ is by isometries.
(We could be somewhat more general, because only the uniform
structure is what is actually needed. For instance, quasi-isometric
action of $G$ would suffice.) Now $X$ has on it an induced metric
(we still call that $\rho $).
\noindent
{\it The metric topology on $\Ghat$~:}~ $G$ is any finitely
generated, residually finite group. There is a nice way to
express the profinite topology via a metric. Define $A_n$ to be
the intersection of all subgroups of index $n$ or less in $G$.
As $G$ is assumed finitely generated, each $A_n$ must have finite
index in $G$. Note that $\cap{A_n} = \{1\}$, by the residual
finiteness of $G$. Odden uses this telescoping collection of
subgroups to define a metric on $G$: Let
$$
{\mbox{\rm ord}}(g)~=~{\rm max}
\{ n \, : \, g~~ \mbox{is an element of}~~A_n \} \, ,
$$
(and ${\rm ord}(1) = \infty$). Then set
$$
d(g,h) \, = \, \exp ({-{\rm ord}(g^{-1}h)})
\leqno(4.21)
$$
One can verify that $d$ is a metric, and that the completion
of $G$ with respect to $d$ is canonically $\Ghat$.
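For instance, in the toy abelian case $G = {\Bbb Z}$ (which is, of
course, not a Fuchsian group of our kind, but illustrates the
construction), the subgroups of index at most $n$ are exactly the
$k{\Bbb Z}$ with $1 \le k \le n$, so that
$$
A_n \, = \, {\rm lcm}(1,2,\ldots ,n)\,{\Bbb Z} \, ,
~~~\mbox{and hence}~~~
{\rm ord}(m) \, = \, {\rm max}\{n \, : \, {\rm lcm}(1,\ldots ,n)
~\mbox{divides}~ m\} \, .
$$
The completion of ${\Bbb Z}$ in the resulting metric $d$ is the
familiar profinite completion $\widehat{\Bbb Z}$.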
Combining the $G$-invariant metric $\rho $ on ${\widetilde X}$,
and the profinite completion metric $d$
above on $\Ghat$, one can get the obvious metric, say
$\sigma $, on $\XinG$ that induces the inverse-limit topology.
The uniform continuity of $\Phi$ is assumed to be with
respect to this metric $\sigma $.
\noindent
{\it Intersecting an $\epsilon$-ball of the $\sigma $-metric with
the base leaf~:}~
What does a small ball in $\XinG$ look like?
If $\epsilon$ is smaller than the injectivity radius
of the quotient $X$, then an $\epsilon$ ball has the structure
of the product of a small ball in ${\widetilde X}$ with the profinite
completion of some member of the descending chain of subgroups
of $G$ described above.
In effect, there exists $A=A_n$, (for some $n \ge 1$),
such that the $\epsilon$ ball in $\XinG$ is an $\epsilon$ ball
in ${\widetilde X}$ times $\hat{A}$.
The intersection of the base leaf and such an epsilon
ball (of the $\XinG$ metric) is an $A$-invariant
collection of disjoint balls on the base leaf ${\widetilde X}$.
This method of choosing subgroups $A=A_n$ in $G$,
associated to a given size of metric-ball in the solenoid,
is going to provide one with the desired finite index
subgroups $H$ and $K$ in $G$ that need to be exhibited
as getting mapped to each other by $\phi$-conjugation.
By the assumed uniform continuity, for each positive
$\epsilon$ there exists $\delta > 0$ such that
$\sigma (x,y) < \delta$ implies $\sigma (\Phi(x),\Phi(y)) < \epsilon$.
We take $\epsilon$ itself to be smaller than half the
injectivity radius of $X$. Find the corresponding
$\delta$ (and cut it down to be smaller than $\epsilon$).
Associated to the $\epsilon$-ball and the $\delta$-ball
in the $\sigma $-metric we get two corresponding finite index
subgroups $K$ and $H$, say, within $G$, as explained.
Now, it follows rather straightforwardly that the action
of $\Phi$ on the base leaf will conjugate $H$ into a finite
index subgroup of $K$. That is what was wanted.
$\hfill{\Box}$
\medskip
Now we are in a position to complete the proof of
Proposition 4.12.
\medskip
\noindent {\it Completion of Proof of Proposition 4.12 :}
In our situation, $X = \Delta/G$ is compact, therefore
so is $\DinG$. The Poincar\'e metric on $\Delta $ plays the
r\^ole of $\rho $. Any homeomorphism of a compact metric
space is automatically uniformly continuous.
Let $\Phi$ be any holomorphic automorphism of $\DinG$
that preserves the base leaf. It is holomorphic on the
base leaf, which is canonically the unit disc, $\Delta $.
Thus $\Phi \vert_{\Delta }$ is necessarily a M\"obius
transformation, and, by Lemma 4.20, it must virtually
normalize $G$. Thus, by Theorem 4.2(a), we deduce that
$\Phi$ is an element of $\mbox{\rm ComAut}(\Delta/G)$.
$\hfill{\Box}$
\medskip
\noindent
In the next section we will further investigate the action of
$\mbox{ComAut}(X)$ on $H_{\infty}(X)$.
\bigskip
\section{Ergodic action if and only if arithmetic Fuchsian}
Let $X$ be a compact connected Riemann surface. Let $G$ be a
co-compact Fuchsian group acting freely on the universal cover
$\Delta$, with $X = \Delta/G$.
\bigskip
\noindent
{\bf V.1. The measure on ${H}_{\infty}(X)$~:}
Consider the product measure on $\Delta\times\widehat{G}$,
where $\Delta$ is equipped with the volume form
given by the Poincar\'e metric, and $\widehat{G}$ is equipped
with the Haar measure. For any open set $U \subset
\Delta\times\widehat{G}$ over which the quotient map
$$
q \,: \,{\Delta\times\widehat{G}} \longrightarrow
{\Delta\times_G \widehat{G}}
$$
is injective, define the measure of $q(U)$ to be the product
measure of $U$. The action of $G$ on $\Delta$ preserves the
volume form on $\Delta$ induced by the Poincar\'e metric. The
left action of $G$ on its profinite completion $\widehat{G}$
preserves the Haar measure on $\widehat{G}$. Therefore, the
measure on $q(U)$ does not depend on the choice of the open set
$U$. It follows that there is a unique Borel measure
on $\Delta\times_G \widehat{G}$ whose restriction to any such
open subset $q(U)$ coincides with the measure of $U$ in
$\Delta\times\widehat{G}$.
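To spell out the invariance used here: for a product set
$U \, = \, U_1\times U_2$, with $U_1 \subset \Delta$ and $U_2
\subset \widehat{G}$, and any $g \in G$, we have
$$
m(g\cdot U) \, = \, {\rm vol}(gU_1)\cdot {\mu}_{\widehat{G}}(gU_2)
\, = \, {\rm vol}(U_1)\cdot {\mu}_{\widehat{G}}(U_2) \, = \, m(U)\, ,
$$
where $m$ denotes the product measure, ${\rm vol}$ the Poincar\'e
volume and ${\mu}_{\widehat{G}}$ the Haar measure. Consequently, two
open sets $U$ and $U'$ with $q(U) = q(U')$ receive the same measure.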
Let ${\mu}_{\infty}$ denote the Borel measure on $H_{\infty}(X) =
\Delta\times_G \widehat{G}$ just constructed.
The action of the group $\mbox{ComAut}(X)$ on $H_{\infty}(X)$
preserves the measure ${\mu}_{\infty}$. To see this we first
observe that for a finite index subgroup $H\subset G$, the
natural inclusion
$\hat{i} : \widehat{H} \longrightarrow \widehat{G}$ is
compatible with the Haar measures on $\widehat{H}$ and $\widehat{G}$
respectively, in the sense that the image of $\hat{i}$ is an open
subgroup of $\widehat{G}$, and for any measurable subset $U \subset
\widehat{H}$, the measure of $U$ is $\# (G/H)$-times the measure
of $\hat{i}(U)$, where $\# (G/H)$ is
the cardinality of $G/H$. From this it follows immediately that
the homeomorphism $f_H$ in Lemma 4.9 is actually measure
preserving. Now, from the definition of the action of
$\mbox{ComAut}(X)$ on $H_{\infty}(X)$ it follows immediately
that it preserves the measure ${\mu}_{\infty}$.
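In a formula: normalizing both Haar measures to total mass one, the
image $\hat{i}(\widehat{H})$ is an open subgroup of index $\# (G/H)$
in $\widehat{G}$, so that
$$
{\mu}_{\widehat{G}}\bigl(\hat{i}(U)\bigr) \, = \,
\frac{{\mu}_{\widehat{H}}(U)}{\# (G/H)}
$$
for every measurable subset $U \subset \widehat{H}$.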
We will describe another construction of a measure on
$H_{\infty}(X)$.
Consider the Poincar\'e measure, ${\mu}_X$, on $X$. For any
unramified covering $p : Y \longrightarrow X$ of degree $d$,
consider the measure ${\mu}_Y/d$ on $Y$, where ${\mu}_Y$ is the
Poincar\'e measure on $Y$. For any measurable set $U \subset X$,
its measure $\mu_X(U)$ clearly coincides with
${\mu}_Y\left(p^{-1}(U)\right)/d$. This compatibility condition
of measures ensures that the inverse limit $H_{\infty}(X)$ is
equipped with a measure. This is a particular application of
Kolmogorov's construction of a measure on an inverse limit.
Let ${\nu}_{\infty}$ denote this measure on $H_{\infty}(X)$.
The action of the group $\mbox{ComAut}(X)$ on $H_{\infty}(X)$
preserves ${\nu}_{\infty}$. Indeed, the homeomorphism $f_H$
in Lemma 4.9 is compatible with this measure in the sense
described earlier.
The measure ${\nu}_{\infty}$
on $H_{\infty}(X)$ is evidently absolutely
continuous with respect to the measure ${\mu}_{\infty}$
constructed earlier. Indeed, this is an immediate consequence of
the fact that the Haar measure on the profinite completion
$\widehat{G}$ can be obtained using Kolmogorov's inverse limit
construction on the inverse limit of finite quotients $G/H$,
where $H$ is a normal subgroup of $G$ of finite index, and the
measure on $G/H$ being the Haar probability measure, i.e., the
counting measure divided by the cardinality.
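The compatibility required by Kolmogorov's construction is immediate
here: if $H' \subset H$ are finite index normal subgroups of $G$ and
$\pi : G/H' \longrightarrow G/H$ is the natural projection, then each
fiber of $\pi$ has $\# (H/H')$ elements, so
$$
\pi_{*}\,{\mu}_{G/H'}(\{gH\}) \, = \, \frac{\# (H/H')}{\# (G/H')}
\, = \, \frac{1}{\# (G/H)} \, = \, {\mu}_{G/H}(\{gH\})\, ,
$$
where ${\mu}_{G/H}$ denotes the Haar probability measure on the
finite group $G/H$.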
Actually the two measures on $H_{\infty}(X)$ constructed above
are constant multiples of each other. But for our purposes it is
sufficient to know that they are absolutely continuous with
respect to each other. A discussion on this measure on ${H}_{\infty}(X)$
can also be found in Section 9 of \cite{NS}.
\bigskip
\noindent
{\bf V.2. The ergodicity theorem~:}
Using the work of Margulis that we quoted in section IV.4,
we prove in the following theorem that the question of the
arithmeticity of the Fuchsian group for $X$ is equivalent
to the question of whether or not ${\mbox{\rm ComAut}}(X)$
acts ergodically on the finite measure space
$(H_{\infty}(X), {\mu}_{\infty})$.
[Note that since the two measures constructed
on $H_{\infty}(X)$, namely ${\mu}_{\infty}$ and
${\nu}_{\infty}$, are absolutely continuous with
respect to each other, the action is ergodic with respect to
${\mu}_{\infty}$ if and only if it is ergodic with respect to
${\nu}_{\infty}$.]
\medskip
\noindent {\bf Theorem\, 5.1.}\, {\it The
Fuchsian group $G \subset {\rm Aut}({\Delta })$
is arithmetic if and only if the action of
${\rm ComAut}(X)$ on $H_{\infty}(X)$ is ergodic.
In fact, arithmeticity is also equivalent to each orbit
being dense.}
\medskip
\noindent
{\it Proof.}\, If $G$ is not arithmetic, then by a result of
Margulis, $\mbox{ComAut}(X)$ is a finite extension of $G$
\cite[Proposition 6.2.3]{Zi}. Conversely, if $G$ is
arithmetic, $\mbox{ComAut}(X)$ is dense in
$\mbox{Aut}({\Delta })$ \cite[Section 6.2]{Zi}. Since the base
leaf is dense in $H_{\infty}(X)$, the orbits of the action of
$\mbox{ComAut}(X)$ on $H_{\infty}(X)$ are dense if and only if
the group $G$ is arithmetic.
If $G$ is not arithmetic, then take two nonempty disjoint
open subsets, say $U_1$ and $U_2$, of
the compact Riemann surface
$\Delta/\mbox{ComAut}(X)$. The inverse images of both
$U_1$ and $U_2$ for the natural projection of
$H_{\infty}(X) = \Delta\times_G \widehat{G}$ onto
$\Delta/\mbox{ComAut}(X)$ have positive measure. Hence the
action of $\mbox{ComAut}(X)$ on $H_{\infty}(X)$ cannot be
ergodic in this case.
Now consider any locally integrable
function $f$ on $H_{\infty}(X)$
which is invariant under the action of $\mbox{ComAut}(X)$.
Let $\overline{f}$ be the function on $\Delta\times\widehat{G}$
obtained by pulling back $f$ using the natural
projection $q$ of
$\Delta\times\widehat{G}$ onto $\Delta\times_G\widehat{G}$.
Since the measure ${\mu}_{\infty}$ on
$\Delta\times_G\widehat{G}$ is constructed from the projection
$q$ by using the $G$-invariance property
of the product measure on $\Delta\times\widehat{G}$,
the function $\overline{f}$ is
locally integrable with respect to the product measure.
The function $\overline{f}$ is invariant (in the sense of
equality almost everywhere) firstly, under the action of
the deck transformations (action of $G$) of the covering $q$
and, secondly, under the action of $\mbox{ComAut}(X)$. These two
invariance conditions combine together to imply that
for each $g\in G$, the equality
$$
\overline{f} (x, hg) \, = \, \overline{f} (x, h)
$$
is valid for almost every $x\in \Delta$, $h \in \widehat{G}$.
Since $G$ is dense in $\widehat{G}$, by the continuity of the
associated action of $\widehat{G}$ on the space of all locally
integrable functions over $\Delta\times\widehat{G}$, we get
that $\overline{f} (x, h)$ is constant almost everywhere
in $h$; say $\overline{f} (x, h) \, = \, \hat{f}(x)$, where
$\hat{f}$ is a locally integrable function defined almost
everywhere on $X$.
Since $\overline{f}$ is invariant under the action of
$\mbox{ComAut}(X)$ on $\Delta\times\widehat{G}$, the function
$\hat{f}$ must be invariant under the action of
$\mbox{ComAut}(X)$ on $\Delta$.
Assume now that $G$ is arithmetic. Therefore, $\mbox{ComAut}(X)$
is dense in $\mbox{Aut}({\Delta})$. Using the continuity of the
action of $\mbox{Aut}({\Delta})$ on the space of all
locally integrable functions on $\Delta$, and the transitivity
of the tautological action of $\mbox{Aut}({\Delta})$, we
conclude, by an argument as above, that $\hat{f}$ must be
constant almost everywhere. This completes the proof of the
theorem.
$\hfill{\Box}$
\bigskip
\section
{Lift of the commensurability modular action on vector bundles}
\bigskip
\noindent
{\bf VI.1. Construction of natural inductive limit
vector bundles over ${\cal T}_{\infty}$~:}
Let $Y$ be any compact connected oriented smooth surface
of negative Euler characteristic. There is a {\it universal
family of Riemann surfaces}~:
$$
f :~{\cal Y} \longrightarrow {\cal T} (Y)
\leqno{(6.1)}
$$
over the Teichm\"uller space ${\cal T} (Y)$. In other words, $f$
is a Kodaira-Spencer family, namely a holomorphic proper submersion
with connected fibers, and for any point $t \in {\cal T}(Y)$,
the fiber $f^{-1}(t)$ is biholomorphic to the Riemann surface
represented by the point $t$.
Let us briefly recall the construction. (Consult, for different
points of view, \cite{Groth}, \cite{B2}, and \cite[Chapter 5]{N}.)
Let $\mbox{Conf}(Y)$ denote the space of all smooth complex structures
on $Y$, compatible with the orientation. There is a
tautological complex structure on $\mbox{Conf}(Y)\times Y$.
The group $\mbox{Diff}_0(Y)$,
consisting of all diffeomorphisms of $Y$ homotopic to the
identity map, acts naturally on both $\mbox{Conf}(Y)$ and $Y$.
The diagonal action preserves the complex structure on
$\mbox{Conf}(Y)\times Y$. Consequently, the
complex structure on $\mbox{Conf}(Y)\times Y$ descends to a complex
structure over the quotient space $\left(\mbox{Conf}(Y)\times
Y\right)/\mbox{Diff}_0(Y)$. This quotient complex manifold is
the universal Riemann surface ${\cal Y}$. The projection $f$ in
(6.1) is obtained from the natural projection of $\mbox{Conf}(Y)$ onto
$\mbox{Conf}(Y)/\mbox{Diff}_0(Y) = {\cal T}(Y)$.
The relative holomorphic cotangent bundle on $\cal Y$ will
be denoted by $K_f$. In other words, $K_f$ fits in the following
exact sequence of vector bundles over $\cal Y$~:
$$
0 \, \longrightarrow \, f^*{\Omega}^1_{{\cal T}(Y)} \,
\longrightarrow \, {\Omega}^1_{\cal Y} \, \longrightarrow \, K_f
\, \longrightarrow \, 0
$$
For any integer $i \geq 0$, let
$$
{{{\cal V}}^i}(Y) \hspace{.1in} := \hspace{.1in} f_* K^{\otimes i}_f
$$
be the holomorphic vector bundle on ${\cal T} (Y)$ given by the direct
image of the $i$-th tensor power, $K^{\otimes i}_f$, of $K_f$.
The fiber of the vector bundle ${{{\cal V}}^i}(Y)$ over a point of
${\cal T}(Y)$ represented by a Riemann surface ${Y'}$ is
$H^0({Y'},\, K^{\otimes i}_{Y'})$.
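We note in passing that the rank of this bundle is computed by the
Riemann-Roch theorem: if $Y'$ has genus $g \geq 2$, then
$$
{\rm rank}\, {{{\cal V}}^i}(Y) \, = \, \dim H^0({Y'},\,
K^{\otimes i}_{Y'}) \, = \,
\cases{g & if $i = 1$\, ,\cr
(2i-1)(g-1) & if $i \geq 2$\, .\cr}
$$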
Given any holomorphic covering $p: Y' \longrightarrow Z'$, the
homomorphism
$$
(dp)^*_{i}~:~H^0(Z',\, K^{\otimes i}_{Z'}) \longrightarrow
H^0(Y',\, K^{\otimes i}_{Y'})
\leqno{(6.2)}
$$
obtained using the co-differential of $p$, ~
$(dp)^* : p^* K_{Z'} \longrightarrow K_{Y'}$, is injective.
Given any unramified (topological) covering $p : Y \rightarrow Z$
between compact connected oriented surfaces, the ``fiberwise''
construction in (6.2) gives us a bundle homomorphism
$$
\hat{p}^* \, : \, \, {{{\cal V}}^i}(Z) \longrightarrow \,
{\cal T}(p)^* {{{\cal V}}^i}(Y)
\leqno{(6.3)}
$$
of holomorphic vector bundles over ${\cal T}(Z)$. Here ${{\cal T}}(p)$
is the basic embedding of Teichm\"uller spaces as in (2.2).
In other words, there is a natural morphism of holomorphic vector
bundles, ${{{\cal V}}^i}(Z) \longrightarrow {{{\cal V}}^i}(Y)$
commuting with the embedding ${\cal T}(p)$ of base spaces.
If $q : W \longrightarrow Y$ is another unramified covering,
with $W$ compact and connected, then consider the
homomorphisms $\hat{q}^*$ and $\widehat{p \circ q}^*$ of
holomorphic vector bundles over ${\cal T}(Y)$ and ${\cal T}(Z)$
respectively, as in (6.3). The following is a commutative
diagram of homomorphisms of the relevant vector bundles over
${\cal T}(Z)$ :
$$
\matrix{
{{{\cal V}}^i}(Z) & = & {{{\cal V}}^i}(Z)
\cr
\mapdown{\hat{p}^*} && \mapdown{\widehat{p\circ q}^*}
\cr
{\cal T}(p)^* {{{\cal V}}^i}(Y) &
{\stackrel{{\cal T}(p)^*\hat{q}^*}{\longrightarrow }} &
{\cal T}(p\circ q)^* {{{\cal V}}^i}(W)
\cr} \leqno{(6.4)}
$$
It follows that the holomorphic vector bundles
${{{\cal V}}^i}(X_{\alpha})$ over ${\cal T}(X_{\alpha})$,
(where $\alpha : X_{\alpha} \rightarrow X$ is any member of ${\cal I}(X)$),
constitute an {\it inductive system} using the homomorphisms
$\hat{\alpha}^*$ constructed in (6.3). That these connecting
homomorphisms do fit into an inductive system is ensured by
the commutativity of the diagram in (6.4).
Therefore, we have a {\it holomorphic vector bundle over} ${\cal T}_{\infty}(X)$
by passing to the inductive limit in this inductive system:
$$
{{\cal V}}^i_{\infty}(X) := \limind { {{{\cal V}}^i}(X_{\alpha})}
\leqno(6.5)
$$
We may denote this holomorphic vector bundle by ${{\cal V}}^i_{\infty}$,
suppressing in the notation the base surface $X$.
That is because, as in the work of previous sections, this
construction over $X$ produces a bundle over ${\cal T}_{\infty}(X)$ which is
{\it holomorphically isomorphic} to the corresponding construction
${{\cal V}}^i_{\infty}(Y)$ over ${\cal T}_{\infty}(Y)$, whenever any unramified pointed
topological covering $p : Y \rightarrow X$ (member of ${\cal I}(X)$),
is specified. This natural bundle isomorphism determined by $p$
$$
{{\cal V}}^{i}_{\infty}(p) : {{\cal V}}^i_{\infty}(Y) \longrightarrow {{\cal V}}^i_{\infty}(X)
\leqno(6.6)
$$
is constructed exactly as in the discussion of ${\cal T}_{\infty}(p)$ (see
equation (3.2)). It covers the biholomorphic identification
${\cal T}_{\infty}(p) : {\cal T}_{\infty}(Y) \longrightarrow {\cal T}_{\infty}(X)$ between the two base spaces.
The fiber of ${{\cal V}}^i_{\infty}(X)$ over any $[Z] \in {\cal T}_{\infty}(X)$ is
simply the direct limit of spaces of $i$-forms :
$ \limind H^0(Z_{\beta},\, K^{\otimes i}_{Z_{\beta}})$,
the index $\beta$ running through all finite unramified holomorphic
coverings $Z_{\beta}$ of the Riemann surface $Z$, with
each $Z_{\beta}$ a connected Riemann surface.
The above direct limit vector space can be interpreted as the
space of those holomorphic $i$-forms on the complex analytic
solenoid, ${H}_{\infty}(Z)$, which are complex analytic on the
leaves and locally constant in the transverse (Cantor)
direction.
\bigskip
\noindent
{\bf VI.2. Lifting the action of ${MC}_{\infty}$ and allied matters ~:}
We will now investigate the compatibility of the vector bundle
${{\cal V}}^i_{\infty}(X)$ with the action of $MC_{\infty}(X)$ on
${{\cal T}}_{\infty}(X)$.
\medskip
\noindent
{\bf Theorem\, 6.7.}\, {\it
(a) The commensurability modular action of $MC_{\infty}(X)$
on ${{\cal T}}_{\infty}(X)$ lifts to ${{\cal V}}^i_{\infty}$, for every
$i \geq 0$.
\smallskip
\noindent
(b) Take any $i \geq 1$ and any point $[Z] \in {\cal T}_{\infty}(X)$. The
isotropy group at $[Z]$, namely $\mbox{\rm ComAut}(Z)$, for the
action of $MC_{\infty}(X)$ on ${\cal T}_{\infty}(X)$, acts effectively
on the fiber of ${{\cal V}}^i_{\infty}$ over $[Z]$.}
\medskip
\noindent {\it Proof:}\, Part (a): That the action of $MC_{\infty}(X)$
on ${{\cal T}}_{\infty}(X)$ lifts to ${{\cal V}}^i_{\infty}$ is rather
straightforward. Suppose that $g \in \MCinX$ is represented by
the two-arrow diagram arising from a pair of pointed unramified
topological coverings :
$$
p_j \,:\,(Y,y) \, \longrightarrow \, (X,x) \, ,
$$
where $j=1,2$, as in (3.3).
Then, by (6.6) above, we have two induced isomorphisms
between the inductive limit bundles of $i$-forms over
${\cal T}_{\infty}(X)$ and ${\cal T}_{\infty}(Y)$, respectively.
Clearly then, the commensurability modular transformation $g$
on ${\cal T}_{\infty}(X)$ lifts to the holomorphic bundle automorphism :
$$
{{{\cal V}}^{i}_{\infty}(p_2)} \circ {{{\cal V}}^{i}_{\infty}(p_1)}^{-1}
\leqno(6.8)
$$
Compare this with the definition of $A_{(p_1,p_2)}$ provided in
equation (3.4).
It is also worthwhile to explicitly describe the lifted
action of $g$. For this purpose, take any complex structure
$J$ on $X$. The action of $g$ on
${{\cal T}}_{\infty}(X)$ sends the point representing the Riemann
surface $(Y, p^*_1 J)$ to $(Y, p^*_2 J)$. Let $\overline{X}$,
$\overline{Y}_1$ and $\overline{Y}_2$ denote the Riemann
surfaces defined by the complex structures $J$, $p^*_1J$ and
$p^*_2 J$ respectively.
Let the action of $g$ on ${{\cal V}}^i_{\infty}$ be such that it
sends the subspace $(dp_1)^*_iH^0(\overline{X},\, K^{\otimes
i}_{\overline{X}})$ of $H^0(\overline{Y}_1,\, K^{\otimes i}_{
\overline{Y}_1})$ to the subspace $(dp_2)^*_i H^0(\overline{X},
\,K^{\otimes i}_{\overline{X}})$ of $H^0(\overline{Y}_2,\,
K^{\otimes i}_{\overline{Y}_2})$; the homomorphism $(dp_j)^*_i$
is defined in (6.2). The resulting isomorphism
$$
(dp_1)^*_iH^0(\overline{X},\, K^{\otimes
i}_{\overline{X}}) ~ \longrightarrow ~
(dp_2)^*_i H^0(\overline{X},
\,K^{\otimes i}_{\overline{X}})
$$
is the identity automorphism of $H^0(\overline{X},\, K^{\otimes
i}_{\overline{X}})$, after invoking the natural identification
of $(dp_j)^*_iH^0(\overline{X},\, K^{\otimes
i}_{\overline{X}})$, $j=1,2$, with $H^0(\overline{X},\, K^{\otimes
i}_{\overline{X}})$.
Take any covering $\alpha \, : \, X_{\alpha} \, \longrightarrow
\, X$, representing a point $\alpha$ in ${\cal I} (X)$. Let
$$
q_j \, :\, Y_{\alpha} \, \longrightarrow \, X_{\alpha}\, ,
$$
where $j= 1,2$, be the pull back of the covering $p_j$ by
$\alpha$. Choose a complex structure $J_{\alpha}$ on
$X_{\alpha}$. The Riemann surfaces $(X_{\alpha},J_{\alpha})$,
will be denoted by $\overline{X}_{\alpha}$. The Riemann surface
$(Y_{\alpha}, q^*_jJ_{\alpha})$, ($j=1,2$), will be denoted
by $\overline{Y}_{j,\alpha}$.
The action of $g$ on ${{\cal T}}_{\infty}(X)$ sends the point
of ${{\cal T}}_{\infty}(X)$ represented by $\overline{Y}_{1,\alpha}$
to the point represented by $\overline{Y}_{2,\alpha}$.
Let us denote the fiber of the vector bundle ${{\cal V}}^i_{\infty}$
over the point of ${{\cal T}}_{\infty}(X)$ represented by
$\overline{X}_{\alpha}$, namely
${{\cal V}}_{\infty}^i{\vert}_{[{\overline{X}_{\alpha}}]}$, by
$({{\cal V}}^i_{\infty})_{\overline{X}_{\alpha}}$.
Define the action of $g$ on ${{\cal V}}^i_{\infty}$ to be such that
it sends the subspace
$$
(dq_1)^*_i H^0(X_{\alpha},\, K^{\otimes
i}_{X_{\alpha}}) \,\, \subset \,\,
H^0(\overline{Y}_{1,\alpha},\, K^{\otimes
i}_{\overline{Y}_{1,\alpha}}) \,\, \subset \,\,
({{\cal V}}^i_{\infty})_{\overline{X}_{\alpha}}
$$
to the subspace
$$
(dq_2)^*_i H^0(X_{\alpha},\, K^{\otimes
i}_{X_{\alpha}}) \,\, \subset \,\,
H^0(\overline{Y}_{2,\alpha},\, K^{\otimes
i}_{\overline{Y}_{2,\alpha}}) \,\, \subset \,\,
({{\cal V}}^i_{\infty})_{\overline{X}_{\alpha}}
$$
The resulting isomorphism between $(dq_1)^*_i H^0(X_{\alpha},\,
K^{\otimes i}_{X_{\alpha}})$ and $(dq_2)^*_i H^0(X_{\alpha},\,
K^{\otimes i}_{X_{\alpha}})$ is the identity automorphism of
$H^0(X_{\alpha},\, K^{\otimes i}_{X_{\alpha}})$, after invoking
the natural identification of
$(dq_j)^*_i H^0(X_{\alpha},\, K^{\otimes i}_{X_{\alpha}})$, $j=1,2$,
with $H^0(X_{\alpha},\, K^{\otimes i}_{X_{\alpha}})$.
The commutativity of diagram (6.4) ensures that the above
conditions on the action of $g$ are compatible. Therefore, we
have demonstrated the natural lift of the action of the element
$g \in MC_{\infty}(X)$ to the vector bundle ${{\cal V}}^i_{\infty}(X)$
over ${\cal T}_{\infty}(X)$. That completes part (a).
\noindent
Proof for part (b) :
To prove the effectivity of the action of the isotropy subgroup,
we first consider the case $i=1$.
Let $Z = \Delta /G$, where $G$ is a torsion free co-compact
Fuchsian group. From Theorem 4.2 we know that the isotropy
group at $[Z]$, $\mbox{\rm ComAut}(Z)$, is exactly the commensurator
$\mbox{Comm}(G)$.
Let $N(H) \subset \mbox{Comm}(G)$ denote the normalizer
of $H$ in $\Mob$, where $H$ is any finite index
subgroup of $G$. (Recall section IV.3.)
We will start by proving that these subgroups $N(H)$
within $\mbox{Comm}(G)$ act faithfully on the fiber of
${{\cal V}}_{\infty}^1$ over the point $[Z] \in {\cal T}_{\infty}(X)$.
First take a non-identity element $g \in G$. Since $G$ is
a residually finite group, there exists a finite index normal
subgroup $H$ of $G$ such that $g$ does not belong to $H$.
Let $p : Y \longrightarrow Z$ be the unramified Galois covering
defined by the above subgroup $H$. The Galois group $G/H$ acts
effectively by deck transformations on $Y$. Thus the projection
of $g$ in the quotient group $G/H$ produces a nontrivial
holomorphic automorphism on the Riemann surface $Y$.
Now, it is well known that the action of any nontrivial
automorphism of a Riemann surface $Y$ of genus at least one
on the space of holomorphic Abelian differentials,
$H^0(Y,\, {\Omega}^1_{Y})$, can never be trivial; see, for
instance, \cite{L}.
Therefore, the Galois action of $g$ on $Y$ gives a nontrivial
action on $H^0(Y,\, {\Omega}^1_{Y})$ --- implying that the action
of $g$ on the fiber
${{\cal V}}_{\infty}^1{\vert}_{[Z]}$
is certainly nontrivial.
But every element of $N(H) \setminus G$ represents a
non-trivial holomorphic automorphism of the appropriate
covering surface of $Z$. Therefore,
by the same token, we see that every non-identity element of
every normalizer subgroup, $N(H)$, acts non-trivially on the
fiber, as desired.
For the remaining case, we need to consider
the ``non mapping class like'' elements $g \in \mbox{Comm}(G)$
(note section IV.3). Therefore we assume that $g$ is not a member of
any of the normalizers $N(H)$. As we know from Section 4,
(vide the end of section IV.2), each element $g$ is
represented by a pair of holomorphic coverings from some
connected Riemann surface $Y$ onto the given $Z$ :
$$
p_j \, : \, Y \, \longrightarrow \, Z \, ,
$$
$j =1,2$. In order that $g$ not arise as a member of some
normalizer (since we have already disposed of that), one can assume
that $p_1\circ h \neq p_2$, for any automorphism $h \in
\mbox{HolAut}(Y)$.
Choose a point $z \in Z$, and also two points $y_j \in
p^{-1}_j(z)$, $j=1,2$, satisfying the following condition~:
\begin{enumerate}
\item{} $y_1 \neq y_2$, if $Y$ is not hyperelliptic;
\item{} $y_1 \neq y_2$ and also $y_1\neq \sigma (y_2)$, if $Y$ is
hyperelliptic and $\sigma$ is the hyperelliptic involution thereon.
\end{enumerate}
The existence of such $z$, $y_1$ and $y_2$ is ensured by the
assumption spelled out regarding $p_1$ and $p_2$.
\noindent
{\it First case: Assume $Y$ is not hyperelliptic}
Therefore, the holomorphic cotangent bundle $K_Y$ over $Y$ is
very ample. In particular, there is a $1$-form $\omega
\in H^0(Y,\, {\Omega}^1_{Y})$ such that $\omega (y_1) =0$ and
$\omega (y_2) \neq 0$. Therefore, the action of $g$ on
${{\cal V}}_{\infty}^1{\vert}_{[Z]}$ does not take the line generated by
$\omega$ to itself. Effectivity is established in this
case.
\noindent
{\it Remaining case: Assume $Y$ is hyperelliptic}
If $Y$ is hyperelliptic then $K_Y$ is no longer very ample. But
$K_Y$ is still base point free, and the image of the corresponding map
$Y \rightarrow {\fam\msbfam\tenmsb P}H^0(Y,\, {\Omega}^1_{Y})^*$ is ${\fam\msbfam\tenmsb
C}{\fam\msbfam\tenmsb P}^1$, with the map itself being identifiable as the
projection of $Y$ onto its own quotient by the
hyperelliptic involution. Therefore, the existence of a
$1$-form $\omega$, with $\omega (y_1) =0$ and $\omega (y_2) \neq
0$, is again assured. This completes the proof of effectivity of
$\mbox{\rm ComAut}(Z)$ on the fiber for the case of the bundle of $1$-forms.
If $i\geq 2$, then the proof is identical; in fact, it
is actually simpler. As is well known, the line bundle
$K^{\otimes i}_Y$ is very ample whenever $i \geq 2$ for every
Riemann surface $Y$ with genus at least two, with the single
exception of the bicanonical bundle ($i = 2$) of a genus two
surface, where the bicanonical map factors through the
hyperelliptic involution and the argument given above for the
hyperelliptic case applies verbatim. Apart from this exception,
the hyperelliptic case need not be considered separately any more.
This completes the proof of the theorem.
$\hfill{\Box}$
\medskip
\noindent
{\it Remarks :} The above proof shows that the action of the
isotropy group for $[Z]$ on the projective space
${\fam\msbfam\tenmsb P}({{\cal V}}_{\infty}^1{\vert}_{[Z]})$ is also effective.
In the case of the bundle of $i$-forms with $i \geq 2$
we could utilize Poincar\'e theta series, for the relevant
Fuchsian group and its subgroups, to also provide another
proof of the effectivity of the action of the commensurability
automorphism group on the fiber.
\bigskip
\noindent
{\bf VI.3. Petersson hermitian structure on the bundles
over ${\cal T}_{\infty}$ ~:} Let $Y$ be a connected Riemann surface of genus
at least two.
The Poincar\'e metric, ${\omega}$, on $Y$ induces a Hermitian
metric, $h$, on any $K^{\otimes i}_Y$. For any two sections $s$
and $t$ of $H^0(Y,\, K^{\otimes i}_{Y})$, the pairing
$$
\int_Y \langle s, t\rangle_h \cdot\overline{\omega}\, ,
$$
where $\overline{\omega}$ is the K\"ahler form for $\omega$,
defines a Hermitian inner product on the vector space $H^0(Y,\,
K^{\otimes i}_{Y})$. This inner product is usually called the
$L^2$-{\it inner product}; it coincides with the classical
Petersson pairing of holomorphic $i$-forms on the Riemann
surface.
For any covering $\alpha :X_{\alpha} \longrightarrow X$,
representing a point of ${\cal I} (X)$, consider the inner product on
$H^0(X_{\alpha},\, K^{\otimes i}_{X_{\alpha}})$ defined by
$$
\langle s, t \rangle \,\, := \,\, \frac{\int_{X_{\alpha}} \langle s,
t\rangle_h \cdot\overline{\omega}}{d} \, ,\leqno{(6.9)}
$$
where $d$ is the degree of the covering $\alpha$. This
normalized $L^2$-inner product has the property that if $p :
X_{\beta} \longrightarrow X_{\alpha}$ is a covering map, where
$\beta = \alpha\circ p \in {\cal I} (X)$, then the natural inclusion
homomorphism
$$
(dp)^*_i \, :\, H^0(X_{\alpha},\, K^{\otimes i}_{X_{\alpha}}) \,
\longrightarrow\, H^0(X_{\beta},\, K^{\otimes i}_{X_{\beta}}) \, ,
$$
defined in (6.2), actually preserves the normalized $L^2$-inner
product. Therefore, the limit vector bundle ${{\cal V}}^i_{\infty}$ is
equipped with a {\it natural Hermitian metric}. The restriction of
this metric to any subspace of the type $H^0(X_{\alpha},\,
K^{\otimes i}_{X_{\alpha}})$
of a fiber coincides with the normalized
$L^2$-inner product.
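Let us verify this last assertion. Suppose $p : X_{\beta}
\longrightarrow X_{\alpha}$ has degree $e$, so that $d_{\beta} =
e\, d_{\alpha}$, where $d_{\alpha}$ and $d_{\beta}$ denote the
degrees of $\alpha$ and $\beta$ over $X$. Since $p$ is a local
isometry for the Poincar\'e metrics, for any two sections $s$ and
$t$ of $K^{\otimes i}_{X_{\alpha}}$ we have
$$
\frac{1}{d_{\beta}}\int_{X_{\beta}} \langle (dp)^*_i s,\,
(dp)^*_i t\rangle_h \cdot\overline{\omega}
\, = \, \frac{e}{d_{\beta}}\int_{X_{\alpha}} \langle s,\,
t\rangle_h \cdot\overline{\omega}
\, = \, \frac{1}{d_{\alpha}}\int_{X_{\alpha}} \langle s,\,
t\rangle_h \cdot\overline{\omega}\, ,
$$
which is the claimed preservation of the normalized $L^2$-inner
product.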
In section VI.2 above, we saw that the commensurability
modular action on the base ${\cal T}_{\infty}(X)$ lifts to holomorphic
vector bundle automorphisms on ${{\cal V}}^i_{\infty}$.
The simple observation that $(dp)^*_i$ preserves the
normalized $L^2$-inner product, immediately implies that
{\it the lift of each $\gamma \in MC_{\infty}(X)$ preserves the
natural Hermitian structure of} ${{\cal V}}^i_{\infty}(X)$. In
fact, each of the bundle isomorphisms ${{\cal V}}^i_{\infty}(p)$
(of the type in (6.6)) is an isometric isomorphism, and
the assertion follows.
\bigskip
\noindent
{\bf VI.4. Projective limit construction of an $i$-forms
vector bundle~:}
There is a ``dual'' construction to the one exhibited
in section VI.1. Let $p : Y \longrightarrow X$ be an
unramified covering map of degree $d$ between compact
connected Riemann surfaces. The inverse of the differential of the
map $p$, namely
$$
(dp)^{-1} \, : \, K_Y \, \longrightarrow \,
p^*K_X \, ,
$$
induces the isomorphism $((dp)^{-1})^{\otimes i} : K^{\otimes
i}_Y \, \longrightarrow \, p^*K^{\otimes i}_X$. Now taking the
direct image of $((dp)^{-1})^{\otimes i}$ we have
$$
(((dp)^{-1})^{\otimes i})_* \, : \, {p_*}K^{\otimes i}_Y \,
\longrightarrow \, p_*p^* K^{\otimes i}_X \, = \, K^{\otimes
i}_X\otimes p_*{\cal O}_Y
$$
The last equality is the well-known projection formula.
There is an obvious homomorphism
$p_*{\cal O}_Y \longrightarrow {\cal O}_X$. Using this, we
obtain
$$
\overline{p} \, : \, {p_*}K^{\otimes i}_Y \, \longrightarrow \,
K^{\otimes i}_X \, .
$$
Now, since $H^0(Y,\, K^{\otimes i}_Y) = H^0(X,\, p_*K^{\otimes
i}_Y)$, the above homomorphism $\overline{p}$ induces a
homomorphism
$$
\overline{p}_i \, : \, H^0(Y,\, K^{\otimes i}_Y) \,
\longrightarrow \, H^0(X,\, K^{\otimes i}_X)
$$
It is easy to see that the above homomorphism $\overline{p}_i$
is the dual of the natural homomorphism
$$
p^* \, : \, H^1(X,\, T^{\otimes (i-1)}_X) \, \longrightarrow \,
H^1(Y,\, T^{\otimes (i-1)}_Y)
$$
after invoking the Serre duality for both $K^{\otimes i}_X$ and
$K^{\otimes i}_Y$.
For any $\omega \in H^0(X,\, K^{\otimes i}_X)$, it is evident
that
$$
\overline{p}_ip^*\omega ~ = ~ d\omega \, .
\leqno{(6.10)}
$$
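Concretely, (6.10) follows by working over a point $x \in X$ with
the $d$ preimages $y_1, \ldots , y_d \in p^{-1}(x)$: the
homomorphism $p_*{\cal O}_Y \longrightarrow {\cal O}_X$ used above
is the fiberwise sum (trace), so $\overline{p}_i$ adds up the
contributions of the branches, giving
$$
(\overline{p}_i\, p^*\omega)(x) \, = \, \sum_{k=1}^{d}
((dp)^{-1})^{\otimes i}\bigl((p^*\omega)(y_k)\bigr)
\, = \, \sum_{k=1}^{d} \omega (x) \, = \, d\, \omega (x)\, .
$$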
Let us denote $\overline{p}_i/d$ by $p_i$. If
$q : Z \longrightarrow Y$ is another such covering, then evidently
$(p\circ q)_i = p_i\circ q_i$.
This compatibility condition implies that to any Riemann surface
$X$ we can associate the {\it projective limit} of spaces of
$i$-forms of covering Riemann surfaces :
$$
\limproj H^0(X_{\alpha},\, K^{\otimes i}_{X_{\alpha}}) \, ,
$$
with ${\alpha}$ running through the directed set ${\cal I}(X)$.
The construction of this projective limit, as the fiber over $[X]$,
gives us a new holomorphic vector bundle
$$
{{\cal V}}^{{\infty},i} \, \longrightarrow \, {{\cal T}}_{\infty}(X)
$$
Furthermore, the identity $p_ip^*\omega = \omega$ (deduced from
(6.10)), implies that there is a natural injective homomorphism
of vector bundles
$$
f_i \, : \, {{\cal V}}_{\infty}^i \, \longrightarrow \,
{{\cal V}}^{{\infty},i}
\leqno{(6.11)}
$$
In other words, for set-theoretic reasons, the
inductive limit $i$-forms bundle injects into the newly
constructed projective limit $i$-forms bundle.
It is easy to see that the action of $MC_{\infty}(X)$ on
${{\cal T}}_{\infty}(X)$ lifts to ${{\cal V}}^{{\infty},i}$. Also, the two
constructions are compatible, as one may check, for essentially
set-theoretic reasons. In particular, the inclusion $f_i$ in (6.11)
commutes with the actions of $MC_{\infty}(X)$ on these bundles.
We put down these observations in the form of the following
Proposition.
\medskip
\noindent
{\bf Proposition\, 6.12.}\, {\it
For any $i\geq 0$, the action of $MC_{\infty}(X)$ on
${{\cal T}}_{\infty}(X)$ lifts to ${{\cal V}}^{{\infty},i}$. The
inclusion map $f_i$ commutes with the actions of $MC_{\infty}(X)$.
The isotropy subgroup at any point of ${{\cal T}}_{\infty}(X)$, for the
action of $MC_{\infty}(X)$, acts faithfully on the
corresponding fiber of ${{\cal V}}^{{\infty},i}$.}
\medskip
The last assertion regarding effectivity is clearly a consequence
of Theorem 6.7 part (b).
\noindent
{\it Remark :}~ The problem of extension of these bundles
to the completion of ${\cal T}_{\infty}(X)$, namely to bundles over $\THinX$,
and the question of computing the curvature forms of these bundles
as forms on the base space ${\cal T}_{\infty}(X)$, are topics to which
we hope to return at a later date.
\section{Introduction}
Recently, two separate groups have interpreted their observations of
type Ia supernovae as evidence for acceleration in the cosmic
expansion, presumably caused by a nonzero cosmological constant (Riess
et al. 1998, henceforth `R98', and Perlmutter et al. 1998). Using
the supernovae as standard candles (after correction for the relation
between luminosity and light-curve shape), both groups find a
progressive dimming of supernovae at high redshift relative to the
predictions of a flat, matter-dominated model, or even an open model
with zero cosmological constant. There are at least two explanations
for this dimming which are unrelated to the cosmological parameters:
evolutionary effects and dust. The two observational groups have
expended considerable effort attempting to account for such
systematics, but this letter argues that the obscuration of distant
supernovae by a cosmological distribution of dust is {\em not} ruled
out, by developing a simple intergalactic dust model which is
reasonable in its creation and dispersement, has reasonable physical
properties, and could cause the observed effect without violating
constraints such as those noted in R98.
The standard way to estimate dust extinction, used by the supernova
groups, is to apply a relation between reddening and extinction
derived from Galactic data (see, e.g., Cardelli, Clayton \& Mathis
1989). This relation is a result of the frequency dependence of the
opacity curve of the absorbing dust, which can be measured well for
Galactic dust at wavelengths $\lambda \la 100\ \mu$m, and fit by
theoretical models of graphite and silicate dust in the form of small
spheres with a distribution of radii (Draine \& Lee 1984; Draine \&
Shapiro 1984). Because the opacity of this `standard' dust falls off
rather quickly with increasing wavelength for $\lambda \ga 0.1\
\mu$m, attenuated light is significantly reddened.
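In the standard notation (see Cardelli, Clayton \& Mathis 1989),
this relation is encoded in the ratio of total to selective
extinction,
$$
R_V \, \equiv \, \frac{A_V}{E(B-V)} \, = \, \frac{A_V}{A_B - A_V}
\, \approx \, 3.1 \, ,
$$
where the approximate value is the Galactic average; a measured
color excess $E(B-V)$ thus yields an estimate of the total
extinction $A_V$. By contrast, a `grey' grain population with $A_B
\approx A_V$ would give $E(B-V) \approx 0$ at any $A_V$, and would
therefore be invisible to this diagnostic.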
The weakness of this method when applied to a new situation is the
necessary assumption that the same extinction-reddening relation
holds, even though the dust may be of a different species; standard
techniques would not correct for dust which causes extinction without
significant reddening. For the same reason, an effectively uniform
intergalactic distribution of non-reddening dust could remain
undetected by reddening studies such as those of Wright \& Malkan
(1988) and Cheng, Gaskell \& Koratkar (1991).
\section{A Specific Model with Carbon Needles}
To make this idea concrete, let us begin with the theory of dust
formation. Very small dust grains are believed to form in a
vapor-solid transition, via a process of crystal formation; these
grains may then coagulate into larger ones. In small grain
formation, nuclei form first, and then they grow as surface nucleation occurs
on their faces. Surface nucleation creates steps which grow along the
face, adding a new layer and increasing the crystal size. But
environments of low supersaturation (as may commonly occur
astrophysically) strongly inhibit surface nucleation. In such cases,
grains may still grow by the mechanism of a `screw dislocation' (see
e.g. Frank 1949, Sears 1955), or by growth of a rolled-up platelet (Bacon
1960), forming one-dimensional `needles' like those commonly found in
laboratory experiments (e.g. Nabarro \& Jackson 1958.)
Moreover, needles can grow rapidly where `spherical' dust cannot; thus
in some situations needles can out-compete spherical dust for
available metals. This reasoning led Donn \& Sears (1963) to predict
that interstellar dust could be dominated by needle-type dust. These
predictions were partially borne out by the discovery of enstatite
needles in captured interstellar dust (Bradley, Brownlee \& Veblen
1983). The needles were not the dominant component, but their
discovery does demonstrate that vapor-solid transitions occur
astrophysically, and that astrophysical needles can form by the same
mechanism as in laboratories (they contained screw dislocations.)
Conducting needles are physically interesting because they act as
antennas, absorbing electromagnetic radiation more effectively than
standard dust. Several authors have proposed models in which such
grains thermalize the microwave background (see, e.g., Wickramasinghe
et al. 1975; Wright 1982; Hawkins \& Wright 1988).
I have calculated the extinction cross section for needles at $\lambda
\le 10\ \mu$m using the `discrete dipole approximation' (see, e.g.,
Draine 1988) as implemented in the publicly available DDSCAT package
and using the accompanying graphite dielectric constants
$\epsilon_\perp$ and $\epsilon_\parallel$ (see Laor \& Draine 1993).
Following Wickramasinghe \& Wallis (1996), I assume that the graphite
$c$-axis is perpendicular to the needle so that $\epsilon_\perp$
applies for an electric field parallel to the needle axis (see also
Bacon 1960). Figure \ref{fig-needopac} shows curves for various
needle diameters $d$, $0.02 \le d[\mu m] \le 0.2$, averaged over
incident radiation directions, and averaged over an aspect ratio
distribution $n(L/d) \propto (L/d)^{-1}$ with $4 \le L/d \le 32$.
This mass-equipartition distribution was chosen to represent a
shattering spectrum from longer needles; laboratory needles grow up to
$L/d \sim 1000.$ The maximal $L/d$ is somewhat arbitrary but largely
irrelevant since the short-wavelength behavior depends only weakly
on $L/d \ga 8.$ The results roughly agree with the Mie calculations
of Wickramasinghe and Wallis (1996) which use somewhat different
optical data.
Major uncertainties in the opacity model include the uncertainties in
optical data (see Draine \& Lee 1984 for discussion), an unknown
impurity content in the needles, and the unknown needle
diameter;\footnote{ The needle diameter is particularly important, but
it is difficult to justify any {\em a priori} estimate of its value.
For the sake of the argument at hand I take $d=0.1\ \mu$m. Note that a
distribution of diameters would likely simply be dominated by the
low-diameter cutoff.} the model given is intended to be suggestive
rather than complete. The key point is that needles generically have
an opacity which is higher ($\kappa_V \simeq 10^5\ {\rm cm^2\
g^{-1}}$) and less wavelength-dependent than that of standard dust.
Several works (Chiao \& Wickramasinghe 1972, Ferrara et al. 1990 and
1991, Barsella et al. 1989) have studied dust ejection from galaxies,
all concluding that most spiral galaxies could eject (spherical)
graphite dust. These theoretical studies are supported by
observations of dust well above the gas scale height in galaxies
(Ferrara et al. 1991) and of vertical dust lanes and fingers
protruding from many spiral galaxies (Sofue, Wakamatsu \& Malin 1994).
The high opacity of needles extends over a large wavelength range,
hence they are strongly affected by radiation pressure and are even
more likely to be ejected than spherical grains. In a magnetized region,
charged grains spiral about magnetic field lines. Magnetized gas may
escape from the galaxy via the Parker (1970) instability, or grains
may escape by diffusing across the lines during the fraction of the
time that they are uncharged; see, e.g., Barsella et al. (1989). Once
free of the galaxy\footnote{A grain leaving the disk would be
sputtered by gas in the hot galactic halo, but the resulting mass
loss is $ <20\%$ for the 0.01 $\mu$m silicate spheres and even less
for faster moving or larger grains (Ferrara et al. 1991), so the
effect on the needles would be very small.}, needles would rapidly
accelerate and could reach distances of 1 Mpc or more.
Following Hoyle \& Wickramasinghe (1988), we estimate the time
required for needle ejection and dispersal as follows. A grain
with length $L$, cross section $d^2$, specific gravity $\rho_m$ and
opacity $\kappa$ in an anisotropic radiation field will attain a
terminal velocity $v$ given by equating the radiative acceleration
$\kappa F/c$ to the deceleration due to viscous drag of $a \approx
(4v^2 / \pi d)(\rho_{gas} / \rho_{m})\left[1 + {\cal O}(u^2/v^2)\right].$ Here, $F$
is the net radiative flux, and $u=(3kT_{gas}/m_H)^{1/2}$ and $\rho_{gas}$
are the gas thermal speed and density. Values applicable for needles
in our Galaxy are $\kappa \simeq 10^5 {\rm\ cm^2\ g^{-1}}$, $\rho_m
\simeq 2\ {\rm g\ cm^{-3}}$, $\rho_g \simeq 10^{-24}\ {\rm g\
cm^{-3}}$, $T_{gas} \sim 100 $K, and $F/c \sim 10^{-13}\ {\rm ergs\
cm^{-3}}$. These give a terminal velocity of $v \simeq 4 \times
10^5 (d/0.1 \mu{\rm m})^{1/2}\ {\rm cm\ s^{-1}}$ and a timescale to
escape a 100 pc gas layer of $\sim 2.5 \times 10^7(d/0.1 \mu{\rm
m})^{-1/2}\ {\rm yr.}$ Outside the gas layer, the needle is subject
only to radiation pressure. For a rough estimate we assume that the
constant acceleration $\kappa F/c \sim 10^{-8}\ {\rm cm\ s^{-2}}$ acts
for the time required for the needle to travel a distance equal to the
galactic diameter. This takes $\sim (2Rc/\kappa F)^{1/2} \simeq
8 \times 10^7\ {\rm yr}$ for a galaxy of size $R \sim 3 \times
10^{22}\ {\rm cm}$ and leaves the needle with velocity $v \sim 2.5
\times 10^7\ {\rm cm\ s^{-1}}$. Such a velocity will carry the needle
1 Mpc (twice the mean galaxy separation at $z=1$) in $\sim 4$ Gyr.
For comparison, the time between $z=3$ (when dust might be forming)
and $z=0.5$ (when the supernovae are observed) is 5.5 Gyr for $\Omega
= 1$ and 7.3 Gyr for $\Omega = 0.2.$ These estimates suggest that
radiation pressure should be able to distribute the dust fairly
uniformly\footnote{Using the model of Hatano et al. (1997), R98
argues that dust confined to galaxies would cause too large a
dispersion in supernova fluxes. For the dust to create less
dispersion than observed, it must merely be `uniform' enough that
a typical line-of-sight (of length $\sim cH_0^{-1}$) passes through
many clumps of size $\sim D$ and separation $\sim \lambda$, i.e.
that $1/\sqrt{cH_0^{-1}D^2/\lambda^3} \la 1$ (the observed
dispersion in R98 is 0.21 mag, similar to the necessary extinction).
This is easily satisfied for needles traveling $\ga 50$ kpc from
their host galaxies.} before $z \simeq 0.5$.
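The arithmetic behind these estimates is easy to check. The sketch below uses only the fiducial values quoted above (opacity, gas and grain densities, radiation flux, and the galaxy scales); the final rounding is mine.

```python
import math

# Order-of-magnitude check of the needle ejection timescales, using
# the fiducial values quoted in the text.
kappa    = 1e5       # opacity, cm^2 g^-1
F_over_c = 1e-13     # radiative momentum flux F/c, erg cm^-3
rho_m    = 2.0       # grain specific gravity, g cm^-3
rho_gas  = 1e-24     # gas density, g cm^-3
d        = 1e-5      # needle diameter 0.1 micron, in cm

YR = 3.156e7         # seconds per year
PC = 3.086e18        # cm per parsec

# Terminal velocity: kappa*F/c = (4 v^2 / (pi d)) * (rho_gas / rho_m)
v_term = math.sqrt(kappa * F_over_c * math.pi * d * rho_m / (4 * rho_gas))
t_escape = 100 * PC / v_term / YR          # time to cross a 100 pc gas layer

# Outside the gas layer: constant acceleration a = kappa*F/c, applied
# while the needle crosses one galactic diameter R ~ 3e22 cm.
a = kappa * F_over_c                       # cm s^-2
R = 3e22                                   # cm
t_accel = math.sqrt(2 * R / a) / YR
v_final = a * math.sqrt(2 * R / a)         # cm s^-1
t_1Mpc = 1e6 * PC / v_final / YR / 1e9     # Gyr to travel 1 Mpc

print(f"v_term   ~ {v_term:.1e} cm/s")     # ~4e5
print(f"t_escape ~ {t_escape:.1e} yr")     # ~2.5e7
print(f"t_accel  ~ {t_accel:.1e} yr")      # ~8e7
print(f"v_final  ~ {v_final:.1e} cm/s")    # ~2.5e7
print(f"t_1Mpc   ~ {t_1Mpc:.1f} Gyr")      # ~4
```

All five numbers reproduce the estimates in the text to within rounding.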
Dust is known to exist in large quantities (masses $\sim 0.1\%$ of the
total galaxy mass are often inferred) in bright, high-redshift
galaxies (see, e.g., Hughes 1996). These galaxies would preferentially
eject dust with higher opacity at long wavelengths (e.g. needles, or
fractal/fluffy grains); such grains tend to have a shallower falloff
in opacity with wavelength, hence redden less than the observed
Galactic dust. This selection effect and the estimated dust
escape timescales suggest that if substantial intergalactic dust
exists, it should be effectively uniform, and redden less than
standard dust.
We can compute the optical depth to a given redshift due to uniform
dust of constant comoving density using
$$
\tau_\lambda(z) = \left({c\over H_0}\right)\rho_0\Omega_{needle}\int_0^zdz'\
{(1+z') \kappa[\lambda/(1+z')] \over (1+\Omega z')^{1/2}}.
$$
Figure \ref{fig-needopac} shows the integrated optical depth to
various redshifts for needles with $d = 0.1\ \mu m$, for $\Omega =
0.2$, $h=0.65$ and $\Omega_{needle} = 10^{-5}.$ Using this
information, we can calculate the dust mass necessary to account for
the observations if $\Omega_\Lambda = 0.$
The difference between an $\Omega =0.2,\
\Omega_\Lambda=0.0$ model and a model with $\Omega =0.24,\
\Omega_\Lambda = 0.76$ (the favored fit of R98) is about 0.2
magnitudes at $z=0.7.$ In the $d=0.1\ \mu$m needle model this requires
$\Omega_{needle} = 1.6 \times 10^{-5}.$ Matching an $\Omega = 1,\
\Omega_\Lambda=0$ universe requires about 0.5 magnitudes of extinction
at $z=0.7$ and $\Omega_{needle} = 4.5\times 10^{-5}.$
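As a numerical check, the optical-depth integral above can be evaluated directly. The sketch below makes one simplifying assumption of mine: a perfectly grey opacity $\kappa = 10^5\ {\rm cm^2\ g^{-1}}$, standing in for the flat part of the needle opacity curve, so that the argument $\lambda/(1+z')$ drops out.

```python
import math

# Numerical evaluation of tau(z) for uniform comoving needle dust,
# assuming (my simplification) a grey opacity kappa = 1e5 cm^2/g.
h, Omega, Omega_needle = 0.65, 0.2, 1.6e-5
kappa = 1e5                                   # cm^2 g^-1
H0 = h * 100 * 1e5 / 3.086e24                 # s^-1
c = 3e10                                      # cm s^-1
G = 6.674e-8                                  # cgs
rho_needle = Omega_needle * 3 * H0**2 / (8 * math.pi * G)  # g cm^-3

def tau(z, n=1000):
    """Midpoint rule for (c/H0) rho int dz' (1+z') kappa / (1+Omega z')^0.5."""
    dz = z / n
    integral = sum((1 + (i + 0.5) * dz) / math.sqrt(1 + Omega * (i + 0.5) * dz)
                   for i in range(n)) * dz
    return (c / H0) * rho_needle * kappa * integral

print(f"A(z=0.7) ~ {1.086 * tau(0.7):.2f} mag")  # ~0.18 mag for 1.6e-5
```

With $\Omega_{needle} = 1.6\times 10^{-5}$ this gives $\simeq 0.18$ mag of extinction at $z=0.7$, consistent with the $\simeq 0.2$ mag required in the text.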
A reddening correction based on standard dust properties, like that
used in R98, would not eliminate this effect. R98 effectively
estimates extinction using rest-wavelength (after K-correction) $B-V$
color and the Galactic reddening law. For standard dust this would be
reasonable even for a cosmological dust distribution, since the
reddening would still occur across the redshift-corrected $B$ and
$V$ frames. But Figure \ref{fig-needopac} shows that this does not
hold for needles: the $d=0.1\ \mu$m needle distribution only gives
$(B-V) = 0.06 A_V$ up to $z=0.7$. The supernova group method would
$K$-correct the $B$ and $V$ magnitudes, then convert this (rest frame)
$B-V$ into an extinction based on the Galactic $(B-V) = 0.32 A_V.$ It
would therefore not be surprising for the systematic extinction to go
undetected.
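The size of the effect follows directly from the two reddening laws quoted above; the sketch below spells out the arithmetic (the unit normalization of $A_V$ is arbitrary).

```python
# How much needle extinction survives a standard reddening correction?
# Needle dust:   B-V = 0.06 * A_V  (Fig. 1, out to z = 0.7)
# Galactic law:  B-V = 0.32 * A_V  (used to infer A_V from the color)
A_V_true = 1.0                    # arbitrary normalization
color = 0.06 * A_V_true          # observed reddening from needles
A_V_inferred = color / 0.32      # extinction attributed via the Galactic law
undetected = A_V_true - A_V_inferred
print(f"fraction of needle extinction missed: {undetected:.2f}")  # 0.81
```

Roughly $80\%$ of the needle extinction would thus be misattributed to the cosmology rather than to dust.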
Studies of redshift-dependent reddening (e.g. Wright 1981, Wright \&
Malkan 1987, Cheng et al. 1991) in far-UV (rest frame) quasar spectra
put limits on a uniform dust component, but these are most sensitive
to high redshifts, at which the needles would not yet have formed and
uniformly dispersed. In addition, it is clear from Figure
\ref{fig-needopac} that for thick needles the flatness of the opacity
curve would lead to a very small shift in the quasar spectral index up
to $z=1.$
Another available constraint, the metallicity of Ly-$\alpha$ clouds,
is probably not relevant: because the dust formation and ejection (due
to radiation pressure) from galaxies is independent of the enrichment
mechanism of the clouds (presumably population III enrichment or gas
`blowout' from galaxies), there is no clear connection between the
mass of metal gas in the clouds and the mass of needle dust in the
IGM\footnote{Of course, if the needles were also assumed to form in the
population III objects their density should then relate to the
Ly-$\alpha$ metallicity.}.
To estimate the fraction of carbon locked in the needle dust, we would
like to know $\Omega_Z$ at $0.5 \la z \la 3.$ The current value of
$\Omega_Z$ should be bounded above by the metal fraction of large
clusters, which are the best available approximation to a closed
system that is a fair sample of the universe. Clusters tend to have
$\sim 1/2$ solar metallicity (e.g. Mushotzky et al. 1996), and $\sim
10\%$ of their mass in gas (e.g. Bludman 1997 for a summary), giving
$\Omega_Z \la 10^{-3}.$ This compares reasonably well with an upper
bound on universal star density estimated from limits on extragalactic
starlight (from Peebles 1993) of $\Omega_* < 0.04$: if we extrapolate
the Galactic metallicity of $\sim Z_\odot$, we find $\Omega_Z \sim
Z_{\odot}\Omega_{*} \la 4 \times 10^{-4}.$ Assuming a current
$\Omega_Z \sim 4\times 10^{-4}$ and that metals are created constantly
(conservative, given the higher star formation rate at high-$z$) in
time from $z=6$ we find (for both $\Omega = 0.2$ and $\Omega = 1$)
that $\Omega_Z(z=3) \sim 4 \times 10^{-5}$ and $\Omega_Z(z=0.5) \sim 2
\times 10^{-4}$, which agrees with recent estimates by Renzini (1998).
Such crude approximations are certainly vulnerable, but they suggest
that the needed amount of needle mass is reasonable.
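The constant-production estimate is simple enough to spell out. The sketch below treats only the $\Omega = 1$ case, where $t/t_0 = (1+z)^{-3/2}$; the text finds similar numbers for $\Omega = 0.2$.

```python
# Metals assumed produced at a constant rate in cosmic time from z = 6,
# reaching Omega_Z ~ 4e-4 today (Omega = 1 cosmology only).
def t_frac(z):
    return (1 + z) ** -1.5            # t(z)/t0 for Omega = 1

OZ_NOW, Z_START = 4e-4, 6.0

def omega_Z(z):
    return OZ_NOW * (t_frac(z) - t_frac(Z_START)) / (1 - t_frac(Z_START))

print(f"Omega_Z(z=3)   ~ {omega_Z(3):.1e}")    # ~3e-5 (text: ~4e-5)
print(f"Omega_Z(z=0.5) ~ {omega_Z(0.5):.1e}")  # ~2e-4, as in the text
```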
The needle model is falsifiable in several ways. First, the needle
opacity spectrum is not perfectly flat, especially for small $d.$
Observations over a long wavelength span might reveal a
redshift-dependent systematic change in certain colors.
Next, the needles take some minimum time to form, then more time to
achieve a uniform cosmic distribution. Thus at high enough redshift
the dispersion in supernova brightnesses discussed in R98 should appear.
Moreover, at $z=1.5$ the difference
between the $\Omega = 0.2, \Omega_\Lambda =0$ model with dust and the
$\Omega = 0.24, \Omega_\Lambda = 0.76$ model sans dust is $\simeq 0.2$ mag,
which should eventually be observable.
I shall not attempt to address the question of galaxy counts here. As
commented in R98, grey dust would exacerbate the `problem' of
unexpectedly high galaxy counts at high-$z$, but the magnitude of
such an effect would depend upon the dust density field's redshift
evolution, and a full discussion of the galaxy count data as a
constraint on the model (requiring also an understanding of galaxy
evolution) is beyond the scope of this letter.
Galactic observations probably cannot disprove the model, since
needles with properties most different from those of Galactic dust
would be ejected with high efficiency. Moreover, dust with
needle-like characteristics may have been detected by COBE (Wright et
al. 1991; Reach et al. 1995; Dwek et al. 1997) as a minor `very cold'
component of Galactic dust. Such a component is best explained by
dust with a hitherto-unknown IR emission feature, or by fluffy/fractal
or needle dust (Wright 1993) and could represent a residual needle
component with about 0.02-0.4\% of the standard dust
mass.\footnote{The needles absorb about 5$\times$ as effectively (per
unit mass) in the optical where most Galactic radiation resides, and
the `very cold' component emits between 0.1\% and 2\% of the total
FIR dust luminosity (Reach et al. 1995).}
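The quoted mass range follows from the footnote's two numbers; the sketch below is just that arithmetic, with the factor of 5 in absorption efficiency and the 0.1--2\% emission range taken from the text.

```python
# If needles absorb ~5x as effectively per unit mass as standard dust,
# a needle mass fraction x of the dust absorbs (and re-emits) ~5x of the
# luminosity absorbed by dust, so x = f_emit / 5.
for f_emit in (0.001, 0.02):          # 0.1% and 2% of the FIR luminosity
    mass_fraction = f_emit / 5.0
    print(f"{f_emit:.1%} of FIR -> needle mass ~ {mass_fraction:.2%} of dust")
# -> 0.02% and 0.40%, the range quoted in the text
```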
On the other hand, the dust cannot escape from clusters, which have
much higher mass/light ratios, so needles formed {\em after} the
formation of a cluster should remain trapped within. Studies of
background quasar counts (Bogart \& Wagner 1973; Boyle et al. 1988;
Romani \& Maoz 1992), cooling flows (Hu 1992), and IR emission
(Stickel et al. 1998) of rich clusters indicate extinctions $A_V \sim
0.2 - 0.4$ mag and standard dust masses of $M^{dust}_{cl} \sim
10^{10}\ M_\odot.$ Denoting by $Z_{cl}, M_{cl}$, and $M^{gas}_{cl}$
the mean cluster metallicity, total mass and gas mass, we can estimate
the fraction of metals in dust $\chi_{cl}$ to be
$$
\chi_{cl} = {M_{cl} \over M^{gas}_{cl}}
{M^{dust}_{cl} \over M_{cl}}
Z_{cl}^{-1}
\simeq 10 \times 10^{-5} / 0.01 = 0.01,
$$
using $M_{cl} \sim 10^{15}\ M_{\odot}.$ Comparing this to the
$\chi_{gal} \la 1$ typical of our Galaxy would
indicate dust destruction efficiency of $\la 99\%$ in clusters. An
earlier calculation gave $\Omega_{needle} / \Omega_Z \ga 0.1$ for
the intergalactic needles. Assuming the calculated dust destruction,
this predicts $M^{needle}_{cl}/M^{dust}_{cl} \sim 0.1.$ The needles
are about five times as opaque in optical as standard dust, so this
gives an optical opacity ratio of $\sim 0.5$. If these estimates are
accurate, comparison of nearby cluster supernovae to nearby
non-cluster supernovae at fixed distance should reveal a mean
systematic difference of $A_V \ga 0.03 - 0.06$ in fluxes {\em after}
correction for reddening.\footnote{Similar arguments {\em might} apply
to elliptical galaxies, from which dust ejection is less efficient
than from spirals.} The Mt. Stromlo Abell cluster supernova search
(Reiss et al. 1998), currently underway, should make such an
analysis possible. Note that uncertainties in the needle opacity
relative to standard dust will not affect the cluster prediction which
(modulo the quantitative uncertainties) should hold unless clusters
destroy needles more efficiently than standard dust.
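The numbers entering this estimate are worth spelling out; the sketch below is the arithmetic only, using the fiducial values quoted above.

```python
# Cluster metals-in-dust fraction chi_cl from the displayed estimate.
M_cl   = 1e15    # total cluster mass, M_sun
M_gas  = 1e14    # ~10% of the cluster mass in gas
M_dust = 1e10    # standard dust mass from extinction / IR studies
Z_cl   = 0.01    # ~half-solar cluster metallicity

chi_cl = (M_cl / M_gas) * (M_dust / M_cl) / Z_cl
print(f"chi_cl ~ {chi_cl:.2f}")            # ~0.01

# Compared with chi_gal <~ 1 in the Galaxy, this implies that ~99% of
# the dust entering the intracluster medium has been destroyed.
destruction = 1 - chi_cl / 1.0
print(f"implied destruction efficiency ~ {destruction:.0%}")
```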
\section{Conclusions}
I have argued that the reduction of supernova fluxes at high redshift
could be caused by a uniform distribution of intergalactic dust.
Both theoretical arguments and observational evidence strongly suggest
that some dust should be ejected from galaxies. Dust with high
opacity (especially at long wavelengths where most of the luminosity
of high-redshift starburst galaxies resides) would be preferentially
ejected. But this is exactly the sort of dust which would both redden
less than standard dust, and require less dust mass to produce the
observed effect. This letter develops a specific model of
intergalactic dust composed of carbon needles--a theoretically
reasonable, even expected, form of carbon dust--with conservative
properties. The supernova data can be explained by a quantity of
carbon needles which is plausible in light of rough estimates of the
universal metal abundance.
Because the dust distribution is effectively uniform, it does not induce a
dispersion in the supernova magnitudes, and because it absorbs more
efficiently than standard dust, it does not require an unreasonable
mass. Finally, because the dust is created and ejected by high-$z$
galaxies, it does not overly obscure {\em very} high redshift galaxies
or quasars. Thus the key arguments given in R98 against `grey' dust
do not apply. The dust of the proposed model should, however, provide
independent signatures of its existence; one is a systematic
difference in fluxes between cluster and non-cluster supernovae which
may be detectable in ongoing surveys. Finally, the needle model is
only one specific model for intergalactic dust. Other possible `dust'
types are fractal dust (e.g. Wright 1987), platelets (e.g. Donn \&
Sears 1963; Bradley et al. 1983), hollow spheres (Layzer \& Hively 1973), or
hydrogen snowflakes.
The explanation of reduced supernova flux at high redshift described
in this letter depends upon the plausible but still speculative
assumption that the intergalactic dust distribution has significant
mass, and is dominated by grains with properties exemplified by those
of carbon needles. The probability that this is the case should be
weighed against the severity of the demand that the explanation
favored by Riess et al. and Perlmutter et al. places on a solution
of the vacuum energy (or cosmological constant) problem: the expected
value of the vacuum energy density at the end of the GUT era must be
reduced by some as yet unknown process, not to zero, but to a value
exactly one hundred orders of magnitude smaller.
\acknowledgements
This paper has benefited significantly from the
commentary of an anonymous referee. I thank David Layzer, George
Field, Alyssa Goodman, Bob Kirshner, Ned Wright and Saurabh Jha for
useful discussions.
\section{Note added in proof}
Two recent papers bear upon this Letter. Kochanek et al. (1998,
astro-ph/9811111) find that dust in early-type high $z$ galaxies
reddens {\em more} than `standard' dust. Needles may escape
ellipticals, but the lensing technique applied to clusters would be an
excellent test of the needle model.
Perlmutter et al. (1998, Ap. J., accepted) find no statistically
significant difference in mean reddening between low-$z$ and high-$z$
samples; however, the $B-V \simeq 0.01$ mag that my fiducial model
predicts fits easily within their 1-$\sigma$ errors.
\section{Introduction}
Weak interaction processes play a decisive role in the early stage of
the core collapse of a massive star \cite{Bethe78,Bethe90}. First,
electron capture on nuclei in the iron mass region, starting after the
core mass exceeds the appropriate Chandrasekhar mass limit, reduces
the electron pressure, thus accelerating the collapse, and lowers the
electron-to-baryon ratio, $Y_e$, thus shifting the distribution of
nuclei present in the core to more neutron-rich material. Second, many
of the nuclei present can also $\beta$ decay. While this process is
quite unimportant compared to electron capture for initial $Y_e$
values around 0.5, it becomes increasingly competative for
neutron-rich nuclei due to an increase in phase space related to
larger $Q_\beta$ values. However, $\beta$ decay on nuclei with masses
$A>60$ have not yet been considered in core collapse studies
\cite{Thielemann}. This is surprising since Gerry Brown pointed out
nearly a decade ago \cite{Brown89} that certain nuclei heavier than
$A=60$, like $^{63}$Co and $^{64}$Co, have very strong $\beta$-decay
matrix elements making it conceivable that they can actually compete
with electron capture. Brown argued that this might have quite
interesting consequences for the collapse. During this early stage of
the collapse, neutrinos produced in both electron capture and $\beta$
decay still leave the star. Therefore, a strong $\beta$-decay rate
will cool the star without lowering the $Y_e$ value. As a consequence,
the $Y_e$ value at the formation of the homologous core (after
neutrino trapping) might be larger than assumed. This results in a
smaller envelope, and less energy is required for the shock to travel
through the material.
Following Brown's suggestion, $\beta$ decay rates for nuclei in the
mass range $A=48-70$ were investigated \cite{Aufderheide}. These
studies were based on the same strategy and formalism as already
employed by the pioneering work in this field by Fuller, Fowler and
Newman (commonly abbreviated by FFN) \cite{FFN}. The important idea in
FFN was to recognize the role played by the Gamow-Teller resonance in
$\beta$ decay. Other than in the laboratory, $\beta$ decay rates under
stellar conditions are significantly increased due to thermal
population of the Gamow-Teller back resonance in the parent nucleus
(the GT back resonance are the states reached by the strong GT
transitions in the inverse process (electron capture) built on the
ground and excited states, see \cite{FFN,Aufderheide}) allowing for a
transition with a large nuclear matrix element and increased phase
space. Indeed, Fuller et al. concluded that the $\beta$ decay rates
under collapse conditions is dominated by the decay of the back
resonance. In a more recent work, Aufderheide et al. came to the same
conclusion. Inspired by the independent particle model, the authors of
Ref. \cite{Aufderheide} estimated the $\beta$ decay rates in a similar
fashion as the electron capture rates, and phenomenologically
parametrized the position and the strength of the back resonance. This
estimate was supplemented by an empirical contribution, placed at zero
excitation energy, which simulates low-lying transition strength
missed by the GT resonance.
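The sensitivity of such rates to where the back resonance sits is easy to quantify, since its thermal weight falls exponentially with excitation energy. The sketch below is illustrative only: the 2 MeV versus 4 MeV placements and the $J=1$ assignment are my example values, not numbers from the text.

```python
import math

kT = 8.617e-11 * 5e9          # kT in MeV at T_9 = 5  (k = 8.617e-11 MeV/K)

def weight(E_x, J=1):
    """Thermal weight (2J+1) exp(-E_x/kT) of a parent state at E_x (MeV)."""
    return (2 * J + 1) * math.exp(-E_x / kT)

w2, w4 = weight(2.0), weight(4.0)
print(f"kT = {kT:.2f} MeV; population ratio (2 MeV vs 4 MeV) = {w2 / w4:.0f}")
# A ~2 MeV shift in the back resonance changes its thermal population,
# and hence its contribution to the rate, by roughly two orders of magnitude.
```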
When extending the FFN rates to nuclei with $A>60$, Aufderheide et al.
found indeed that the $\beta$ decay rates are strong enough to balance
the electron capture rates for $Y_e \approx 0.42-0.46$. Nevertheless,
these results have never been explored in details in core collapse
calculations.
In recent years the parametrization of the electron capture and
$\beta$ decay rates as adopted by FFN and Aufderheide et al. have
become questionable due to experimental data
\cite{gtdata1,gtdata2,gtdata3,gtdata4,gtdata5} and have been
criticized on the basis of more elaborate theoretical models
\cite{Aufderheide96,Dean98,Langanke98a,Langanke98b}. We will show in
this paper that, although the previous weak interaction rates for core
collapse are systematically incorrect, the important observation that
electron and $\beta$ decay rates balance each other for a certain
range of $Y_e$ values is indeed correct. Our conclusions will be based
on large-scale shell model calculations for several key nuclei which,
according to Aufderheide et al. \cite{Aufderheide}, contribute most
significantly to the electron capture and $\beta$ decay rates at
various stages of the collapse. These shell model calculations
reproduce the measured GT strength distributions for nuclei in the
mass range $A=50-64$ very well \cite{Martinez99}. Furthermore modern
large-scale shell model calculations also agree with measured
half-lifes very well. Thus for the first time one has a tool in hand
which allows for a reliable calculation of presupernova electron
capture and $\beta$ decay rates. Modern shell model calculations come
in two varieties: large-scale diagonalization approaches
\cite{Caurier98} and shell model Monte Carlo (SMMC) techniques
\cite{Johnson,Koonin}. The latter can treat even larger model
spaces, but has limitations in its applicability to odd-A and odd-odd
nuclei at low temperatures, a restriction which does not apply to the
former. More importantly, the diagonalization approach allows for detailed
spectroscopy, while the SMMC model yields only an ``averaged'' GT
strength distribution which introduces some inaccuracy into the
calculation of the capture and decay rates.
We will consistently use the shell model diagonalization approach in
the following to study these rates. Due to the very large
m-scheme dimensions involved, the GT strength distributions have been
calculated in truncated model spaces which fulfill the Ikeda sum rule.
However, at the chosen level of truncation involving typically 10
million configurations or more, the GT strength distribution is
virtually converged. As residual interaction we adopted the recently
modified version of the KB3 interaction which corrects the slight
deficiencies of the KB3 interaction around the $N=28$ subshell
closure \cite{Ni56}. In fact the modified KB3 interaction i)
reproduces all measured GT strength distributions very well and ii)
describes the experimental level spectrum of the nuclei studied here
quite accurately \cite{Martinez99,Ni56}. As $0\hbar\omega$ shell
model calculations, i.e. calculations performed in one major shell,
overestimate the experimental GT strength by a universal factor
\cite{Wildenthal,Langanke95,Martinez96}, we have scaled our GT
strength distribution by this factor, $(0.74)^2$.
Large-scale shell model calculations of the electron capture rates for
key nuclei in the presupernova collapse have been reported already
elsewhere \cite{Langanke98a,Langanke98b}; see also the SMMC results in
\cite{Dean98}. In these studies it became apparent that the
phenomenological parametrization of the GT contribution to the
electron capture and $\beta$ decay rates, as introduced in FFN
\cite{FFN} and subsequently used by Aufderheide et al.
\cite{Aufderheide}, is systematically incorrect. These authors have
placed the centroid of the GT strength distribution at too high an
excitation energy in the daughter nucleus for electron capture on
even-even nuclei, while for capture on odd-A and odd-odd nuclei they
underestimated the energy of the GT centroid noticeably. For capture
on even-even nuclei this has comparably little effect as FFN
overcompensate the misplacement of the GT centroid by a too large
empirical contribution at zero excitation energy. Typically the FFN
rates are roughly a factor of 5 larger than the shell model rates for
capture on even-even nuclei like $^{56,58}$Ni or $^{58}$Fe. For
capture on odd-odd and odd-A nuclei the misplacement of the GT
centroid makes the FFN rates about 1-2 orders of magnitude too large
compared to the shell model rates. As a consequence, FFN and
Aufderheide et al. have noticeably overestimated the electron capture
rates for the early stage of the supernova collapse.
Which consequences do the misplacement of the GT centroids have for
the competing $\beta$ decays? In odd-A and even-even nuclei (the
daughters of electron capture on odd-odd nuclei), experimental data
and shell model studies place the back-resonance at higher excitation
energies than assumed by FFN and Aufderheide et al.
\cite{Aufderheide}. Correspondingly, its population becomes less
likely at the temperatures available during the early stage of the
collapse ($T_9 \approx 5$, where $T_9$ measures the temperature in
$10^9$ K) and hence the contribution of the back-resonance to the
$\beta$ decay rates for even-even and odd-A nuclei decreases. According to
Aufderheide et al. \cite{Aufderheide}, some of the most important
$\beta$ decay nuclei (defined by the product of abundance and $\beta$
decay rate and listed in Tables 18-22 in \cite{Aufderheide}) are
odd-odd nuclei. For these nuclei, all available data, stemming from
(n,p) reaction cross section measurements on even-even nuclei like
$^{54,56,58}$Fe or $^{58,60,62,64}$Ni, and all shell model
calculations indicate that the back-resonance resides actually at
lower excitation energies than previously parametrized. Consequently,
the contribution of the back-resonance to the $\beta$ decay rate of
odd-odd parent nuclei should be larger than assumed in the
compilations. We note that this general expectation has already been
conjectured in Ref. \cite{Aufderheide96} on the basis of (n,p) data
available at that time. These authors have attempted to fit the data
within a strongly truncated shell model calculation which then in turn
has been used to predict a corresponding $\beta$ decay rate. This
procedure is viewed as rather uncertain as i) the coarse energy
resolution of the data made its convolution into a $\beta$ decay rate
imprecise and ii) the shell model truncation level was too severe
to reliably estimate the contribution of states other than
the back-resonance to the decay rate. These shortcomings can be
overcome in recent state-of-the-art large scale shell model
calculations. We have calculated the $\beta$ decay rates for several
nuclei under relevant core collapse conditions ($\rho_7=10-1000$,
where $\rho_7$ measures the density in $10^7$ g/cm$^3$ and
temperatures $T_9=1-10$). These nuclei include even-even ones
($^{52}$Ti,$^{54}$Cr, $^{56,58,60}$Fe), odd-A nuclei ($^{59}$Mn,
$^{59,61}$Fe, $^{61,63}$Co) and odd-odd nuclei
($^{50}$Sc,$^{54,56}$Mn, $^{58,60}$Co). The selection has been made
to include those nuclei which have been ranked as most important for
core collapse simulations by Aufderheide et al. \cite{Aufderheide}. In
fact, with the rates of Ref. \cite{Aufderheide} these 15 nuclei
contribute between $65 \%$ and $86 \%$ to the change of $Y_e$ due to
$\beta$ decay in the range $Y_e=0.44-0.47$.
Although the formula for the presupernova $\beta$ decay rate
$\lambda_{\rm \beta}$ is well known (e.g. \cite{FFN,Aufderheide}), we
have chosen to quote the basic result here as this allows for the
easiest discussion of the improvement incorporated in our calculation
compared to previous work. Thus,
\end{multicols}
\begin{equation}
\lambda_{\rm{\beta}}=\frac{\ln 2}{6163 \rm{sec}}
\sum_{ij} \frac{(2J_i+1) \exp{[-E_i/kT]}}{G}
S^{ij}_{\rm{GT}}
\frac{c^3}{(m_e c^2)^5}\int_0^{\cal L} dp\, p^2 (Q_{ij}-E_e)^2
\frac{F(Z+1,E_e)}{1+\exp\left[(\mu_e-E_e)/kT\right]}\;,
\end{equation}
\begin{multicols}{2}
\noindent
where $E_e$, $p$, and $\mu_e$ are the electron energy, momentum, and
chemical potential, and ${\cal L}=(Q_{ij}^2 - m_e^2 c^4)^{1/2}/c$;
$Q_{ij}=E_i-E_j$ is the nuclear energy difference between the initial
and final states, while $S^{ij}_{\rm {GT}}$ is their GT transition
strength. $Z$ is the charge number of the parent nucleus, $G$ is the
partition function, $G=\sum_i (2J_i+1) \exp{[-E_i/kT]}$, while $F$ is
the Fermi function which accounts for the distortion of the electron's
wave function due to the Coulomb field of the nucleus. The values for
the chemical potential are taken from \cite{Dean98}.
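The structure of this rate can be illustrated with a short numerical sketch of the phase-space integral for a single transition, including the Pauli blocking by the dense electron gas. This is my own illustration, not the production code: it works in units of $m_e c^2 = 1$, sets the Fermi function $F = 1$ for simplicity, and uses an illustrative $Q$ value and plasma conditions.

```python
import math

def phase_space(Q, mu_e, kT, n=4000):
    """Integral of p^2 (Q - E_e)^2 / (1 + exp[(mu_e - E_e)/kT]) dp,
    with p, E_e, Q, mu_e, kT all in units of m_e c^2 = 1 and F = 1."""
    p_max = math.sqrt(Q * Q - 1.0)
    dp = p_max / n
    total = 0.0
    for i in range(n):
        p = (i + 0.5) * dp
        E = math.sqrt(p * p + 1.0)
        blocking = 1.0 / (1.0 + math.exp((mu_e - E) / kT))  # 1 - f_FD(E)
        total += p * p * (Q - E) ** 2 * blocking
    return total * dp

Q = 10.0                                            # ~5 MeV transition
f_free    = phase_space(Q, mu_e=-100.0, kT=0.5)     # no electron gas
f_blocked = phase_space(Q, mu_e=4.0, kT=0.5)        # degenerate electrons
print(f"free: {f_free:.1f}, blocked: {f_blocked:.1f}")
# Filling the low-energy electron states suppresses the decay.
```

As the density (and hence $\mu_e$) rises, the blocking factor removes the low-energy part of the final-state electron spectrum, which is why $\beta$ decay is eventually quenched during the collapse.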
To estimate the rates at finite temperatures, the compilations
employed the so-called Brink hypothesis
\cite{Aufderheide,Aufderheide91} which assumes that the GT strength
distribution on excited states is the same as for the ground state,
only shifted by the excitation energy of the state. We have not used
this approximation, but have performed shell model calculations for
the individual transitions. Our sum over initial states includes i)
explicitly the ground state and several excited states in the parent
nucleus (usually at least all levels below 1 MeV excitation energy)
and ii) all back-resonances which can be reached from the levels in
the daughter nucleus below 1 MeV excitation energy. As these
back-resonances also include parent states below 1 MeV, special care
has been taken in avoiding double-counting. The partition function is
consistently summed over the same initial states.
Here a word of caution is in order. We have calculated the GT strength
distributions using 33 Lanczos iterations in all allowed angular
momentum and isospin channels. This is usually sufficient to converge
in the states at excitation energies below $E= 3$ MeV. At higher
excitation energies, $E>3$ MeV, the calculated GT strengths represent
centroids of strengths, which in reality are split over many states.
While this does not introduce uncertainties in the summing over the GT
strengths (the numerator in (1)), it might be inconsistent for the
calculation of the partition function. In practice, however, this is
not a problem, since at the rather low temperatures of concern here the
partition function is determined by those states which have already
converged in our model space. Nevertheless there might be states
outside of our model space (intruder states) which will be missed in
our evaluation of the $\beta$ decay rates. But their statistical
weight in both numerator and denominator in the rate equation (1) is
small. Although our calculations agree well with the available
experimental information (excitation energies and GT transition
strengths), we have replaced the shell model results by data whenever
available.
In Fig. 1 we compare our shell model $\beta$ decay rates with those of
FFN for selected nuclei representing the three distinct classes:
even-even ($^{54}$Cr, $^{56,60}$Fe), odd-A ($^{59}$Mn, $^{57,59}$Fe)
and odd-odd ($^{54,56}$Mn, $^{58}$Co). We note again that $\beta$
decay of the nuclei studied here is important at temperatures $T_9
\leq 5$ \cite{Aufderheide}. For the odd-odd nuclei we calculate rates
similar to those of FFN. This approximate agreement is, however,
somewhat fortuitous. In FFN the misplacement of the GT back-resonances
has been compensated by too large values for the total GT strengths
(FFN adopted the unquenched single particle estimate) and the
low-lying strengths. At higher temperatures ($T_9>5$), the FFN rates
for odd-odd nuclei are larger than our shell model rates. For odd-A
and even-even nuclei our shell model rates are significantly smaller
than the FFN rates as the back resonance, for $T_9 < 5$, is less
populated thermally than in the FFN parametrization. Using the FFN
rates, even-even nuclei were found to be unimportant for $\beta$ decay
in the core collapse; our lower rates make them even less important.
The situation is somewhat different for the odd-A nuclei
($^{57,59}$Fe, $^{59}$Mn), which have been identified as important in
Ref. \cite{Aufderheide} adopting the FFN rates. Aufderheide et al. have
added several odd-A nuclei with masses $A>60$ (which are not
calculated in FFN) to the list of those nuclei which significantly
change the $Y_e$ value during the collapse by $\beta$ decays. These
nuclei include $^{61}$Fe and $^{61,63}$Co; we will show below that
their rates have also been overestimated significantly in Ref.
\cite{Aufderheide}. Our shell model rates indicate that the
importance of odd-A nuclei is significantly overestimated when the
previously compiled values are adopted.
In Ref. \cite{Aufderheide96} $\beta$ decay rates for several nuclei
have been estimated in strongly truncated shell model calculations, in
which these authors allowed at most one nucleon to be excited from
the $f_{7/2}$ shell to the rest of the pf-shell in the daughter
nucleus, and fitted the single particle energy spectra to reproduce
measured (n,p) data for $^{54,56}$Fe, $^{58}$Ni and $^{59}$Co; the
(n,p) data constrain the back-resonance transition to the ground
states in the $\beta$ decays of $^{54,56}$Mn, $^{58}$Co and $^{59}$Fe.
Our shell model rates are compared to the estimates of Ref.
\cite{Aufderheide96} in Fig. 2. For the 3 odd-odd nuclei, the
agreement is usually better than a factor of 2. This is due to the fact
that these rates are dominated by the back-resonances, i.e. the (n,p)
data of the daughter nucleus, which are reproduced in our large-scale
shell model approach and have been fitted in Ref.
\cite{Aufderheide96}. For the odd-A nucleus $^{59}$Fe our $\beta$
decay rate is about an order of magnitude lower than the estimate of
Ref. \cite{Aufderheide96} at $T_9=2$, while the rates agree for $T_9
>6$, where the rate is dominated by the back-resonances. At lower
temperatures, $\beta$ decays of low-lying states become important;
these might be overestimated in the truncated calculation.
What might the revised $\beta$ decay rates mean for the core collapse?
To investigate this question we study the change of the
electron-to-baryon ratio, ${\dot Y}_e$, along a stellar trajectory.
Following Ref. \cite{Aufderheide}, we define
\begin{equation}
Y_e=\sum_k \frac{Z_k}{A_k} X_k
\end{equation}
where the sum runs over all nuclear species present in the core. $Z$,
$A$, and $X$ are the charge, mass number, and mass fraction of the
nucleus, respectively. The mass fraction is given by nuclear
statistical equilibrium \cite{Aufderheide}; we will use the values as
given in Tables 14-24 of Ref. \cite{Aufderheide}. Noting that $\beta$
decay ($\beta$) increases the charge by one unit, while electron
capture (ec) reduces it by one unit, we have
\begin{equation}
{\dot Y}_e^{ec(\beta)} = \frac{dY_e^{ec(\beta)}}{dt}=-(+) \sum_k
\frac{X_k}{A_k} \lambda_k^{ec(\beta)}
\end{equation}
where $\lambda_k^{ec}$ and $\lambda_k^{\beta}$ are the electron
capture and $\beta$ decay rates of nucleus $k$. For several key nuclei
we have calculated these rates within large-scale shell model studies.
Some of the results are listed in Tables 1 and 2, where they are also
compared to the FFN rates and the ones of Ref. \cite{Aufderheide}.
This comparison also includes the $\beta$ decay rates for $^{61}$Fe
and $^{61,63}$Co, which, according to \cite{Aufderheide} and as
suggested earlier by Brown \cite{Brown89,Aufderheide90}, are important when
the stellar trajectory reaches electron-to-baryon values $Y_e
=0.44-0.46$. Our shell model rates agree for $^{63}$Co with the rate
of Aufderheide et al., but are smaller than the estimates of these
authors by factors 2 and 5 for $^{61}$Fe and $^{61}$Co, respectively.
We note that the strong ground state decay of $^{63}$Co contributes
about $15\%$ to the total rate at the conditions listed in Table 2.
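Schematically, Eqs.~(2) and (3) amount to abundance-weighted sums over the rate tables. The sketch below (Python; the species data are invented placeholders, not the NSE values of Tables 14-24 of Ref.~\cite{Aufderheide}) illustrates the bookkeeping:

```python
# Hedged sketch of Eqs. (2)-(3): Y_e and its rates of change from a
# table of (Z, A, mass fraction X_k, lambda_ec, lambda_beta).
# The numbers below are invented placeholders, not NSE abundances.
species = [
    (26, 56, 0.30, 1.0e-4, 1.0e-6),
    (27, 55, 0.10, 5.0e-4, 1.0e-6),
    (25, 56, 0.05, 1.0e-5, 2.0e-4),
]

Y_e = sum(Z / A * X for Z, A, X, _, _ in species)                 # Eq. (2)
dYe_ec = -sum(X / A * lam_ec for _, A, X, lam_ec, _ in species)   # Eq. (3), ec
dYe_beta = +sum(X / A * lam_b for _, A, X, _, lam_b in species)   # Eq. (3), beta
print(dYe_ec < 0.0 < dYe_beta)
```

Electron capture drives $Y_e$ down while $\beta$ decay drives it up, which is why the two can balance during the collapse.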
Some of the electron capture rates are taken from
\cite{Langanke98a,Langanke98b}, while several other shell model rates
are presented here for the first time (e.g. for $^{54,56}$Fe,
$^{58}$Ni and the odd-A nuclei). Although the nuclei, for which
reliable shell model rates are now available, include the dominant
ones at the various stages of the early collapse (according to the
ratings in Ref. \cite{Aufderheide}), there are up to 250 nuclei present in NSE
at higher densities \cite{Aufderheide}. Although we are currently
working on a revised compilation of $\beta$ decay and electron
capture rates for nuclei in the mass range $A=45-65$, its completion
is computer-intensive and tedious. Nevertheless some important
conclusions can already be drawn from the currently available data.
We first follow the stellar trajectory as given in Ref.
\cite{Aufderheide}, although some comments about this choice are given
below. We estimated ${\dot Y}_e^{ec}$ and ${\dot Y}_e^{\beta}$
separately on the basis of the 25 most important nuclei listed in
Tables 14-24 in \cite{Aufderheide}. We used shell model rates for the
nuclei listed in Table 1 and 2. For the other nuclei we scaled the
FFN rates using the following scheme which corrects for the systematic
misplacement of the GT centroid and is derived by the comparison of
FFN and shell model rates for the nuclei listed in Tables 1 and 2.
The FFN electron capture rates have been multiplied by 0.2
(even-even), 0.1 (odd-A) and 0.04 (odd-odd), while the FFN $\beta$
decay rates have been scaled by 0.05 (even-even), 0.025 (odd-A) and
1.5 (odd-odd). The results for ${\dot Y}_e^{ec,\beta}$ are plotted in
Fig. 3, where they are also compared to the values obtained for the
FFN rates. One observes that the shell model rates reduce ${\dot
Y}_e^{ec}$ significantly, by more than an order of magnitude for
$Y_e<0.47$. This is due to the fact that, except for $^{56}$Ni, all
shell model electron capture rates are smaller than the
recommendations given in the FFN and Aufderheide et al. compilations
\cite{FFN,Aufderheide}. The reduction is particularly drastic for capture on
odd-odd nuclei, which, according to these compilations, dominate ${\dot
Y}_e^{ec}$ at densities $\rho_7>10$. The shell model $\beta$ decay
rates also reduce ${\dot Y}_e^{\beta}$, however, by a smaller amount
than for electron capture. This is mainly caused by the fact that the
shell model $\beta$ decay rates of odd-odd nuclei are about the same
as the FFN rates or even slightly larger, for reasons discussed above.
It is interesting to note that FFN typically give higher $\beta$-decay
rates for odd-A nuclei than Aufderheide et al. \cite{Aufderheide},
while it is vice versa for odd-odd nuclei. As a consequence ${\dot
Y_e^{\beta}}$ is dominated by odd-A nuclei for $Y_e<0.46$ if the FFN
rates are used, while odd-odd nuclei contribute significantly if the
rates of \cite{Aufderheide} are adopted. In either case, both
compilations yield rather similar profiles for ${\dot Y}_e^{ec,\beta}$
(see Fig. 14 in \cite{Aufderheide}). The important feature in Fig. 3
is the fact that the $\beta$ decay rates are larger than the electron
capture rates for $Y_e=0.42-0.455$, which is also already true for the
FFN rates \cite{Aufderheide}.
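The scaling scheme described above can be stated compactly. In the following sketch (Python; the classification helper is our own construction, while the numerical factors are those quoted in the text) a nucleus is classified by the parities of $Z$ and $N$:

```python
# Scaling factors applied to the FFN rates for nuclei without shell
# model calculations (factors as quoted in the text).
EC_SCALE = {"even-even": 0.2, "odd-A": 0.1, "odd-odd": 0.04}
BETA_SCALE = {"even-even": 0.05, "odd-A": 0.025, "odd-odd": 1.5}

def pairing_class(Z, N):
    """Classify a nucleus by the parities of Z and N."""
    return ("even-even", "odd-A", "odd-odd")[Z % 2 + N % 2]

def scaled_rates(Z, N, ffn_ec, ffn_beta):
    cls = pairing_class(Z, N)
    return ffn_ec * EC_SCALE[cls], ffn_beta * BETA_SCALE[cls]

# e.g. 59Mn (Z=25, N=34) is odd-A: its FFN beta decay rate is scaled
# down by 0.025, while an odd-odd FFN beta rate is scaled up by 1.5.
print(pairing_class(25, 34))
```

Note that $\beta$ decay rates of odd-odd nuclei are the only ones scaled *up*, consistent with the comparison to the shell model results discussed above.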
So far we have used the same stellar trajectory as in Ref.
\cite{Aufderheide}. This allowed a comparison with the conclusions
obtained in that reference. However, this assumption is inconsistent,
and, in fact, was already inconsistent in \cite{Aufderheide}. The
chosen stellar trajectory is based on runs performed with the stellar
evolution code KEPLER \cite{Kepler} which uses the FFN electron
capture rates, but quite outdated $\beta$ decay rates
\cite{Aufderheide94}, following the old belief that $\beta$ decay
rates are unimportant \cite{Aufderheide94}. The outdated $\beta$
decay rates were derived basically from a statistical model approach
\cite{Mazurek} and are orders of magnitude too small \cite{Brown89}.
What are the consequences and will electron capture and $\beta$ decay
rate also balance in a consistent model? At the beginning of the
collapse, electron capture is significantly faster than $\beta$ decay
(see Fig. 3). The shell model rates make $^{56}$Ni the most important
contributor, but it cannot quite compensate for the reduction of the
$^{55}$Co rate. Thus, at $Y_e=0.485$ the total electron capture rate
${\dot Y}_e^{ec}$ drops slightly. This reduction is more severe for
smaller $Y_e$ values, until at $Y_e=0.46$ electron capture and $\beta$
decay balance. The consequence is that, due to the slower electron
capture, the star radiates less energy away in the form of neutrinos until
$Y_e=0.46$ is reached. Thus one expects that in the early stage the
stellar trajectory is, for a given density, at a higher temperature.
This, of course, increases both the $\beta$ decay and electron capture
rates. Importantly both rates have roughly the same temperature
dependence in the relevant temperature range: typically electron
capture rates are enhanced by an order of magnitude if the temperature
rises from $T_9=4$ to $T_9=6$. But this increase is the same order of
magnitude by which the $\beta$ decay rates grow in the same
temperature interval. Consequently the two rates will also be balanced
at around $Y_e \approx 0.46$ if a consistent stellar trajectory is
used.
As stated above, the dominance of $\beta$ decay over electron capture
during a certain stage of the core collapse of a massive star has been
suggested or noted before
\cite{Brown89,Aufderheide,Aufderheide94,Aufderheide96}. However,
previous argumentation has been based on rates for these two processes
which had been empirically and intuitively parametrized, rather than
derived from a reliable many-body model. Moreover, it was shown in
recent years that the assumed parametrizations, mainly with respect to
the energy of the Gamow-Teller centroid, were systematically
incorrect. Shell model calculations are now at hand which allow, for
the first time, the reliable calculation of these rates under stellar
conditions. Given the fact that the large-scale shell model studies
reproduce all important ingredients (spectra, half-lives, GT strength
distributions) very well, the shell model rates are rather reliable.
We stress the important point that the shell model $\beta$ decay rates
are larger than the electron capture rates for $Y_e \approx
0.42-0.455$. This might have important consequences for the core
collapse. A first investigation into these consequences has been
performed by Aufderheide et al. \cite{Aufderheide94}, however, using
the FFN values for both rates. They find that the competition of
$\beta$ decay and electron capture leads to cooler cores and larger
$Y_e$ values at the formation of the homologous core. These results
provide strong motivation to derive a complete set of shell
model rates and then use them in core collapse calculations.
\acknowledgements
This work was supported in part by the Danish Research Council and
through grant DE-FG02-96ER40963 from the U.S. Department of Energy.
Oak Ridge National Laboratory is managed by Lockheed Martin Energy
Research Corp. for the U.S. Department of Energy under contract number
DE-AC05-96OR22464. Grants of computational resources were provided
by the Center for Advanced Computational Research at Caltech. KL and
GMP thank Michael Strayer and the Oak Ridge Physics Division for their
kind hospitality, during which parts of the manuscript were
written.
\section{Introduction}
At the Rutherford Appleton Laboratory (RAL) the KARMEN collaboration
is studying neutrino-nuclear reactions, induced from the decay
products of positive pions, which are produced and stopped in the
proton beam dump. In 1995 KARMEN for the first time
reported~\cite{armbruster:1995} an anomaly in the time distribution of
single-prong events in the time interval corresponding to muon
decay. Even with a much improved active detector shielding the anomaly
has persisted in new KARMEN data~\cite{zeitnitz:1998}.
This anomaly has been suggested to originate from the observation of a
hitherto unknown weakly interacting neutral and massive fermion,
called \ensuremath{\mathrm{x}}, from a rare pion decay process $\ensuremath{\mathrm{\pi^+}}\to\ensuremath{\mathrm{\mu^+}}\ensuremath{\mathrm{x}}$. After a
mean flight path of $\rm 17.5\,m$, \ensuremath{\mathrm{x}}\ is registered in the KARMEN
calorimeter at $\rm t_{TOF}=(3.60\pm0.25)\,\mu{}s$ after beam on target
via its decay, resulting in visible energies of typically $\rm
T_{vis}=11-35\,MeV$. The observed velocity and the two-body kinematics
of the assumed pion decay branch lead to a mass $\rm
m_\ensuremath{\mathrm{x}}=33.9\,MeV/c^2$, extremely close to the kinematical limit.
The hypothetical decay $\ensuremath{\mathrm{\pi^+}}\to\ensuremath{\mathrm{\mu^+}}\ensuremath{\mathrm{x}}$ has been searched for at PSI
in a series of experiments using magnetic spectrometers by studying
muons from pion decay in flight~\cite{bilger:1995:plb, daum:1995,
daum:1998}, the latest measurement resulting in an upper limit for
the branching ratio of $\rm BR(\ensuremath{\mathrm{\pi^+}}\to\ensuremath{\mathrm{\mu^+}}\ensuremath{\mathrm{x}})<1.2\cdot10^{-8}$
(95\% C.L.) \cite{daum:1998}. Combined with theoretical constraints
which assume no new weak interaction~\cite{barger:1995:plb:352} this
result rules out the existence of this rare pion decay branch if \ensuremath{\mathrm{x}}\
is an isodoublet neutrino. However, if \ensuremath{\mathrm{x}}\ is mainly isosinglet
(sterile), the branching ratio can be considerably
lower~\cite{barger:1995:plb:356}. From the number of observed \ensuremath{\mathrm{x}}\
events in comparison with the total number of \ensuremath{\mathrm{\pi^+}}\ decays the KARMEN
collaboration gives a lower limit for the branching ratio of
$10^{-16}$.
Very recently Gninenko and Krasnikov have
proposed~\cite{gninenko:1998} that the observed time anomaly can also
be explained by an exotic \ensuremath{\mathrm{\mu}}\ decay branch $\ensuremath{\mathrm{\mu^+}}\to\ensuremath{\mathrm{e}^+}\ensuremath{\mathrm{X}}$
resulting in the production of a new, weakly interacting neutral boson
with mass $\rm m_\ensuremath{\mathrm{X}}=103.9\,MeV/c^2$. They show that a second
exponential in the KARMEN time distribution with time constant equal
to the muon lifetime and shifted by the flight time of the
\ensuremath{\mathrm{X}}-particle $\rm t_{TOF}=3.60\,\mu{}s$ gives an acceptable fit to the
KARMEN data. Considering three possible \ensuremath{\mathrm{X}}-boson phenomenologies,
they predict branching ratios for $\ensuremath{\mathrm{\mu^+}}\to\ensuremath{\mathrm{e}^+}\ensuremath{\mathrm{X}}$ in the order of
$10^{-2}$, if \ensuremath{\mathrm{X}}\ is a scalar particle; $10^{-5}$, if \ensuremath{\mathrm{X}}\ decays via
a hypothetical virtual charged lepton; and $10^{-13}$, if \ensuremath{\mathrm{X}}\ decays
via two additional hypothetical neutral scalar bosons.
In this paper we present a direct experimental search for the \ensuremath{\mathrm{X}}\
particle by studying the low energy part of the Michel spectrum
looking for a peak from mono-energetic positrons with energy $\rm
T_\ensuremath{\mathrm{e}}=(m_\ensuremath{\mathrm{\mu}}^2+m_\ensuremath{\mathrm{e}}^2-m_\ensuremath{\mathrm{X}}^2)/(2m_\ensuremath{\mathrm{\mu}})-m_\ensuremath{\mathrm{e}}=1.23\,MeV$
resulting from the two-body decay $\ensuremath{\mathrm{\mu^+}}\to\ensuremath{\mathrm{e}^+}\ensuremath{\mathrm{X}}$.
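The quoted positron energy follows directly from two-body decay kinematics; as a quick numerical check (Python; the muon and electron masses are standard values assumed here, not taken from the paper):

```python
# Check of the two-body kinematics: kinetic energy of the positron in
# mu+ -> e+ X for m_X = 103.9 MeV/c^2 (all masses in MeV/c^2).
m_mu, m_e, m_X = 105.658, 0.511, 103.9
T_e = (m_mu**2 + m_e**2 - m_X**2) / (2.0 * m_mu) - m_e
print(round(T_e, 2))  # -> 1.23
```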
In the past, searches for exotic two-body \ensuremath{\mathrm{\mu}}\ decay modes have
already been performed~\cite{bryman:1986:prl} motivated by predictions
about the existence of light, weakly interacting bosons like axions,
majorons, Higgs particles, familons and Goldstone bosons resulting in
upper limits for the branching ratio of approximately $3\cdot10^{-4}$
(90\% C.L.). However, these searches are not sensitive to the
suggested \ensuremath{\mathrm{X}}\ boson with $\rm m_\ensuremath{\mathrm{X}}=103.9\,MeV/c^2$ since the lowest
positron energy region studied was between 1.6 and 6.8\,MeV,
corresponding to the \ensuremath{\mathrm{X}}\ mass region 103.5 to $\rm 98.3\,MeV/c^2$.
\section{The Experiment}
The basic idea is to stop a $\mu^+$ beam inside a germanium detector.
The low energy decay positrons of interest also deposit their entire
kinetic energy in the detector volume. For a sizeable fraction of
events the subsequent annihilation radiation does not interact with
the detector thus preserving the positron energy information.
This experiment has been performed at the $\rm \mu{}E4$ channel at PSI
(see Fig.~\ref{fig:setup}). The beam line is optimized for intense
polarized muon beams in the momentum range between 30 and 100\,MeV/c
with very low pion and positron contamination. Pions from the
production target are collected at an angle of $90^\circ$ relative to
the primary proton beam and are injected into a long 5\,T
superconducting solenoid in which they can decay. The last part of the
beam line is the muon extraction section which allows the selection of
a central muon momentum different from that of the injected pions.
The detector setup consists of a large ($\rm 120\times200\,mm^2$)
2\,mm thick plastic scintillator counter S1 followed by a 35\,mm
diameter hole in a 10\,cm thick lead shielding wall and a small ($\rm
20\times20\,mm^2$) 1\,mm thick plastic scintillator counter S2
directly in front of a 9\,mm thick planar high purity germanium (HPGe)
detector with an area of $\rm 1900\,mm^2$. In addition, we have placed
a 127\,mm (5 inch) diameter, 127\,mm thick NaI detector shielded
against the \ensuremath{\mathrm{\mu}}-flux adjacent to the HPGe for detecting 511\,keV
$\gamma$ rays from positron annihilation.
\begin{figure}
\centerline{\epsfig{file=mue4_kanal_detector.eps,width=0.80 \textwidth}}
\caption[]{Schematic layout of the experimental setup. The $\rm
\ensuremath{\mathrm{\mu}}{}E4$ low energy \ensuremath{\mathrm{\mu}}\ channel at PSI is shown in the left
part of the figure together with a sketch of the detector setup,
which is shown in more detail in the right part of the figure.}
\label{fig:setup}
\end{figure}
The coincidence $\rm S1\times{}S2\times{}HPGe$ was used as a trigger
which generated --- in addition to a prompt gate --- a delayed gate
$\rm 2.2-7.2\,\mu{}s$ after the prompt muon signal for the expected
decay events. During the time period for the delayed gate, S1 was
used as a veto detector to discriminate against further beam
particles. Timing and energy information from the detectors, utilizing
several different methods for signal discrimination, amplification,
shaping and digitization were recorded for both prompt and delayed
signals using the MIDAS data acquisition system~\cite{midas:1998}.
For the energy calibration of signals occurring during the prompt gate,
\ensuremath{\mathrm{\gamma}}\ rays from $\rm ^{22}Na$ and $\rm ^{60}Co$ sources were used. In
order to derive the energy information from the HPGe detector signal,
both spectroscopy amplifiers and peak-sensitive ADCs as well as a
timing filter amplifier (TFA) connected to a charge sensitive QDC were
employed. In addition, sample signals from the HPGe detector, both
before and after amplification, were recorded and stored with a
digital oscilloscope. It turned out that every spectroscopy amplifier
available during the course of the experiment showed a significantly
varying baseline shift for a few microseconds following a prompt
signal. The variations of the baseline level just after the prompt
signal were due to fluctuations in time for the onset of the baseline
restoration circuitry. Thus, for spectroscopy amplifiers, a
sufficiently accurate energy calibration for the delayed signal was
not possible.
The TFA branch did not have such baseline problems; however, the energy
resolution for the delayed signal in this branch is only 100\,keV
FWHM. A short shaping time of $\rm 0.25\,\mu$s and low amplification
to avoid saturation from the high-amplitude prompt signal had to be
used to be ready in time for the delayed pulse.
During 12 hours of data taking $1.3\cdot10^7$ events were recorded on
tape. Saturation of the HPGe pre-amplifier at a singles rate of
$5-6\cdot10^3\,\rm s^{-1}$ limited the event rate.
\section{Results}
The energy deposition of the stopped muons in the HPGe detector is
$\rm 11.3\!\pm\!0.7\,MeV$ (see
Fig.~\ref{fig:muex_hpge_delayed_tdc_prompt_adc}). The cut on the
energy of the prompt signal is $9.9 - 12.7$\,MeV. The delayed signal
has to occur within the time interval of $\rm 3.4-7.2\,\mu{}s$ after
the prompt signal. The time distribution (see
Fig.~\ref{fig:muex_hpge_delayed_tdc_prompt_adc}) nicely shows the
expected exponential shape with $\rm \tau=2.21\pm0.02\,\mu{}s$. For
shorter times the tail of the prompt signal still causes a varying
effective discriminator threshold thus the TDC spectrum deviates from
an exponential shape. The information from the NaI detector is used to
check the consistency of the analysis, but is not used for the
determination of the branching ratio.
\begin{figure}
\begin{center}
\epsfig{file=muex_hpge_prompt_signal_energy_2.ps,width=0.45 \textwidth,bbllx=32,bblly=160,bburx=560,bbury=648}
\epsfig{file=muex_hpge_delayed_tdc_2.ps,width=0.45 \textwidth,bbllx=0,bblly=160,bburx=528,bbury=648}
\end{center}
\caption[]{Left: Energy spectrum of prompt signals resulting from
muons stopping in the HPGe detector with an additional constraint
requiring the presence of an afterpulse arriving during the
delayed gate. Right: Spectrum for the time difference between
delayed and prompt signals. The time constant $\tau=2.21\pm0.02\mu
s$ of the exponential shape is in very good agreement with the
muon lifetime.}
\label{fig:muex_hpge_delayed_tdc_prompt_adc}
\end{figure}
After energy and time cuts $1.32\cdot10^6$ events remain.
Accounting for high energy positrons from muon decay causing a signal
in the veto counter S1, a 3\% correction results in $1.36\cdot10^6$
good muon decays for normalization.
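The normalization arithmetic is a simple efficiency correction (hedged sketch; the 3\% veto fraction is the number quoted above):

```python
# Normalization: events surviving the energy and time cuts, corrected
# for the ~3% of good decays vetoed because the high energy decay
# positron fired the beam counter S1 (fraction as quoted in the text).
n_after_cuts = 1.32e6
veto_fraction = 0.03
n_good_decays = n_after_cuts / (1.0 - veto_fraction)
print(round(n_good_decays / 1e6, 2))  # -> 1.36
```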
GEANT \cite{geant:1998} based Monte Carlo studies have provided an
understanding of the shape of the delayed signal energy spectrum (see
inset in Fig.~\ref{fig:muex_branching}). The two peaks are due to an
asymmetric \ensuremath{\mathrm{\mu}}\ stop distribution with respect to the symmetry plane
perpendicular to the beam axis of the cylindrical HPGe detector
resulting in different energy distributions for Michel positrons
emitted in the backward and forward hemispheres of the detector,
respectively.
The interaction of the annihilation \ensuremath{\mathrm{\gamma}}\ rays with the detector
has also been studied. For positrons in the considered energy range
the double escape probability is 40-44\% (no 511\,keV \ensuremath{\mathrm{\gamma}}\ rays
interacting in the HPGe), the single escape probability being a factor
4 lower. The search for $\ensuremath{\mathrm{\mu^+}}\to\ensuremath{\mathrm{e}^+}\ensuremath{\mathrm{X}}$ events as described below
concentrates on double escape events.
Assuming a smooth and gently varying background as confirmed by the
Monte Carlo studies, the search for a peak structure in the delayed
signal energy spectrum (see Fig.~\ref{fig:muex_branching}) has been
done for energies from $0.3$ to $2.2$\,MeV. The lower energy limit is
given by the effective discriminator threshold, the upper energy limit
from the positron zero transmission range in germanium. Since the beam
muons are stopped after $\rm 2-3\,mm$ (2$\sigma$) in the HPGe
detector, and since the 2.2\,MeV electrons have a zero transmission
range of 2\,mm, this is the highest energy for which all positrons
remain within the detector volume thus completely depositing their
kinetic energy.
\begin{figure}
\centerline{\epsfig{file=muex_branching_2.eps,width=0.80 \textwidth}}
\caption[]{Plots showing the energy deposition during the delayed
gate in the HPGe detector (top) and fit results leading to upper
limits for the branching ratio for the decay $\ensuremath{\mathrm{\mu^+}}\to\ensuremath{\mathrm{e}^+}\ensuremath{\mathrm{X}}$.
For the abscissa two corresponding scales, which are the same for
all graphs (except for the inset at the top, which shows the full
\ensuremath{\mathrm{e}^+}\ energy range recorded), are drawn, one is the positron
kinetic energy $\rm T_\ensuremath{\mathrm{e}}$, the other the \ensuremath{\mathrm{X}}\ boson mass $\rm
m_\ensuremath{\mathrm{X}}$. In the graph at the top the Gaussian centered at 1.23\,MeV
gives the expected detector response if $\ensuremath{\mathrm{\mu^+}}\to\ensuremath{\mathrm{e}^+}\ensuremath{\mathrm{X}}$ would
contribute with a branching ratio of $5\cdot10^{-3}$. The second
graph shows the reduced $\chi^2$, dashed line for a
polynomial-only fit, solid line for a combined polynomial and
Gaussian fit. The third graph, with ordinate units already
converted into branching ratio, shows the contents (solid line)
and the error (dashed line) of the Gaussian from this fit. The
graph at the bottom gives the upper limit for a $\ensuremath{\mathrm{\mu^+}}\to\ensuremath{\mathrm{e}^+}\ensuremath{\mathrm{X}}$
decay branch at 90\% confidence level by applying the Bayesian
method to the fit results.}
\label{fig:muex_branching}
\end{figure}
For all positron energies between 0.3 and 2.2\,MeV a typically
1.2\,MeV wide energy interval is chosen and a polynomial fitted to
this part of the spectrum. For a polynomial of low order the fit has
an unrealistically high $\chi^2$. Increasing the order of the
polynomial the resulting $\rm \chi^2/D.F.$ first decreases and then
remains roughly constant with values around one.
A polynomial of order seven was chosen as the lowest order to have a
suitable reduced $\chi^2$ (second graph in
Fig.~\ref{fig:muex_branching}). Then a simultaneous fit of a Gaussian
(position and width fixed) and a polynomial provides the area and
error for a possible peak. In the third graph of
Fig.~\ref{fig:muex_branching} these results have already been
converted into branching ratio (BR) units. With a Bayesian
approach~\cite{barnett:1996} one can derive from these results an
upper limit with a given confidence level. Shown on the bottom of
Fig.~\ref{fig:muex_branching} is the 90\% C.L.\ upper limit.
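Because the Gaussian position and width are held fixed, the peak amplitude enters the fit linearly, so the combined polynomial-plus-Gaussian fit reduces to linear least squares. The following sketch (Python/numpy, applied to a synthetic spectrum with an injected peak; all numbers are invented, not the measured HPGe data) illustrates the procedure:

```python
import numpy as np

# Peak-search sketch: fit a polynomial background plus a Gaussian of
# fixed position (1.23 MeV) and fixed width (100 keV FWHM) to a
# synthetic spectrum by linear least squares.  The spectrum below is
# invented for illustration only.
rng = np.random.default_rng(0)
E = np.linspace(0.6, 1.8, 120)                    # MeV
sigma = 0.100 / 2.355                             # FWHM -> sigma
gauss = np.exp(-0.5 * ((E - 1.23) / sigma) ** 2)  # fixed peak shape
background = 5000.0 - 1500.0 * E + 200.0 * E**2   # smooth background
data = rng.poisson(background + 300.0 * gauss)    # inject a small peak

# Design matrix: polynomial terms plus the fixed Gaussian column; the
# last coefficient is the fitted peak amplitude (candidate signal).
A = np.column_stack([E**k for k in range(4)] + [gauss])
coef, *_ = np.linalg.lstsq(A, data.astype(float), rcond=None)
amplitude = coef[-1]
print(amplitude > 0.0)
```

The fitted amplitude and its statistical error are then converted into a branching ratio and fed into the Bayesian upper-limit construction described in the text.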
For the positron energy $\rm T_\ensuremath{\mathrm{e}}=1.23\,MeV$ corresponding to an \ensuremath{\mathrm{X}}\
particle with mass $\rm m_\ensuremath{\mathrm{X}}=103.9\,MeV/c^2$ as suggested by Gninenko
and Krasnikov~\cite{gninenko:1998} the 90\% C.L.\ upper limit for the
branching ratio in the decay $\ensuremath{\mathrm{\mu^+}}\to\ensuremath{\mathrm{e}^+}\ensuremath{\mathrm{X}}$ is $\rm
BR=4.9\cdot10^{-4}$.
\section{Summary and Outlook}
Following the proposition that a new, weakly interacting boson \ensuremath{\mathrm{X}}\
with mass $\rm m_\ensuremath{\mathrm{X}}=103.9\,MeV/c^2$ produced in $\ensuremath{\mathrm{\mu^+}}\to\ensuremath{\mathrm{e}^+}\ensuremath{\mathrm{X}}$
might be the reason for the observed anomaly in the KARMEN data, we
have searched for this two-body \ensuremath{\mathrm{\mu}}\ decay branch by inspection of
the low energy end of the Michel spectrum. Utilizing a clean \ensuremath{\mathrm{\mu}}\
beam from the $\mu{}E4$ channel at PSI and stopping the muons in a
planar HPGe detector this work is the first direct search for such an
exotic \ensuremath{\mathrm{\mu}}\ decay process for \ensuremath{\mathrm{X}}\ boson masses $\rm
103\,MeV/c^2<m_\ensuremath{\mathrm{X}}<105\,MeV/c^2$ corresponding to positron energies
$\rm 0.3\,MeV<T_e<2.2\,MeV$. Our first results give branching ratios
$\rm BR(\ensuremath{\mathrm{\mu^+}}\to\ensuremath{\mathrm{e}^+}\ensuremath{\mathrm{X}})<5.7\cdot10^{-4}$ (90\% C.L.) over most of the
accessible region, thus excluding the simplest scenario for the \ensuremath{\mathrm{X}}\
boson phenomenology suggested in Ref.~\cite{gninenko:1998}. By
refining the experimental method used in this experiment it will be
feasible to improve on this result.
\bigskip
\noindent We gratefully acknowledge valuable support from and
discussions with D.~Branford, M.~Daum, T.~Davinson, F.~Foroughi,
C.~Petitjean, D.~Renker, U.~Rohrer, and A.C.~Shotter. We also would
like to thank the Paul Scherrer Institut for assistance in setting up
this experiment in a very short time.
\section{Introduction}
One of the central problems in the investigation
of non-Abelian gauge theories is a gauge invariant description of
the vacuum and the low-lying excited states.
In the standard approach to the quantization of gauge theories
the physical states have to satisfy not only the
Schr\"odinger equation but additionally be unnihilated by the Gauss
law operator to implement gauge invariance at the quantum level
\cite{Jackiw}. However it is well known that
there exist states which satisfy the Gauss law but are not
invariant under the so-called homotopically nontrivial gauge
transformations, leading to the appearence of the theta angle
\cite{JackiwRebbi},\cite{Callan}.
A well-elaborated semiclassical approach to the theta structure of the
groundstate has been given in the ``instanton picture'', where
the theta angle is interpreted \cite{Jackiw} in analogy to the
Bloch momentum in Solid State Physics.
The instantons, which are selfdual solutions of the Euclidean classical
equations of motion with finite action,
correspond to semiclassical quantum mechanical tunneling paths in
Minkowski space between the infinite sequence of degenerate
zero-energy Yang-Mills vacua of different homotopy classes of the gauge
potential.
The semiclassical instanton picture of
the theta vacuum, however, is reliable only at weak coupling.
A complete investigation of the theta structure
of the vacuum of Yang-Mills quantum theory requires a rigorous
treatment at strong coupling.
The effect of the theta angle for arbitrary coupling constant
can be taken into account by adding the Pontryagin density to the
Yang-Mills Lagrangian \cite{Jackiw}.
Although the extra theta-dependent CP-violating term is only a total
divergence and therefore has no classical significance, it can acquire
a physical meaning at the quantum level, a point still under lively
discussion \cite{Adam}-\cite{Gaba1}.
As a first step towards a full investigation of Yang-Mills theory in the
strong coupling limit the toy model of $SU(2)$ Yang-Mills mechanics of
spatially homogeneous fields has been considered
on the classical \cite{Matinyan} -\cite{GKMP} as well as on the
quantum level \cite{Simon}-\cite{gymqm}.
In the present paper we will analyse the model of $SU(2)$ Yang-Mills
mechanics of spatially homogeneous fields for arbitrary theta angle.
In order to obtain the equivalent unconstrained classical system
in terms of gauge invariant variables only \cite{GoldJack} -\cite{GKP},
we apply
the method of Hamiltonian reduction (\cite{GKP} and references therein)
in the framework of the Dirac constraint formalism
\cite{DiracL} -\cite{HenTeit}. As in our recent work \cite{GKMP}
the elimination of the pure gauge degrees of freedom is achieved
by using the polar representation for the gauge potential,
which trivializes the Abelianization of the Gauss law constraints,
and finally projecting onto the constraint shell. The obtained
unconstrained system then describes the dynamics of a symmetrical second
rank tensor under spatial rotations. The principal-axis transformation of this
symmetric tensor allows us to separate the gauge invariant variables
into scalars under ordinary space rotations and into ``rotational'' degrees
of freedom. In this final form the physical Hamiltonian
and the topological operator can be quantized
without operator ordering ambiguities.
We study the residual symmetries of the resulting unconstrained
quantum theory with arbitrary theta angle and reduce the eigenvalue
problem of the Hamiltonian to the
corresponding problem with zero theta angle.
Using the variational approach we calculate
the low energy spectrum with rather high accuracy.
In particular we find the energy eigenvalue
and the magnetic and electric properties of the groundstate,
as well as the corresponding value of the ``gluon condensate''.
The groundstate energy is found to be independent of the
theta angle by construction of the explicit transformation
relating the Hamiltonians with different theta parameter.
This is confirmed by an explicit evaluation of the Witten formula
for the topological susceptibility: our variational results for the
groundstate and the low lying excitations give strong support
for the independence of the groundstate energy of theta, thus
indicating the consistency of our results.
Our paper is organized as follows. In Section II the Hamiltonian reduction
of $SU(2)$ Yang-Mills mechanics for arbitrary theta angle
is carried out and the corresponding unconstrained system is put into
a form where the rotational and the scalar degrees of freedom are maximally
separated. In Section III the obtained unconstrained classical
Hamiltonian is quantized, and its residual symmetries, the necessary
boundary conditions on the wave functions,
and the relevance of the theta angle at the quantum level are discussed.
In Section IV the eigenvalue problem of the unconstrained
Hamiltonian with vanishing theta angle is solved approximately
in the low energy region using the variational approach.
In Section V the Witten formula for the topological susceptibility
is evaluated using the obtained variational results.
Section VI finally gives our conclusions.
Appendices A to C state several results and additional discussions
relevant to the main text.
\section{Unconstrained classical $SU(2)$ Yang-Mills mechanics
with theta angle}
\subsection{Hamiltonian formulation}
It is well known \cite{Jackiw} that the theta angle can be included
already at the level of the classical action
\begin{equation}
\label{eq:act}
S [A] : = - \frac{1}{4}\ \int d^4x\ \left(F^a_{\mu\nu} F^{a\mu \nu}
- {\alpha_s\theta\over 2\pi} F^a_{\mu\nu} \tilde{F}^{a\mu \nu}\right)~,
\end{equation}
with the $SU(2)$ Yang-Mills field strengths
$F^a_{\mu\nu} : = \partial_\mu A_\nu^a - \partial_\nu A_\mu^a
+ g \epsilon^{abc} A_\mu^b A_\nu^c~$, ($a=1,2,3$), the dual
$\tilde{F}_{a\mu \nu}:= {1/ 2} \epsilon_{\mu\nu\sigma\rho}
F^{a\sigma \rho}$ and $\alpha_s={g^2/ 4\pi}$.
For the special case of spatially homogeneous fields the Lagrangian
in (\ref{eq:act}) reduces to
\footnote{ Everywhere in the paper we put the spatial volume $V= 1$.
As a result the coupling constant $g$ becomes dimensionful
with $g^{2/3}$ having the dimension of energy. The volume dependence
can be restored in the final results by replacing $g^2$ by $g^2/V$. }
\begin{equation} \label{hl}
L={1\over 2}\left(\dot{A}_{ai}-g\epsilon_{abc}A_{b0} A_{ci}\right)^2
-{1\over 2} B_{ai}^2 -{\alpha_s\theta\over 2\pi}
\left(\dot{A}_{ai}-g\epsilon_{abc}A_{b0} A_{ci}\right)B_{ai}~,
\end{equation}
with the magnetic field
$B_{ai}= (1/2)g\epsilon_{abc}\epsilon_{ijk}A_{bj}A_{ck}$.
Under the assumption of spatial homogeneity of the fields
the $SU(2)$ gauge invariance of the Yang-Mills action
Eq. (\ref{eq:act}) reduces to the symmetry under the $SO(3)$
local transformations
\begin{eqnarray}
A_{a0}(t)& \longrightarrow & A^{\omega}_{a0}(t)=
O(\omega(t))_{ab}A_{b0}(t) -\frac{1}{2g}
\epsilon_{abc}\left(O(\omega(t))\dot O(\omega(t)) \right)_{bc}\,,\nonumber\\
A_{ai}(t)& \longrightarrow & A^{\omega}_{ai}(t)=
O(\omega(t))_{ab}A_{bi}(t)\label{tr}
\end{eqnarray}
and as a result the Lagrangian (\ref{hl}) is degenerate.
From the calculation of the canonical momenta
\begin{equation}
P_{a} := \partial L/\partial (\partial_0{A}_{a0} ) = 0 \,,\,\,\,\,\,\,\,
\Pi_{ai} := \partial L/\partial (\partial_0 {A}_{ai})
= \dot{A}_{ai}-g\epsilon_{abc}A_{b0} A_{ci}
- {\alpha_s\theta\over 2\pi} B_{ai}~,
\end{equation}
one finds that the phase space spanned by the variables
\( (A_{a0}, P_a) \) and \( (A_{ai}, \Pi_{ai}) \)
is restricted by the primary constraints
$P_a (x) = 0~$. The evolution of the system is governed
by the total Hamiltonian \cite{DiracL} with three arbitrary functions
\(\lambda_a (x)\)
\begin{equation}
\label{Htot}
H_T := \frac{1}{2}\Pi_{ai}^2
+ \frac{1}{2} \left(1+ \left({\alpha_s\theta\over 2\pi}\right)^2\right)
B_{ai}^2 (A) +\theta Q(\Pi,A)
- g A_{a0} \epsilon_{abc}A_{bi}\Pi_{ci} +
\lambda_a (x) P_a (x)~,
\end{equation}
where the topological charge has been introduced
\begin{equation}
Q := -{\alpha_s\over 2\pi}\Pi_{ai}B_{ai}~.
\end{equation}
Apart from the primary constraints $P_a = 0~$
the phase space is restricted also by the
non-Abelian Gauss law, the secondary constraints
\begin{equation}
\label{eq:secconstr}
\Phi_a : = g\epsilon_{abc} A_{ci} \Pi_{bi} = 0~,
\qquad
\{\Phi_i , \Phi_j \} = g\epsilon_{ijk}\Phi_k ~,
\end{equation}
which follow from the maintenance of the primary constraints in time.
To overcome the problems of the existence of these constraints and the
nonunique character of the dynamics governed by the
total Hamiltonian (\ref{Htot}) we will follow the method of
Hamiltonian reduction to construct the unconstrained system
with uniquely predictable dynamics.
As in the recent paper \cite{GKMP} we shall use a special set
of coordinates which is very suitable for the
implementation of Gauss law constraints and the
derivation of the physically relevant theory
equivalent to the initial degenerate theory.
This will be the subject of the following Subsection.
\subsection{Canonical transformation to adapted coordinates
and projection to Gauss law constraint}
The local symmetry transformation
(\ref{tr}) of the gauge potentials
\( A_{ai} \) suggests the set of coordinates in terms of which the
separation of the gauge degrees of freedom occurs.
As in \cite{GKMP} we use the polar decomposition for arbitrary
\(3\times 3\) square matrices \cite{Marcus}
\begin{equation}
\label{eq:pcantr}
A_{ai} \left(\chi, S \right)
= O_{ak}\left( \chi \right) S_{ki}~,
\end{equation}
with the orthogonal matrix \( O (\chi) \),
parametrized by the three angles \(\chi_i\) and the positive definite
\(3\times 3\) symmetric matrix \( S \).
The representation (\ref{eq:pcantr}) can be regarded as a transformation
from the gauge potentials \( A_{ai}\) to
the set of coordinates \(\chi_i\) and \( S_{ik} \).
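As a concrete numerical illustration of the decomposition (\ref{eq:pcantr}) and of the principal-axis form used below, the following sketch (plain NumPy; the random matrix merely stands in for a generic configuration $A_{ai}$) obtains the polar factors from the singular value decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))        # generic real 3x3 matrix standing in for A_ai

# polar decomposition A = O S via the SVD A = U Sigma V^T:
# O = U V^T is orthogonal, S = V Sigma V^T is symmetric positive definite
U, sig, Vt = np.linalg.svd(A)
O = U @ Vt
S = Vt.T @ np.diag(sig) @ Vt
assert np.allclose(A, O @ S)
assert np.allclose(O.T @ O, np.eye(3))

# principal-axis form S = R^T D R with D = diag(x_1, x_2, x_3)
x, v = np.linalg.eigh(S)           # S = v diag(x) v^T, i.e. R = v^T
assert np.all(x > 0)
print(np.round(x, 3))
```

For an invertible $A$ the symmetric factor $S$ is unique and positive definite, as assumed in (\ref{eq:pcantr}).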
The corresponding canonical conjugate momenta \( (p_{\chi_i}, P_{ik} )\)
can be obtained using the generating function
\begin{equation}
{F} \left( \Pi; \chi, S \right)=
\sum_{a,i}^3 \Pi_{ai} A_{ai} \left(\chi, S\right) =
\mbox{tr}
\left( \Pi^T O (\chi) S \right)
\end{equation}
as
\begin{eqnarray}
p_{\chi_j} & = & \frac{\partial F}{\partial \chi_j} =
\sum_{a,s,i}^3 \Pi_{ai} \ \frac{\partial O_{as}}{\partial \chi_j } \
S_{ si} =
\mbox{tr}
\left[ \Pi^T \frac{\partial O}{\partial \chi_j} \ S
\right],\\
P_{ik} & = & \frac{\partial F}{\partial S_{ik}} =
\frac{1}{2} \left( O \Pi^T + \Pi O^T \right)_{ik}~.
\end{eqnarray}
A straightforward calculation \cite{GKMP} yields the following expressions
for the field strengths \( \Pi_{ai} \)
in terms of the new canonical variables
\begin{equation} \label{eq:potn}
\Pi_{ai} = O_{ak}(\chi) \biggl[{\, P_{ki} +
\epsilon _{kli}
( s^{-1})_{lj} \left[
\left(\Omega^{-1}(\chi) p_{\chi} \right)_{j} - \epsilon_{mjn} \left(
PS\right)_{mn}\,\right]\,
}\biggr]~,
\end{equation}
with
\begin{equation}
\Omega_{ij}(\chi) \, : = \,\frac{1}{2} \, \epsilon_{min}
\left[ \frac{\partial O^T \left(\chi\right)}{\partial \chi_j}
\, O (\chi) \right]_{mn}~,
\end{equation}
and
\begin{equation}
s_{ik} : = S_{ik} - \delta_{ik}
\mbox{tr} S~.
\end{equation}
Using the representations (\ref{eq:pcantr}) and (\ref{eq:potn})
one can easily convince oneself that the
variables \( S \) and \(P \) make no contribution to the
Gauss law constraints (\ref{eq:secconstr})
\begin{equation}
\Phi_a : = O_{as}(\chi) \Omega^{-1}_{\ sj}(\chi) p_{\chi_j} = 0~.
\label{eq:4.54}
\end{equation}
Hence, assuming the invertibility of the matrix
$ \Omega$, the non-Abelian Gauss law constraints are
equivalent to the set of Abelian constraints
\begin{equation}
p_{\chi_a} = 0~.
\end{equation}
After having rewritten the model in terms of
adapted canonical pairs and after Abelianization of the Gauss
law constraints (\ref{eq:secconstr}) the
unconstrained Hamiltonian system
can be obtained as follows.
The physical unconstrained Hamiltonian, defined as
\[
H_{\theta}(S,P):= H_T( S, P) \,
\Bigl\vert_{ p_{\chi_a}= 0}~,
\]
takes the form
\begin{equation} \label{eq:uncYME}
H_{\theta} \, = \,
\frac{1}{2} \mbox{tr}({\cal{E}}^2)
+ \frac{g^2}{4} (1+{\alpha_s^2\over 4\pi^2}\theta^2) \left[
\mbox{tr}{}^2 ( S )^2 -
\mbox{tr} (S)^4
\right]
+ \theta Q(S,P) \, ,
\end{equation}
where the ``physical'' electric field strengths
\( {\cal{E}}_{ai} \) are
\begin{equation}
\Pi_{ai}\ \biggl\vert_{p_{\chi_a} = 0} =: \ O_{ak}(\chi) \
{\cal{E}}_{ki}
(S, P)~,
\end{equation}
and the topological charge
\begin{equation}
Q(S,P) = -{\alpha_s\over 2\pi}\mbox{tr}\left(PS\right)~.
\end{equation}
Using the representation (\ref{eq:potn})
for the electric field one can express the \( {\cal{E}}_{ai} \)
in terms of the physical variables \( P \) and \( S \)
\begin{equation} \label{eq:els}
{\cal{E}}_{ki}(S , P) =
P_{ik} + \frac{1}{\det s}
\left( s {\cal M} s \right)_{ik}\,,
\end{equation}
where \({\cal M}\) denotes the spin part of the angular
momentum tensor of the initial gauge field
\begin{equation}
{\cal M}_{mn}:= \left( S P - PS \right)_{mn}~.
\end{equation}
Using (\ref{eq:els}) the unconstrained Yang-Mills Hamiltonian reads
\begin{equation} \label{eq:uncYMP}
H_{\theta} \left(S,P\right) =
\frac{1}{2} \mbox{tr}(P)^2 +
\frac{1}{2 \det^2 s }
\mbox{tr}\, \left(s {\cal M} s \right)^2
+ \frac{g^2}{4}\left(1+{\alpha_s^2\over 4\pi^2}\theta^2\right) \left[
\mbox{tr}{}^2 (S)^2 - \mbox{tr} (S)^4
\right] + \theta Q(S,P) \, .
\end{equation}
\subsection{Unconstrained Hamiltonian in terms of
rotational and scalar degrees of freedom}
In order to achieve a more transparent form for the reduced Yang-Mills
system (\ref{eq:uncYMP}) it is convenient
to decompose the positive definite symmetric matrix \( S\) as
\begin{equation}
S = R^{T}(\alpha,\beta,\gamma)\ D (x_1,x_2,x_3) \
R(\alpha,\beta,\gamma)~,
\end{equation}
with the \( SO(3)\) matrix \({R}\)
parametrized by the three Euler angles \((\alpha,\beta,\gamma )\),
and the diagonal matrix
\begin{equation}
D : = \mbox{diag}\ ( x_1 , x_2 , x_3 )~.
\end{equation}
Using the $x_i$ and the Euler angles
\((\alpha,\beta,\gamma )\) and the corresponding canonical momenta $p_i$
and $p_\alpha,p_\beta, p_\gamma $
as the new set of canonical variables on the unconstrained phase space
we get the following physical Hamiltonian
\begin{equation}
\label{eq:PYM}
H_{\theta}\left(x_i,p_i;\xi_i\right) = \frac{1}{2} \sum_{cyclic}^3
\left[
p^2_i + \xi^2_i \frac{x_j^2 + x_k^2}{\left(x_j^2 -
x_k^2\right)^2} +
g^2 (1+{\alpha_s^2\over 4\pi^2}\theta^2) \
x_j^2 x_k^2 \right] + \theta Q(p,x)~.
\end{equation}
In (\ref{eq:PYM}) all rotational variables
are combined into the quantities $\xi_i$
\begin{eqnarray}
&& \xi_1 : =
\frac{\sin\gamma}{\sin\beta}\ p_\alpha +
\cos\gamma \ p_\beta - \sin\gamma \cot\beta \ p_\gamma~,\\
&& \xi_2 : =
-\frac{\cos\gamma}{\sin\beta}\ p_\alpha +
\sin\gamma \ p_\beta + \cos\gamma \cot\beta \ p_\gamma~, \\
&& \xi_3 : = p_\gamma~.
\end{eqnarray}
representing the $SO(3)$ invariant Killing vectors with the Poisson
brackets algebra
\begin{equation}
\{\xi_i,\xi_j\}=-\epsilon_{ijk}\xi_k~.
\end{equation}
The topological charge $Q$ is independent of the rotational
degrees of freedom and depends on the diagonal
canonical pairs in the particularly simple cyclic form
\begin{equation}
Q
=-g{\alpha_s\over 2\pi}\left(x_1 x_2 p_3 + x_2 x_3 p_1 + x_3 x_1 p_2
\right)~.
\end{equation}
This completes our reduction of the spatially homogeneous constrained
Yang-Mills system with theta angle to the equivalent
unconstrained system
describing the dynamics of the physical degrees of freedom.
If we restricted our considerations to the classical level,
the above generalization to arbitrary theta angle would be unnecessary,
because the theta dependence
enters the initial Lagrangian in the form of a total time derivative
and thus the value of the theta angle has no influence
on the classical equations of motion.
In the Hamiltonian formulation one can easily verify that the theta
dependence can be removed from the Hamiltonian $H_{\rm \theta}$ by
the canonical transformation to the new variables
\begin{eqnarray}
\tilde{p}_i &:=& p_i - g{\alpha_s\theta\over 2\pi} x_j x_k\,,
\qquad i,j,k\ {\rm cyclic}~,\nonumber\\
\tilde{x}_i &:=& x_i~.
\label{clctr}
\end{eqnarray}
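This cancellation is easily checked symbolically. The sketch below (SymPy; restricted to the spin-0 sector so that the $\xi_i$-dependent term, which is unaffected by the shift of the momenta, drops out, and with $\alpha$ standing for $\alpha_s/2\pi$) substitutes (\ref{clctr}) into $H_{\theta}$:

```python
import sympy as sp

x1, x2, x3, g, th, a = sp.symbols('x1 x2 x3 g theta alpha', real=True)
pt = sp.symbols('pt1:4', real=True)     # the new momenta \tilde p_i
xs = (x1, x2, x3)
cyc = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

# old momenta in terms of the new ones: p_i = \tilde p_i + g*alpha*theta*x_j*x_k
p = [pt[i] + g * a * th * xs[j] * xs[k] for i, j, k in cyc]

Q = -g * a * sum(xs[j] * xs[k] * p[i] for i, j, k in cyc)
H_theta = sum(sp.Rational(1, 2) * (p[i]**2
              + g**2 * (1 + a**2 * th**2) * (xs[j] * xs[k])**2)
              for i, j, k in cyc) + th * Q
H_0 = sum(sp.Rational(1, 2) * (pt[i]**2 + g**2 * (xs[j] * xs[k])**2)
          for i, j, k in cyc)

print(sp.expand(H_theta - H_0))         # 0: the theta dependence drops out
```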
However, the transition to the quantum level requires a more careful
treatment of the problem. It is necessary to clarify
whether the operator corresponding to (\ref{clctr})
acting on the quantum states is unitary.
In subsequent Sections we shall consider the quantum treatment of
the obtained classical system and shall
discuss the theta dependence of the vacuum in this model.
\section{Quantization, symmetries and boundary conditions}
The Hamilton operator corresponding to (\ref{eq:PYM}) is obtained
in the Schr\"odinger configuration representation by the
conventional representation for the canonical
momenta \(p_k= -i \partial/\partial x_k\)
\begin{equation}
\label{HHHq}
H_\theta:={1\over 2}\sum_{\rm cyclic}^3
\left[-{\partial^2\over\partial x_i^2} + \xi^2_i
{x_j^2+x_k^2 \over (x_j^2-x_k^2)^2}
+
g^2\left(1+{\alpha_s^2\over 4\pi^2}\theta^2
\right)x_j^2 x_k^2~\right] + \theta Q~,
\end{equation}
with the topological charge operator
\begin{equation}
Q= ig\frac{\alpha_s}{2\pi}
\sum_{\rm cyclic}^3{x_ix_j}{\partial\over\partial x_k}~,
\end{equation}
and the intrinsic angular momenta $\xi$
obeying the commutation relations
\begin{equation}
[\xi_i,\xi_j]=-i\epsilon_{ijk}\xi_k~.
\end{equation}
The transition to the quantum system
in this adapted basis is free from operator ordering ambiguities.
As already mentioned in the last section the parameter
theta is unphysical on the classical level, since it can be
removed from Hamiltonian $H_{\rm \theta}$ by the canonical
transformation (\ref{clctr}).
One can easily convince oneself that the quantum Hamiltonians
$H_{\rm \theta}$ and $H_{\rm \theta=0}$ can be related to each other
via the transformation
\begin{equation}
\label{UHU1}
H_{\rm \theta} = U(\theta) H_{\rm \theta=0} U^{-1}(\theta)~,
\end{equation}
with
\begin{equation}
\label{UHU2}
U(\theta)= \exp[ig{\alpha_s\over 2\pi}\theta x_1 x_2 x_3]~.
\end{equation}
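The relation (\ref{UHU1}) can be verified directly on a generic wave function. In the SymPy sketch below (spin-0 sector, with $\alpha$ again denoting $\alpha_s/2\pi$) both sides are applied to an unspecified $\psi(x_1,x_2,x_3)$:

```python
import sympy as sp

x1, x2, x3, g, th, a = sp.symbols('x1 x2 x3 g theta alpha', real=True)
xs = (x1, x2, x3)
f = sp.Function('psi')(x1, x2, x3)
cyc = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

def H0(psi):
    # spin-0 Hamiltonian with theta = 0
    return sum(-sp.Rational(1, 2) * sp.diff(psi, xs[i], 2)
               + sp.Rational(1, 2) * g**2 * (xs[j] * xs[k])**2 * psi
               for i, j, k in cyc)

def Htheta(psi):
    Q = sp.I * g * a * sum(xs[i] * xs[j] * sp.diff(psi, xs[k]) for i, j, k in cyc)
    return sum(-sp.Rational(1, 2) * sp.diff(psi, xs[i], 2)
               + sp.Rational(1, 2) * g**2 * (1 + a**2 * th**2)
               * (xs[j] * xs[k])**2 * psi
               for i, j, k in cyc) + th * Q

U = sp.exp(sp.I * g * a * th * x1 * x2 * x3)
print(sp.simplify(U * H0(f / U) - Htheta(f)))   # vanishes identically
```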
The question is whether this operator is unitary in the
domain of definition of the Hamiltonians $H_{\rm \theta}$
and $H_{\rm \theta=0}$,
which is determined by their respective symmetries and the boundary
conditions to be imposed on the corresponding wave functions.
\subsection{ Boundary conditions}
Due to the positivity of the coordinates $x_i$ in the polar decomposition
(\ref{eq:pcantr}) the configuration space is $R^{+}_3$
after the elimination of the pure gauge degrees of freedom.
Thus the implementation of the canonical rules of quantization to the
unconstrained classical system requires the specification of the
boundary conditions both at positive infinity and on the three
boundary planes $x_i=0~,\ \ i=1,2,3$.
The requirement of Hermiticity of the Hamiltonian $H_{\rm \theta}$
(\ref{HHHq}) leads to the condition
\begin{equation}
\label{bctheta}
\left(\Psi^*_\theta\partial_k\Phi_\theta-\partial_k\Psi^*_\theta\Phi_\theta
+2ig{\alpha_s\over 2\pi}\theta x_i x_j\Psi^*_\theta\Phi_\theta
\right)\Big|_{x_k=0}=0~,\ \ \ \ i,j,k\ {\rm cyclic}~.
\end{equation}
Using the relation $\Psi_\theta=U(\theta)\Psi_{\theta=0}$ with $U(\theta)$
given in (\ref{UHU2}), this reduces to the corresponding requirement
for the Hermiticity of $H_{\rm \theta=0}$
\begin{equation}
\label{bctheta0}
\left(\Psi^*_{\theta=0}\partial_k\Phi_{\theta=0}
-\partial_k\Psi^*_{\theta=0}\Phi_{\theta=0}
\right)\Big|_{x_k=0}=0~,\ \ \ \ i,j,k\ {\rm cyclic}.
\end{equation}
It is satisfied for ($\kappa$ an arbitrary c-number)
\begin{equation}
\label{bctheta0e}
\left(\partial_k\Psi_{\theta=0}+\kappa\Psi_{\theta=0}\right)
\Big|_{x_k=0}=0~,\ \ \ \ k=1,2,3~,
\end{equation}
which includes the two limiting cases of vanishing wave function
($\kappa\to\infty$) or vanishing derivative of the wave function
($\kappa=0$) at the boundary.
The Hermiticity of the momentum operators
in the Schr\"odinger configuration representation
$ p_i := -i\partial/ \partial x_i $ on $R^{+}_3$
requires the wave function to obey the boundary conditions
\begin{eqnarray}
\label{bc1}
&&\Psi_{\theta=0}\Big|_{x_i=0}= 0~, \ \ \ \ \ \ \ i=1,2,3~,\\
\label{bc2}
&&\Psi_{\theta=0}\Big|_{x_i \to \infty} =0~,\ \ \ \ \ \ \ i=1,2,3~.
\end{eqnarray}
In particular, they also imply the Hermiticity and the existence of a real
eigenspectrum of the topological charge operator $Q$.
Its eigenstates, however, given explicitly in Appendix A, do not satisfy
the boundary conditions (\ref{bc1}) and (\ref{bc2}), similar to the
eigenstates of the momentum operator $-i\partial/ \partial x_i $.
Furthermore, it is interesting to note that the characteristics
of the $Q$ operator coincide with the Euclidean self(anti-)dual zero-energy
solutions of the classical equations of motion. They are the
analogs of the instanton solutions, but do not correspond to
quantum tunnelling between different vacua (see Appendix A).
\subsection{Symmetries of the Hamiltonians $H_{\rm \theta}$
and $H_{\rm \theta=0}$}
As a relic of the rotational invariance of the initial gauge field theory
the Hamiltonian (\ref{HHHq}) possesses the symmetry
\begin{equation}
\label{cHI}
[H,J_k]=0~,
\end{equation}
where $J_i= R_{ij}\xi_j$ are the spin part of the generators of the angular
momentum of Yang-Mills fields satisfying the $so(3)$ algebra
\begin{equation}
[J_i, J_j] = i \epsilon_{ijk} J_k~,
\end{equation}
and commuting with the intrinsic angular momenta, $[J_i, \xi_j] = 0~$.
Hence the eigenstates can be classified
according to the quantum numbers $J$ and $M$ as the eigenvalues
of the spin $\vec{J}^2= J_1^2 + J_2^2+ J_3^2 $ and $J_3$.
The Hilbert spaces of states with different spin $J$
are each invariant subspaces under the action of
all generators \(J_i\) and can therefore be considered
as separate eigenvalue problems.
Apart from this continuous rotational symmetry the Hamiltonians
$H_{\rm \theta}$ and $H_{\rm \theta=0}$ possess the following
discrete symmetries.
Both $H_{\rm \theta=0}$ and $Q$
are invariant under the permutation of any two of the
variables $\sigma_{ij}x_i = x_j\sigma_{ij}, \,
\sigma_{ij}p_i =p_j\sigma_{ij}$
\begin{eqnarray}
[H_{\theta=0},\sigma_{ij}]=0~,\ \ \ \ \ \ \ [Q,\sigma_{ij}]=0~.
\end{eqnarray}
However, under time reflections
$Tx_i = x_iT,\,\,Tp_i = - p_iT$, as well as
under parity reflections ${\cal P} x_i= -x_i {\cal P},\,
{\cal P} p_i= -p_i {\cal P}$,
$H_{\rm \theta=0}$ commutes with $T$ and ${\cal P}$,
\begin{equation}
[H_{\rm \theta=0},T]=0~,\ \ \ \ \ \ \ [H_{\theta=0},{\cal P}]=0~,
\end{equation}
but $Q$ anticommutes with $T$ and ${\cal P}$,
\begin{equation}
QT =-TQ~,\ \ \ \ \ \ \
Q{\cal P} =-{\cal P}Q~.
\end{equation}
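The anticommutation with parity can be checked on an arbitrary test function; in the sketch below (SymPy, with the overall factor $g\alpha_s/2\pi$ of $Q$ set to one, and a generic polynomial standing in for the wave function):

```python
import sympy as sp

x = sp.symbols('x1:4', real=True)
c = sp.symbols('c1:5')
cyc = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

def Qop(psi):
    # topological charge operator, overall factor g*alpha_s/(2 pi) set to one
    return sp.I * sum(x[i] * x[j] * sp.diff(psi, x[k]) for i, j, k in cyc)

def Pop(expr):
    # parity reflection x_i -> -x_i
    return expr.subs({xi: -xi for xi in x}, simultaneous=True)

f = c[0] * x[0] + c[1] * x[0] * x[1] * x[2] + c[2] * x[2]**3 + c[3] * x[0]**2 * x[1]
print(sp.expand(Qop(Pop(f)) + Pop(Qop(f))))    # 0: Q P = -P Q
```

Since $Q$ contains two factors of $x$ and one derivative, it is parity odd, so the same check succeeds for any test function.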
Hence for the $H_{\rm \theta=0}$ Schr\"odinger eigenvalue problem we can
restrict ourselves to the Hilbert space of real and parity odd wave functions
which automatically satisfy the boundary conditions (\ref{bc1}).
Observe that the transformation (\ref{UHU2}) leads out of the
corresponding Hilbert space and is therefore not unitary.
\subsection{Independence of the energy spectrum of the theta angle}
Due to the relation (\ref{UHU1}) between the Hamiltonians $H_{\rm \theta}$
and $H_{\rm \theta=0}$ and the corresponding compatibility of the boundary
conditions discussed above the energy spectrum
should be independent of the theta angle.
In particular the topological susceptibility of the vacuum should vanish.
Using the Witten formula \cite{Witten},\cite{Diakonov}, the
topological susceptibility can be represented as the sum of a propagator
term involving the transition matrix elements of the topological
operator $Q$ and a contact term proportional to the vacuum expectation
value of the square of the magnetic field.
Independence of the groundstate energy of the theta angle
and hence vanishing topological susceptibility should therefore imply
\begin{equation}
\label{WF0}
{d^2 E_0(\theta)\over d\theta^2}\Big|_{\theta = 0}=
-2\sum_n{|\langle 0|Q|n\rangle |^2\over E_n-E_0}
+ \langle 0|\left({\alpha_s\over 2\pi}\right)^2 B^2|0 \rangle = 0~,
\end{equation}
where $|n\rangle$ are the eigenstates of the Hamiltonian $H_{\rm \theta=0}$
with energy eigenvalues $E_n$.
As we shall see below, our calculation of the
low energy part of the spectrum of $H_{\rm \theta=0}$ using the
variational technique is in full accordance with (\ref{WF0}).
\section{Schr\"odinger eigenvalue problem for vanishing theta}
\subsection{Low energy spin-0 spectrum from variational calculation}
The Hilbert space of states with zero spin $\vec{J}^2=0$
is an invariant subspace under the action of
all generators \(J_i\) and one can consider the eigenvalue
problem separately from the states of higher
spin.
Thus in the sector of zero spin $\vec{J}^2=\vec{\xi}^2=0$ the
Schr\"odinger eigenvalue problem (\ref{HHHq}) reduces to
\begin{equation}
\label{H--0}
H_0 \Psi_E \equiv
{1\over 2}\sum_{\rm cyclic}^3\left[-{\partial^2\over\partial x_i^2}
+ g^2 x_j^2 x_k^2\right]\Psi_E = E \Psi_E~.
\end{equation}
We shall use the boundary conditions (\ref{bc1}) and (\ref{bc2}).
Long ago it was proven by F. Rellich \cite{Rellich}
that Hamiltonians of the type (\ref{H--0}) have a
discrete spectrum due to quantum fluctuations, although the classical problem
allows for scattering trajectories (see discussion in \cite{Simon}).
Related and simplified versions of the eigenvalue problem (\ref{H--0})
have been studied extensively by many authors
using different methods \cite{Simon}-\cite{Gaba2}.
In particular, in \cite{Medvedev}-\cite{BartBrunRaabe} the eigenstates and
eigenvalues have been found in the semiclassical approximation for the
special two dimensional case $x_3=0$.
\footnote{ It is interesting that for the three dimensional case
one can write the potential term in
(\ref{H--0}) in the form
$V=\sum_{i=1}^3 (\partial_i W)^2$
with the "superpotential" \(~W(x_1, x_2, x_3)=x_1x_2x_3 \).
Note that in the simplified two-dimensional case
there is no such superpotential: the candidate
$W^{(2)}=xy$ corresponds to the two-dimensional harmonic oscillator
$V^{(2)}=x^2+y^2$ rather than to the potential $x^2 y^2$.
From the form of the superpotential it follows that the wave function
$\Psi_0 =\exp[-gW]$
solves the Schr\"odinger eigenvalue problem with energy eigenvalue $E=0$.
It is the unconstrained, strong coupling form of the well-known exact but
nonnormalizable zero-energy solution \cite{Loos} of the Schr\"odinger
equation of Yang-Mills field theory.
Obviously it also fails to satisfy the boundary conditions (\ref{bc1})
and (\ref{bc2}) and has to be disregarded as a false groundstate.}
To obtain the approximate low energy spectrum of the Hamiltonian
in the spin-0 sector
we will use the Rayleigh-Ritz variational method \cite{ReedSimon}
based on the minimization of the energy functional
\begin{equation}
\label{energyf}
{\cal E}[\Psi] := \frac{\big<\Psi|H_0|\Psi\big>}
{\big<\Psi|\Psi\big>}~.
\end{equation}
The crucial point in any variational calculation
is the choice of the trial functions.
Guided by the harmonic oscillator form of the valleys of the potential
in (\ref{H--0}) close to the bottom, a simple first choice for a trial
function compatible with the boundary conditions (\ref{bc1}) and
(\ref{bc2}) is to use \cite{gymqm} the lowest state of three
harmonic quantum oscillators on the positive half line
\begin{equation}
\label{Psi000}
\Psi_{000}=
8\prod_{i=1}^3 \left({\omega_i\over \pi}\right)^{1/4}\sqrt{\omega_i}x_i
e^{-\omega_ix_i^2/2}~.
\end{equation}
The stationarity conditions for the energy functional
of this state,
\begin{eqnarray}
{\cal E}[\Psi_{000}]=
\sum_{cyclic}^3 \left({3\over 4}\omega_i+
{9\over 8}g^2{1\over \omega_j\omega_k}\right)~,\nonumber
\end{eqnarray}
lead to the isotropic optimal choice
\begin{equation} \label{fr}
\omega :=\omega_1=\omega_2=\omega_3=3^{1/3}g^{2/3}~.
\end{equation}
As a first upper bound for the groundstate energy of the Hamiltonian we
therefore find
\begin{eqnarray}
\label{1stest}
E_0 \le {\cal E}[\Psi_{000}]=
{27\over 8}3^{1/3}g^{2/3} = 4.8676~ g^{2/3}.
\end{eqnarray}
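The minimization leading to (\ref{fr}) and (\ref{1stest}) is elementary and can be reproduced symbolically; a short SymPy check for the isotropic case with $g=1$ (energies in units of $g^{2/3}$):

```python
import sympy as sp

w = sp.symbols('omega', positive=True)

# energy functional of the isotropic Gaussian trial state (g = 1):
# E(w) = 3*(3/4*w + 9/8 * 1/w^2)
E = 3 * (sp.Rational(3, 4) * w + sp.Rational(9, 8) / w**2)

w_opt = sp.solve(sp.diff(E, w), w)[0]   # stationarity: w^3 = 3
E_min = sp.simplify(E.subs(w, w_opt))
print(w_opt, float(E_min))              # omega = 3**(1/3), bound ~ 4.8676
```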
The upper bound (\ref{1stest}) is in agreement with the lower bound of
the energy functional for separable functions
\begin{equation}
{\cal E}[\Psi_{\rm sep}] \ge 4.5962~ g^{2/3}~,
\end{equation}
derived in Appendix B.
In order to improve the upper bound for the groundstate energy
of the Hamiltonian $H_0$ we extend the space of trial functions
(\ref{Psi000})
and consider the Fock space of the orthonormal set of products
\begin{equation}
\label{bel}
\Psi_{n_1 n_2 n_3}:=
\prod_{i=1}^3 \Psi_{n_i}(\omega, x_i)~,
\end{equation}
of the odd eigenfunctions of the harmonic oscillator
\begin{eqnarray}
\Psi_{n}(\omega,x):=
{(\omega/\pi)^{1/4}\over \sqrt{2^{2n}(2n+1)!}}
e^{-\omega x^2/2}
H_{2n+1}(\sqrt{\omega}x)~,\nonumber
\end{eqnarray}
with the frequency fixed by (\ref{fr}).
Furthermore the variational procedure becomes much more effective
if the space of trial functions is decomposed into the irreducible
representations of the residual discrete symmetries of the Hamiltonian
(\ref{H--0}). As has been discussed in Section III B, it is invariant under
arbitrary permutations of any two of the
variables $\sigma_{ij}x_i = x_j\sigma_{ij}, \,
\sigma_{ij}p_i =p_j\sigma_{ij}$ and
under time reflections $Tx_i = x_iT,\,\,Tp_i = - p_iT$,
\begin{eqnarray}
[H_0,\sigma_{ij}]=0,\,\,\,\,\qquad [H_0,T]=0\,.\nonumber
\end{eqnarray}
We shall represent these by the permutation operator $\sigma_{12}$,
the cyclic permutation operator $\sigma_{123}$ and the time reflection
operator $T$, whose action on the states is
\begin{eqnarray}
\sigma_{123}\Psi(x_1,x_2,x_3)&=&\Psi(x_2,x_3,x_1)~,\nonumber\\
\sigma_{12}\Psi(x_1,x_2,x_3)&=&\Psi(x_2,x_1,x_3)~,\nonumber\\
T\Psi(x_1,x_2,x_3)&=&\Psi^*(x_1,x_2,x_3)~,\nonumber
\end{eqnarray}
and decompose the Fock space spanned by the functions (\ref{bel})
into the irreducible representations of the
permutation group and time reflection $T$.
For given $(n_1,n_2,n_3)$ we define
\begin{eqnarray}
\label{typeI}
\Psi_{nnn}^{(0)+}:=\Psi_{n n n}~,
\end{eqnarray}
if all three indices are equal (type I), the three states $(m=-1,0,1)$
\begin{eqnarray}
\label{typeII}
\Psi^{(m)+}_{nns}:=
{1\over\sqrt{3}}\sum_{k=0}^2 e^{-2km\pi i/3}
\left(\sigma_{123}\right)^k\Psi_{n n s}~,
\end{eqnarray}
when two indices are equal (type II), and the two sets of three states
$(m=-1,0,1)$
\begin{equation}
\label{typeIII}
\Psi^{(m)\pm}_{n_1n_2n_3}:=
{1\over\sqrt{6}}\sum_{k=0}^2 e^{-2km\pi i/3}\left(\sigma_{123}\right)^k
\left(1\pm \sigma_{12}\right)\Psi_{n_1 n_2 n_3}~,
\end{equation}
if all $(n_1,n_2,n_3)$ are different (type III).
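The combinations (\ref{typeII}) and (\ref{typeIII}) are discrete Fourier transforms over the cyclic group generated by $\sigma_{123}$. A small sketch (states kept as dictionaries mapping labels $(n_1,n_2,n_3)$ to coefficients, a bookkeeping device of this illustration only) verifies that they are $\sigma_{123}$ eigenstates with eigenvalue $e^{2m\pi i/3}$:

```python
import numpy as np

def sigma123(state):
    # sigma_123 Psi(x1,x2,x3) = Psi(x2,x3,x1): on labels (n1,n2,n3) -> (n3,n1,n2)
    return {(n[2], n[0], n[1]): c for n, c in state.items()}

def irr(n, m):
    # (1/sqrt(3)) sum_k exp(-2 pi i k m / 3) sigma_123^k |n1 n2 n3>
    out, cur = {}, {tuple(n): 1.0 + 0j}
    for k in range(3):
        w = np.exp(-2j * np.pi * k * m / 3) / np.sqrt(3)
        for lbl, c in cur.items():
            out[lbl] = out.get(lbl, 0) + w * c
        cur = sigma123(cur)
    return out

psi = irr((0, 0, 1), 1)                      # a type-II state with m = +1
lhs = sigma123(psi)
rhs = {lbl: np.exp(2j * np.pi / 3) * c for lbl, c in psi.items()}
assert all(abs(lhs[lbl] - rhs[lbl]) < 1e-12 for lbl in lhs)
print('sigma_123 eigenvalue exp(2 pi i / 3) confirmed')
```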
In this new orthonormal set of irreducible basis states
$\Psi^{(m)\alpha}_{\bf N}$,
the Fock representation of the Hamiltonian $H_0$ reads
\begin{eqnarray}
\label{H_0irr}
H_0=\sum |\Psi^{(m)\alpha}_{\bf M}\rangle
\langle\Psi^{(m)\alpha}_{{\bf M}}|H_0|\Psi^{(m)\alpha}_{\bf N}\rangle
\langle\Psi^{(m)\alpha}_{\bf N}|~.\nonumber
\end{eqnarray}
The basis states $\Psi^{(m)\alpha}_{\bf N}$ are eigenfunctions of
$\sigma_{123}$ and $\sigma_{12}T$
\begin{eqnarray}
\sigma_{123}\Psi^{(m)\pm}_{\bf N}
&=&e^{2m\pi i/3}\Psi^{(m)\pm}_{\bf N}~,\nonumber\\
\sigma_{12}T\Psi^{(m)\pm}_{\bf N}
&=&\pm \Psi^{(m)\pm}_{\bf N}~.
\end{eqnarray}
Under $\sigma_{12}$ and $T$ separately, however,
they transform into each other
\begin{eqnarray}
\sigma_{12}\Psi^{(m)\pm}_{\bf N}&=&\pm \Psi^{(-m)\pm}_{\bf N}~,\nonumber\\
T\Psi^{(m)\pm}_{\bf N}&=& \Psi^{(-m)\pm}_{\bf N}~.\nonumber
\end{eqnarray}
We therefore have the following irreducible representations:
the singlet states $\Psi^{(0)+}$, the ``axial'' singlet states
$\Psi^{(0)-}$, the doublets $(\Psi^{(+1)+};\Psi^{(-1)+})$, and
the ``axial'' doublets $(\Psi^{(+1)-};\Psi^{(-1)-})$.
Since the partner states of the doublets transform into each
other under the symmetry operations $\sigma_{12}$ or $T$, the corresponding
values of the energy functional are equal.
The energy matrix elements of the irreducible states can then be expressed
in terms of the basic matrix elements as given in Appendix C.
Due to this decomposition of the Fock space into the irreducible
sectors, the variational approach allows us to give
upper bounds for states in each sector.
The values of the energy functional for the states in each irreducible
sector with the smallest number of nodes
${\cal E}[\Psi_{000}^{(0)+}]= 4.8676~ g^{2/3}$,
${\cal E}[\Psi_{100}^{(\pm 1)+}]= 7.1915~ g^{2/3}$,
${\cal E}[\Psi_{012}^{(0)-}]= 13.8817~ g^{2/3}$, and
${\cal E}[\Psi_{012}^{(\pm 1)-}]= 15.6845~ g^{2/3}$
give first upper bounds for the lowest energy eigenvalues
of the singlet, the doublet, the axial singlet, and the
axial doublet states.
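As an aside, the characteristic $g^{2/3}$ scaling of all these energies can be made explicit in a one-dimensional caricature of the variational step: a Gaussian trial state for $H=p^2/2+g^2x^4/2$. This toy Hamiltonian and the trial family are illustrative assumptions, not the actual three-coordinate calculation:

```python
from scipy.optimize import minimize_scalar

def trial_energy(omega, g=1.0):
    # Gaussian trial state psi ~ exp(-omega x^2 / 2) for the toy Hamiltonian
    # H = p^2/2 + g^2 x^4/2:  <p^2/2> = omega/4  and  g^2 <x^4>/2 = 3 g^2/(8 omega^2)
    return omega / 4.0 + 3.0 * g**2 / (8.0 * omega**2)

res = minimize_scalar(trial_energy, bounds=(0.1, 10.0), method="bounded")
print(res.x, res.fun)
```

The minimum sits at $\omega=(3g^2)^{1/3}=3^{1/3}g^{2/3}$ with $E_{\min}=\frac{3}{8}\,3^{1/3}g^{2/3}\approx 0.54\,g^{2/3}$, so any such variational bound inherits the $g^{2/3}$ scaling.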
In order to improve the upper bounds for each irreducible sector,
we truncate the Fock space at a certain number of knots
of the wave functions and search for the corresponding states
in the truncated space with the lowest value of the energy functional.
We achieve this by diagonalizing
the corresponding truncated Hamiltonian $H_{\rm trunc}$ to
find its eigenvalues and eigenstates. Due to the orthogonality of the
truncated space to the remaining part of Fock space, the value of the energy
functional (\ref{energyf}) for the eigenvectors of $H_{\rm trunc}$
coincides with the $H_{\rm trunc}$ eigenvalues.
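The truncate-and-diagonalize strategy is generic. As a minimal stand-in — a one-dimensional anharmonic oscillator in a truncated harmonic-oscillator Fock basis, where the Hamiltonian and coupling are assumptions for illustration only — it can be sketched as:

```python
import numpy as np

def quartic_levels(n_basis, lam=0.1, k=3):
    # Position operator in the oscillator Fock basis (hbar = m = omega = 1):
    # <n|x|n+1> = sqrt((n+1)/2), so x is tridiagonal.
    n = np.arange(n_basis - 1)
    x = np.diag(np.sqrt((n + 1) / 2.0), 1)
    x = x + x.T
    # Truncated H = p^2/2 + x^2/2 + lam x^4 = diag(n + 1/2) + lam x^4
    h = np.diag(np.arange(n_basis) + 0.5) + lam * np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(h)[:k]

print(quartic_levels(80))
```

Comparing the low eigenvalues for successively larger truncations mirrors the comparison of results at different maximal knot numbers used in the text.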
Including all states in the singlet sector with up to $5$ knots
we find rapid convergence to the following energy expectation values for
the three lowest states
$S_1,S_2,S_3$
\begin{eqnarray}
{\cal E}[S_1] &=& 4.8067~ g^{2/3}\ (4.8070~ g^{2/3}) ,\nonumber\\
{\cal E}[S_2] &=& 8.2515~ g^{2/3}\ (8.2639~ g^{2/3}) ,\nonumber\\
{\cal E}[S_3] &=& 9.5735~ g^{2/3}\ (9.6298~ g^{2/3}) ,
\end{eqnarray}
where the numbers in brackets show the corresponding results when
only states with up to $4$ knots are included in the variational calculation.
The lowest state $S_1$, given explicitly as
\begin{eqnarray}
\label{groundstate}
S_1 &=& 0.9946~ \Psi_{000}^{(0)+} + 0.0253~ \Psi_{001}^{(0)+}
- 0.0217~ \Psi_{002}^{(0)+} - 0.0970~ \Psi_{110}^{(0)+}\nonumber\\
&& - 0.0005~ \Psi_{003}^{(0)+} - 0.0033~ \Psi_{012}^{(0)+}
- 0.0146~ \Psi_{111}^{(0)+} - 0.0005~ \Psi_{004}^{(0)+}\nonumber\\
&& + 0.0040~ \Psi_{013}^{(0)+} - 0.0080 ~\Psi_{220}^{(0)+}
- 0.0038~ \Psi_{112}^{(0)+} + 0.0001~ \Psi_{005}^{(0)+}\nonumber\\
&& - 0.0004~ \Psi_{014}^{(0)+} + 0.0011~ \Psi_{023}^{(0)+}
-0.0004~ \Psi_{113}^{(0)+} + 0.0031~ \Psi_{221}^{(0)+}~,
\end{eqnarray}
nearly coincides with the state $\Psi_{000}^{(0)+}$; the contributions
of the other states are quite small.
Similarly, including all states in the doublet sector with up to $6(5)$
knots, we obtain the
following energy expectation values for the three lowest states
$D_1^{(\pm 1)},D_2^{(\pm 1)},D_3^{(\pm 1)}$
\begin{eqnarray}
{\cal E}[D_1^{(\pm 1)}] &=& 7.1682~ g^{2/3}\ (7.1689~ g^{2/3}) ,\nonumber\\
{\cal E}[D_2^{(\pm 1)}] &=& 9.6171~ g^{2/3}\ (9.6394~ g^{2/3}) ,\nonumber\\
{\cal E}[D_3^{(\pm 1)}] &=& 10.9903~ g^{2/3}\ (10.9951~ g^{2/3}).
\end{eqnarray}
Including all states in the axial singlet sector with up to $8(7)$ knots
we find the following energy expectation values for the three lowest
states $A_1,A_2,A_3$
\begin{eqnarray}
{\cal E}[A_1] &=& 13.2235~ g^{2/3}\ (13.2275~ g^{2/3}),\nonumber\\
{\cal E}[A_2] &=& 16.6652~ g^{2/3}\ (16.7333~ g^{2/3}),\nonumber\\
{\cal E}[A_3] &=& 19.1470~ g^{2/3}\ (19.3028~ g^{2/3}).
\end{eqnarray}
Finally, taking into account all states in the axial doublet sector with up
to $8(7)$ knots,
we find the following energy expectation values for the three lowest states
$C_1^{(\pm 1)},C_2^{(\pm 1)},C_3^{(\pm 1)}$
\begin{eqnarray}
{\cal E}[C_1^{(\pm 1)}] &=& 14.8768~ g^{2/3}\ (14.8796~ g^{2/3}),\nonumber\\
{\cal E}[C_2^{(\pm 1)}] &=& 17.6648~ g^{2/3}\ (17.6839~ g^{2/3}),\nonumber\\
{\cal E}[C_3^{(\pm 1)}] &=& 19.9019~ g^{2/3}\ (19.9914~ g^{2/3})~.
\end{eqnarray}
We therefore obtain rather good estimates for the energies of the lowest
states in the spin-0 sector. Extending to higher and higher numbers of
knots in each sector we should be able to obtain the low energy spectrum
in the spin-zero sector to arbitrarily high numerical accuracy.
In summary, comparing our results for the first few states in all sectors,
we find that the lowest state appears in the singlet sector with energy
\begin{equation}
\label{groundsten}
E_{0} = 4.8067~ g^{2/3}~,
\end{equation}
with expected accuracy up to three digits after the decimal point.
Its explicit form is given in (\ref{groundstate}) to the accuracy
considered.
For comparison with other work we remark that due to our boundary
condition (\ref{bc1})
all our spin-0 states correspond to the $0^-$ sector in the work of
\cite{Martin} where a different gauge invariant representation of
Yang-Mills mechanics has been used.
Their state of lowest energy in this sector is $9.52~ g^{2/3}$.
Furthermore, the authors of \cite{Gaba2}, using an analogy of $SU(N)$
Yang-Mills quantum mechanics in the large $N$ limit to membrane theory,
obtain the energy values $6.4690~ g^{2/3}$ and $19.8253~ g^{2/3}$ for
the groundstate and the first excited state.
The expectation values for the squares of the electric and the magnetic
fields for the groundstate (\ref{groundstate}) are found
to be
\begin{equation}
\langle 0|E^2|0\rangle = 6.4234~g^{2/3},\ \ \ \ \ \ \ \
\langle 0|B^2|0\rangle =3.1900~g^{2/3} ~,
\end{equation}
and the value for the ``gluon condensate'' is therefore
\begin{equation}
\langle 0|G^2|0\rangle:= 2\left(\langle 0|B^2|0\rangle
-\langle 0|E^2|0\rangle \right) = - 6.4669 ~g^{2/3}~.
\end{equation}
These results are expected to be accurate up to three digits after the dot.
Hence the variational calculation shows
that the vacuum is not self(anti-)dual and that a nonperturbative
``gluon condensate'' appears.
\bigskip
\subsection{Higher spin states}
For the discussion of the eigenstates of the Hamiltonian $H_{\theta =0}$
with arbitrary spin we write
\begin{equation}
\label{HHHs}
H_{\theta =0} = H_0 + H_{\rm spin}
\end{equation}
with the spin-0 Hamiltonian (\ref{H--0}) discussed in the last
subsection and the spin dependent part
\begin{equation}
H_{\rm spin} = {1\over 2}\sum_{i=1}^3\xi_i^2 V_i~,\ \ \ \
V_i := {x_j^2+x_k^2\over (x_j^2-x_k^2)^2}~,\qquad i,j,k\ {\rm cyclic}~.
\end{equation}
Introducing the lowering and raising operators
$\xi_{\pm} :=\xi_1\pm i\xi_2~,$
the spin dependent part $H_{\rm spin}$ of the Hamiltonian (\ref{HHHs})
can be written in the form
\begin{equation}
\label{Hrot}
H_{\rm spin} = {1\over 8}\left(\xi^2_+ + \xi^2_-\right)(V_1-V_2)
+{1\over 8}\left(\xi_+\xi_- +\xi_-\xi_+\right) (V_1+V_2)
+{1\over 2}\xi^2_3 V_3~.
\end{equation}
Since the Hamiltonian (\ref{HHHs}) commutes with $\vec{J}^2$ and $J_z$,
the energy eigenfunctions $\Psi_{JM}$ can be characterized by the two
quantum numbers $J$ and $M$.
Furthermore we shall expand the wave function $\Psi_{JM}$ in the basis of
the well-known $D$ functions \cite{Brink}, which are the common eigenstates
of the operators $\vec{J}^2=\vec{\xi}^2$, $J_z$ and $\xi_3$
with the eigenvalues $J,M$ and $k$ respectively,
\begin{equation}
\Psi_{JM}(x_1,x_2,x_3;\alpha,\beta,\gamma)
=\sum_{k=-J}^J i^J\sqrt{{2J+1\over 8\pi^2}}
\Psi_{JMk}(x_1,x_2,x_3)D_{kM}^{(J)}(\alpha,\beta,\gamma)~,
\end{equation}
where $(\alpha,\beta,\gamma)$ are the Euler angles.
We have the relations
\begin{equation}
\xi_3D_{kM}^{(J)}=k D_{kM}^{(J)}\ ,\ \
\xi_{\pm} D_{kM}^{(J)}=\sqrt{(1\pm k +1)(1\mp k)} D_{k\pm 1\ M}^{(J)}~.
\end{equation}
The task to find the spectrum of the Hamiltonian (\ref{HHHs})
then reduces to the following eigenvalue problem
for the expansion coefficients $\Psi_{JMk}$ for fixed values of $J$ and $M$
\begin{equation}
\label{eigenvp}
\sum_{k=-J}^J \left[
\left(H_0-E\right) \delta_{k^\prime,k}
+(-1)^J (2J+1)\int {\sin\beta d\alpha d\beta d\gamma \over 8\pi^2}
D_{k^\prime M}^{(J)\ast}(\alpha,\beta,\gamma)H_{\rm spin}
D_{kM}^{(J)}(\alpha,\beta,\gamma)
\right] \Psi_{JMk} = 0~.
\end{equation}
Since the spin part $H_{\rm spin}$ of the Hamiltonian does not commute
with $\xi_3$, nondiagonal terms arise, coupling different values of $k$.
In the following we shall restrict ourselves to the case of spin-1.
Using the linear combinations \cite{Landau}
\begin{eqnarray}
\Psi_{1}(x_1,x_2,x_3) &:=&
{1\over \sqrt{2}}\left[\Psi_{J=1,M,k=1}(x_1,x_2,x_3)
-\Psi_{J=1,M,k=-1}(x_1,x_2,x_3)\right]~,\\
\Psi_{2}(x_1,x_2,x_3) &:=&
{1\over \sqrt{2}}\left[\Psi_{J=1,M,k=1}(x_1,x_2,x_3)
+\Psi_{J=1,M,k=-1}(x_1,x_2,x_3)\right]~,\\
\Psi_{3}(x_1,x_2,x_3) &:=& \Psi_{J=1,M,k=0}(x_1,x_2,x_3)~,
\end{eqnarray}
the corresponding eigenvalue problem (\ref{eigenvp}) for spin-1 decouples
to the following three Schr\"odinger equations
for the wave functions $\Psi_a(x_1,x_2,x_3)$
\begin{equation}
\label{effSch1}
\left[-{1\over 2}\sum_{i=1}^3{\partial^2\over\partial x_i^2}
+{g^2\over 2}\sum_{i<j} x_i^2 x_j^2
+ V^{\rm eff}_a(x_1,x_2,x_3)
\right]\Psi_a(x)=E\Psi_a(x)~,
\ \ \ \ a=1,2,3 ~,
\end{equation}
with the effective potential
\begin{equation}
\label{effSch2}
V^{\rm eff}_a(x_1,x_2,x_3):={1\over 2}(V_b+V_c)= {1\over 2}
\left({x_a^2+x_c^2\over (x_a^2-x_c^2)^2}+
{x_a^2+x_b^2\over (x_a^2-x_b^2)^2}\right)~,\qquad a,b,c\ {\rm cyclic}~.
\end{equation}
In the spin-1 sector we have therefore succeeded in reducing the
Schr\"odinger equation to three effective Schr\"odinger equations
for the scalar degrees of freedom, with an additional effective potential
induced by the rotational degrees of freedom.
Since the effective potentials $V_i^{\rm eff}$ are related via
cyclic permutation
\begin{equation}
\sigma_{123}V_1^{\rm eff}= V_2^{\rm eff}\sigma_{123}~,\ \ \ \
\sigma_{123}V_2^{\rm eff}= V_3^{\rm eff}\sigma_{123}~,\ \ \ \
\sigma_{123}V_3^{\rm eff}= V_1^{\rm eff}\sigma_{123}~,
\end{equation}
all energy levels in the spin-1 sector are threefold degenerate.
As in the spin-0 sector we may use the variational approach to obtain
an upper bound for the lowest spin-1 state.
The variational ansatz
\begin{equation}
\Psi_a(x_1,x_2,x_3):= (x_a^2-x_b^2)(x_a^2-x_c^2)
\prod_{i=1}^3 \Psi_{0}(\omega_i,x_i)
\end{equation}
satisfies both the boundary conditions (\ref{bc1})
and (\ref{bc2}) and vanishes at the
singularities of the additional effective spin-1 potential $V_{\rm eff}$.
For the optimal values
\begin{equation}
\omega_a=1.1814~ g^{2/3},\quad\quad \omega_b=\omega_c=2.34945~g^{2/3}~,
\end{equation}
we obtain the energy minimum
\begin{equation}
\label{varres}
E_{\rm spin-1}=8.6044~ g^{2/3}~.
\end{equation}
Higher spin states can be treated analogously.
Using the linear combinations \cite{Landau}
\begin{eqnarray}
\Psi_{J|k|}^{\pm}(x_1,x_2,x_3) &:=&
{1\over \sqrt{2}}\left[\Psi_{J,M,k}(x_1,x_2,x_3)
\pm\Psi_{J,M,-k}(x_1,x_2,x_3)\right]~,\ \ \ \ \ k\ne 0~,\\
\Psi_{J0}(x_1,x_2,x_3) &:=& \Psi_{J,M,k=0}(x_1,x_2,x_3)~,
\end{eqnarray}
and noting that there are no transitions between the states
$\Psi_{J|k|}^{\pm}$ with even and odd $k$, and with $+$ and $-$ index,
the corresponding eigenvalue problem (\ref{eigenvp}) for spin-J decouples
into four separate Schr\"odinger eigenvalue problems.
For spin-2 one finds one cyclic triplet of degenerate eigenstates and
two singlets under cyclic permutation; for spin-3, two cyclic triplets,
each consisting of three degenerate states, and one singlet; and so on.
The corresponding reduction on the classical level
using the integrals of motion (\ref{cHI}) has been done in \cite{GKMP}.
We conclude this subsection by pointing out that our variational result
(\ref{varres}) shows that the higher spin states appear already at rather
low energies
and therefore have to be taken into account in calculations of the
low energy spectrum of Yang-Mills theories.
\section{Calculation of the topological susceptibility}
The explicit evaluation of the Witten formula (\ref{WF0})
for the topological susceptibility allows us to
check the consistency of the results for the low energy spectrum obtained
in Section IV using the variational approach.
Using the groundstate $S_1$ in (\ref{groundstate}), obtained from
minimization of the energy functional in the singlet sector including
irreducible states with up to $5$ knots, and the expressions for
the matrix elements of $B^2$ in the basis of irreducible states
given in Appendix C, we obtain
\begin{equation}
\label{WF01}
{d^2E_0(\theta)\over d\theta^2}\big|_{\theta =0}^{\rm contact}=
+ \langle 0|\left({\alpha_s\over 2\pi}\right)^2 B^2|0\rangle
=+0.0005117 ~ g^{14/3}~(+0.0005119~ g^{14/3})
\end{equation}
for the contact term in the Witten formula.
The number in brackets gives the corresponding result for up to $4$ knots.
Since the $Q$-operator is a spin-0 operator
and symmetric under cyclic permutations,
the propagator term involves only the singlet states in the spin-0 sector.
Using the formula for the matrix elements of the topological operator
$Q$ stated in Appendix C and including the lowest fifteen (ten) excitations
$S_2,\dots ,S_{16}$ ($S_2,\dots ,S_{11}$)
obtained approximately in the variational calculation
as eigenvectors of the truncated Fock space including irreducible
singlet states up to $5$ knots ($4$ knots), we obtain \footnote{
Here the lowest six excitations $S_2,\dots ,S_7$ are found to give the
contributions, $-103.3~\cdot 10^{-6}~g^{14/3}$
($-107.7~\cdot 10^{-6}~g^{14/3}$),
$-201.6~\cdot 10^{-6}~g^{14/3}$ ($-205.3~\cdot 10^{-6}~g^{14/3}$),
$-124.1~\cdot 10^{-6}~g^{14/3}$($-120.4~\cdot 10^{-6}~g^{14/3}$),
$-8.8~\cdot 10^{-6}~g^{14/3}$($-9.3~\cdot 10^{-6}~g^{14/3}$),
$-27.3~\cdot 10^{-6}~g^{14/3}$($-18.4~\cdot 10^{-6}~g^{14/3}$)
and $-0.16~\cdot 10^{-6}~g^{14/3}$ ($-4.1~\cdot 10^{-6}~g^{14/3}$)
respectively. The contributions from the remaining higher
excitations $S_8,\dots ,S_{16}$ ($S_8,\dots ,S_{11}$ for up to $4$ knots)
are of the order of $5\cdot 10^{-6}$ or less and form a series which is
rapidly decreasing with the number of knots.}
\begin{equation}
\label{WF02}
{d^2 E_0(\theta)\over d\theta^2}\big|_{\theta =0}^{\rm prop}=
-2\sum_n{|\langle 0|Q|n\rangle|^2\over E_n-E_0}=
-0.0004819~g^{14/3}~(-0.0004622~ g^{14/3})~.
\end{equation}
We see that the sum of the contact contribution (\ref{WF01})
and the propagator contribution (\ref{WF02}) seems to tend to zero
as the variational calculation is extended to Fock states with higher
and higher numbers of knots.
For comparison we point out that using the irreducible singlet states
$\Psi_{000}^{(0)+},\Psi_{001}^{(0)+},\dots $ up to $5$ knots ($4$ knots)
in (\ref{typeI})-(\ref{typeIII}) directly,
instead of the eigenstates $S_1,S_2,\dots ,S_{16}$
($S_1,S_2,\dots ,S_{11}$), we get
$+0.0005205~ g^{14/3}$ for the contact contribution (\ref{WF01}) and
$-0.0003808~ g^{14/3}$ ($-0.0003761~ g^{14/3}$) for the propagator
contribution (\ref{WF02}).
We thus find strong support that our variational results
are consistent with a vanishing topological susceptibility (\ref{WF0}).
\section{Concluding remarks}
In this paper we have analysed the quantum mechanics of spatially
homogeneous gauge invariant $SU(2)$ gluon fields with theta angle.
We have reduced the eigenvalue problem of the Hamiltonian of this toy
model for arbitrary theta angle to the corresponding problem with zero
theta angle.
The groundstate, its energy eigenvalue, its magnetic and electric
properties, as well as the corresponding value of the ``gluon condensate''
and the lowest excitations have been obtained with high accuracy
using the variational approach. Furthermore, it has been shown that
higher spin states already become relevant at rather low energies.
The groundstate energy has been found to be independent of the
theta angle by construction of the explicit transformation
relating the Hamiltonians with different theta parameter.
An explicit calculation of the Witten formula for the topological
susceptibility using our variational results for the
groundstate and the low lying excitations gives strong support
for the independence of the groundstate energy of theta, thus
indicating the consistency of our results.
We have found a continuous spectrum and the corresponding eigenstates
of the topological operator in this approximation and shown that its
characteristics coincide with the Euclidean self(anti-)dual zero-energy
solutions of the classical equations of motion. They are the
analogs of the instanton solutions, but do not correspond to
quantum tunneling between different vacua.
The generalization of these investigations to $SU(2)$ field theory
following \cite{KP3} is presently under investigation.
\acknowledgments
We are grateful for discussions with S.A. Gogilidze, J. Hoppe, D. Mladenov,
H. Nicolai, P. Schuck, M. Staudacher, A.N.Tavkhelidze and J. Wambach.
A.M.K. would like to thank Prof. G.R\"opke
for kind hospitality at the MPG AG ``Theoretische Vielteilchenphysik''
Rostock, where part of this work was done,
and the Deutsche Forschungsgemeinschaft for providing a
stipend for the visit.
This work was also supported by the Russian Foundation for
Basic Research under
grant No. 98-01-00101 and by the Heisenberg-Landau program.
H.-P. P. acknowledges support by the Deutsche Forschungsgemeinschaft
under grant No. RO 905/11-3.
\begin{appendix}
\section{Topological charge operator,
zero-energy solutions of the classical
Euclidean equations of motion and tunneling
amplitudes}
In this Appendix the solution of the eigenvalue problem
for the topological charge operator is described, and the relation
between its characteristics and the Euclidean zero-energy trajectories
of the unconstrained Hamiltonian (\ref{eq:PYM}) is discussed.
We shall also discuss the role of these Euclidean zero-energy solutions
of the classical equations of motion for tunneling from one
valley to another.
\subsection{The eigenvalue problem for the $Q$-operator}
The eigenvalue problem for the topological charge operator
\begin{equation}
\label{Qeigen}
Q |\Psi(t)\big>_\lambda =
\lambda |\Psi(t)\big>_\lambda
\end{equation}
in the Schr\"odinger representation reduces
to the solution for the following linear partial differential equation
\begin{eqnarray}
x_1x_2\frac{\partial}{\partial x_3}\Psi_\lambda(x_1, x_2, x_3)
+x_2x_3\frac{\partial}{\partial x_1}\Psi_\lambda(x_1, x_2, x_3)
+x_3x_1\frac{\partial}{\partial x_2}\Psi_\lambda(x_1, x_2, x_3)
=-i{8\pi^2\over g^3} \lambda \Psi_\lambda(x_1, x_2, x_3)\label{eve}~.
\end{eqnarray}
The conventional method of characteristics
relates this problem to the solution of the set of
ordinary differential equations
\begin{equation}
\label{4equs}
\frac{dx_1}{x_2x_3}= \frac{dx_2}{x_3x_1}=\frac{dx_3}{x_1x_2}~.
\end{equation}
Two independent integrals of the characteristic equations (\ref{4equs})
are given by
\begin{equation}
\label{iom}
I_1 =x_2^2-x_1^2\ ,\ \ \ I_2=x_3^2-x_1^2~.
\end{equation}
These integrals suggest the introduction of the new adapted
coordinates
$(\zeta,\eta,\rho)$
\begin{equation}
\label{newcoord}
\zeta:=x_1\,, \quad \eta :=x_2^2-x_1^2\,, \quad \rho :=x_3^2-x_1^2\,.
\end{equation}
Such functions can be used
as suitable coordinates on the subset
\begin{equation}\label{domain}
0< x_1 < x_2 < x_3 < \infty
\end{equation}
of the whole configuration space $R^+_3$.
The subset (\ref{domain}) corresponds to the domain
$0< \zeta < \sqrt{\eta+\zeta^2} < \sqrt{\rho+ \zeta^2}$.
Due to the symmetry of the $Q$-operator under arbitrary permutations
of the canonical pairs $x_i, p_i$, the results can be
extended to the whole $R^+_3$.
Writing the wave function in terms of new variables
\begin{equation}
\Psi_\lambda(x_1,x_2,x_3)=: W_\lambda(\zeta,\eta,\rho)
\end{equation}
the partial differential equation (\ref{eve}) reduces to the following
ordinary differential equation
\begin{equation}
\label{ode}
\sqrt{\zeta^2+\eta}\sqrt{\zeta^2+\rho}\frac{\partial}{\partial\zeta}
W_\lambda(\zeta,\eta,\rho)=-i\lambda
{8\pi^2\over g^3}
W_\lambda(\zeta,\eta,\rho)~.
\end{equation}
The general solution of this equation can be written in the form
\begin{equation}
\label{gensol1}
W_\lambda(\zeta,\eta,\rho)=\Psi_0(\eta,\rho)
\exp\left[-i\lambda\frac{8\pi^2}{ g^3 \sqrt{\rho}}
F\left(\arctan
\left({\zeta\over\sqrt{\eta}}\right),
{\sqrt{\rho -\eta\over \rho}}\right)\right]
\end{equation}
with an arbitrary function $\Psi_0(\eta,\rho)$ and the incomplete elliptic
integral $F(z,k)$ of the first kind \cite{Bateman}.
In terms of the original coordinates $(x_1,x_2,x_3)$
the eigenfunctions of the topological charge operator
in the sector $x_1 < x_2 < x_3$ therefore have the form
\begin{equation}
\label{gensol2}
\Psi_\lambda(x_1, x_2, x_3)\ =\Psi_0(x_2^2-x_1^2,x_3^2-x_1^2)
\exp\left[-i\lambda
\frac{8\pi^2}{ g^3 \sqrt{x_3^2-x_1^2}}
F\left(\arctan\left({x_1\over\sqrt{x_2^2-x_1^2}}\right),
\sqrt{{x_3^2-x_2^2\over x_3^2-x_1^2}}\right)\right]~.
\end{equation}
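The closed form can be checked numerically against (\ref{eve}); scipy's `ellipkinc(phi, m)` evaluates $F$ in the parameter convention $m=k^2$, and the sign of the phase is fixed here directly by (\ref{eve}). The sample point, the eigenvalue $\lambda$, and the choice $\Psi_0\equiv 1$ below are arbitrary assumptions for the check:

```python
import numpy as np
from scipy.special import ellipkinc   # incomplete elliptic integral F(phi, m = k^2)

g, lam = 1.0, 0.7                     # arbitrary sample values

def psi(x1, x2, x3):
    # eigenfunction in the sector x1 < x2 < x3, with Psi_0 = 1
    phi = np.arctan(x1 / np.sqrt(x2**2 - x1**2))
    m = (x3**2 - x2**2) / (x3**2 - x1**2)          # m = k^2 for scipy
    phase = lam * 8 * np.pi**2 / (g**3 * np.sqrt(x3**2 - x1**2)) * ellipkinc(phi, m)
    return np.exp(-1j * phase)

# finite-difference check of x1 x2 d3 + x2 x3 d1 + x3 x1 d2 acting on psi
x1, x2, x3, h = 0.4, 0.9, 1.5, 1e-6
d1 = (psi(x1 + h, x2, x3) - psi(x1 - h, x2, x3)) / (2 * h)
d2 = (psi(x1, x2 + h, x3) - psi(x1, x2 - h, x3)) / (2 * h)
d3 = (psi(x1, x2, x3 + h) - psi(x1, x2, x3 - h)) / (2 * h)
lhs = x1 * x2 * d3 + x2 * x3 * d1 + x3 * x1 * d2
rhs = -1j * (8 * np.pi**2 / g**3) * lam * psi(x1, x2, x3)
```

The two sides agree to finite-difference accuracy, and $|\Psi_\lambda|=1$ as it must be for a pure phase.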
In the other sectors the corresponding wavefunction is obtained from
(\ref{gensol2}) by cyclic permutation.
Note that the eigenfunctions (\ref{gensol2}), which constitute
the most general solution of the eigenvalue problem (\ref{Qeigen}) for the
$Q$ operator, do not satisfy the
boundary conditions (\ref{bc1}) and (\ref{bc2}) necessary for the
Hermiticity of the $Q$ operator.
In the next subsection we will show that the characteristic equations
(\ref{4equs}) of the topological charge operator
coincide with the equations which determine the zero-energy solutions
of the classical Euclidean equations of motion.
\subsection{Euclidean zero energy trajectories in Yang-Mills mechanics}
The Euclidean action
\begin{equation}
S^{\rm Eucl}= \int d\tau \left[{1\over 2}\left(\frac{dx}{d\tau}\right)^2
+ V(x) \right]~,
\end{equation}
is obtained from the corresponding action in Minkowski space
by inverting the potential $ V(x) \longrightarrow -V(x)~$.
In the one dimensional case the solutions of equation
\begin{equation}
\frac{dx}{d\tau} =\pm \sqrt{2V(x)}~,
\end{equation}
correspond to trajectories with zero Euclidean energy
\begin{equation}
E^{\rm Eucl}={1\over 2}\dot{x}^2 - V(x)~,
\end{equation}
and at the same time satisfy the classical Euclidean equations of
motion
\begin{equation}
\label{EEOM}
\frac{d^2x}{d\tau^2} = {dV\over dx}~.
\end{equation}
Such trajectories play an important role in the description
of quantum mechanical tunneling phenomena \cite{Coleman}.
In the case that the potential $V(x)$ has at least
two local minima, say at $x=-a$ and $x=+a$, with $V=0$, the Euclidean
zero energy trajectories starting at $-a$ and ending at $a$ correspond
to quantum tunneling into the classically forbidden region.
The Euclidean action for these classical $E^{\rm Eucl}=0$ trajectories,
\begin{equation}
S^{\rm Eucl}\big\vert_{E=0}= \int d\tau\left({dx\over d\tau}\right)^2 =
\int_{-a}^a dx\sqrt{2V(x)}~,
\end{equation}
determines in the semiclassical limit the WKB amplitude for a particle
to tunnel from $x=-a$ to $x=+a$
\begin{equation}
|T(E=0)| = \exp \left[ -{1\over \hbar}\int_{-a}^a dx\sqrt{2V(x)}
\right](1+O(\hbar))~.
\end{equation}
The potential of the unconstrained system considered in the
present article has three valleys.
The question is whether there exist trajectories
corresponding to tunneling between the valleys.
To answer the question let us rescale the coordinates
$x_i \rightarrow g^{-1}x_i $ and write down the Euclidean action
of the model in the form
\begin{equation}
S^{\rm Eucl}= \frac{1}{2g^2}
\int d\tau \sum_{cycl.}\left(\dot{x}_i^2 + x_j^2x_k^2\right)~.
\end{equation}
The equations of motion then read
\begin{equation} \label{eqmot}
\ddot{x}_i= x_i(x_j^2+x_k^2) ;\qquad i,j,k\ {\rm cyclic}~.
\end{equation}
The class of trajectories with zero energy
\begin{equation}
E^{\rm Eucl}=\frac{1}{2g^2}\sum_{cycl.}\left(\dot{x}_i^2-x_j^2x_k^2\right)~,
\end{equation}
can be chosen as the solutions of the following system
of first order equations
\begin{equation} \label{zeeq}
\dot{x}_1=\pm x_2x_3\ ,\ \ \ \ \dot{x}_2=\pm x_3x_1\ ,\ \ \ \
\dot{x}_3=\pm x_1x_2\ .\ \
\end{equation}
If we choose one and the same sign on the r.h.s.
of all three equations (\ref{zeeq}),
they completely coincide with
the characteristic equations (\ref{4equs}) of the $Q$-operator.
Furthermore, from (\ref{iom}) we see that the
zero-energy Euclidean solutions admit no trajectories from one
$V=0$ minimum to another and hence have no relation
to quantum tunneling phenomena.
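This can be checked numerically: integrating (\ref{zeeq}) with the plus sign from an arbitrary starting point, the combinations (\ref{iom}) stay constant along the flow, so a trajectory can never connect valleys with different values of these invariants. A sketch (the initial data are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # zero-energy Euclidean equations xdot_i = x_j x_k (cyclic), plus sign
    x1, x2, x3 = x
    return [x2 * x3, x3 * x1, x1 * x2]

x0 = np.array([0.3, 0.7, 1.1])                       # arbitrary initial point
sol = solve_ivp(rhs, (0.0, 0.5), x0, rtol=1e-10, atol=1e-12)
x1, x2, x3 = sol.y[:, -1]
I1_0, I2_0 = x0[1]**2 - x0[0]**2, x0[2]**2 - x0[0]**2
I1_T, I2_T = x2**2 - x1**2, x3**2 - x1**2            # conserved to solver accuracy
```

For positive initial data all three coordinates grow monotonically, so the trajectory runs up a single valley rather than crossing between valleys.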
\subsection{$Q$-operator and selfdual states}
The commutator of the Hamiltonian $H_0$ of (\ref{H--0}) in the spin-0 sector
and the topological charge $Q$
\begin{equation}
[H_0,Q] = -i{g^3\over 4\pi^2}\left((p_1p_2x_3 + p_2p_3x_1 + p_3p_1x_2)
-gx_1x_2x_3(x_1^2+x_2^2+x_3^2)\right)
\end{equation}
vanishes only in the subspace of states $|\Psi\rangle$ which satisfy
the Euclidean self(anti-)duality conditions
\begin{equation}
\label{SD}
p_i|\Psi\rangle =\pm gx_jx_k |\Psi\rangle ;\qquad i,j,k\ {\rm cyclic}~,
\end{equation}
which are the quantum analogs of the Euclidean $E=0$
constraints (\ref{zeeq}) discussed before.
Rewriting the Hamiltonian $H_0$ in (\ref{H--0}) in the form
\begin{equation}
H_0={1\over 2}\sum_{i,j,k\ cycl.} \left(p_i^2+g^2x_j^2x_k^2\right)=
{1\over 2}\sum_{i,j,k\ cycl.} \left(p_i\pm gx_jx_k\right)^2
\pm {8\pi\over g^2}Q ~,
\end{equation}
we see that the Hamiltonian $H_0$ and the topological operator $Q$
coincide
on the subspace of the Euclidean self(anti-)dual states (\ref{SD})
\begin{equation}
H_0|\Psi\rangle =\mp {8\pi\over g^2}Q|\Psi\rangle ~.
\end{equation}
Comparing this discussion with the corresponding original situation
$H=(1/2)(\Pi_i^{a2}+B_i^{a2})$ and
$Q=-(\alpha_S / 2\pi)\Pi_i^{a}B_i^{a}$ in terms of the
constrained fields $\Pi_i^a$ and $A_i^a$, where
$H_0=\mp (8\pi/ g^2)Q $ only in the Euclidean
self(-anti)dual case $\Pi_i^{a}=B_i^{a}$, we see that Eq.~(\ref{SD})
corresponds to the
unconstrained analog of the self(anti-)dual configurations in Euclidean
space.
The question arises whether there are any self(anti-)dual states (\ref{SD})
which are both eigenstates of $Q$ and $H_0$.
The solution
\begin{equation}
\Psi^{\pm}_{\rm SD}:= \exp[\mp igx_1x_2x_3]~,
\end{equation}
of the self(anti-)duality conditions (\ref{SD}) in Euclidean space
is neither an eigenfunction of $H_0$ nor of $Q$,
\begin{equation}
H_0\Psi^{\pm}_{\rm SD}=\pm {8\pi\over g^2} Q \Psi^{\pm}_{\rm SD}
= 2V\Psi^{\pm}_{\rm SD}~.
\end{equation}
The well-known exact nonnormalizable zero-energy solution of the
spin-0 Schr\"odinger equation (\ref{H--0}),
\begin{equation}
\Psi_0^\pm = A\exp[\mp gx_1x_2x_3]~ ,
\end{equation}
which differs from the function $\Psi_{\rm SD}$
by a factor of $i$ in the exponent, actually satisfies the
self(anti-)duality conditions in Minkowski space
\begin{equation}
-i{\partial\over \partial x_i}\Psi^{\pm}_0=
\pm i g x_jx_k \Psi^{\pm}_0 ;\qquad i,j,k\ {\rm cyclic} ~,
\end{equation}
(corresponding to $\Pi^a_i=\pm i B^a_i$), but
is not an eigenfunction of the topological operator $Q$
\begin{equation}
-{8\pi\over g^2}Q\Psi_0 =
\pm i g^2(x_1^2x_2^2+x_2^2x_3^2+x_3^2x_1^2)\Psi_0=
\pm 2iV(x_1,x_2,x_3)\Psi_0~.
\end{equation}
Finally we point out that the (approximate) groundstate wave function
(\ref{groundstate}) obtained in the variational approach is not
self(anti-)dual.
\section{Lower bound for the spin-0 Hamiltonian $H_0$}
In this appendix we would like to derive a lower bound for the spin-0
Hamiltonian $H_0$ in (\ref{H--0}) along the line of Ref. \cite{Simon}.
Using the boundary conditions (\ref{bc1}) and (\ref{bc2}) and
the well-known operator inequality
for the oscillator on the positive half-line
\begin{eqnarray}
-{\partial^2\over\partial x^2} + y^2x^2 \ge 3|y|~,\nonumber
\end{eqnarray}
it follows that
\begin{equation}
\label{HH_0}
H_0\ge {1\over 4}\left(-\Delta + 3\sqrt{2}g(x_1+x_2+x_3)\right)
=:{1\over 2} H^\prime~.
\end{equation}
Since the Hamiltonian $H^\prime$ is known \cite{Simon}
to have a discrete spectrum, the same is true for $H_0$.
An important question is where the groundstate energy lies.
The knowledge of the groundstate energy of $H^\prime$ in inequality
(\ref{HH_0}) would provide a lower bound for the groundstate energy
of $H_0$. Due to the additive structure of the potential term in
$H^\prime$ one can make a separable ansatz for the solution of
the corresponding eigenvalue problem.
The energy of the lowest such separable $H^\prime$ eigenstate
satisfying the above boundary conditions (\ref{bc1}) and
(\ref{bc2}) is
\begin{equation}
\label{sepest}
E^{\prime}_{\rm sep} = 3|\xi_0|(3g/2)^{2/3}= 9.1924~ g^{2/3}~,
\end{equation}
where $\xi_0 = -2.3381 $ is the first zero of the Airy function.
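These numbers can be reproduced directly from scipy's Airy-function zeros (setting $g=1$, so energies come out in units of $g^{2/3}$):

```python
from scipy.special import ai_zeros

xi0 = ai_zeros(1)[0][0]                       # first zero of Ai(x), approx -2.33811
E_sep = 3.0 * abs(xi0) * 1.5 ** (2.0 / 3.0)   # E'_sep in units of g^(2/3)
bound = 0.5 * E_sep                           # lower bound on the energy functional
print(E_sep, bound)
```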
From the operator inequality (\ref{HH_0}) and
the lower bound (\ref{sepest})
for separable solutions of $H^\prime$ we therefore obtain the lower
bound of the energy functional for separable functions
\begin{equation}
\label{sepestPsi}
{\cal E}[\Psi_{\rm sep}]
\ge {1\over 2} E^{\prime}_{\rm sep}= 4.5962~ g^{2/3}~.
\end{equation}
Finally we remark, as has already been pointed out in \cite{gymqm},
that an analogous variational calculation for $H^\prime$
shows that the
groundstate energy of the Hamiltonian $H^\prime$ in Eq. (\ref{HH_0})
also lies below the value $E^\prime_{\rm sep}$
of (\ref{sepest}) for the lowest separable solution.
\section{Matrix elements}
For the evaluation of the energy functional, the calculation of
the value of the ``gluon condensate'' of the groundstate,
as well as the propagator term in the
Witten formula, we need the matrix elements
of $E^2$, $B^2$ and $Q$ with respect to the irreducible Fock space states
(\ref{typeI})-(\ref{typeIII}) built from the basic Fock space states
(\ref{bel}).
\subsection{Basic matrix elements for Hamiltonian and topological charge}
The matrix elements of $E^2$ and $B^2$ with respect to the basic Fock space
states (\ref{bel}) with $\omega_1=\omega_2=\omega_3=3^{1/3}~g^{2/3}$
are given by
\begin{equation}
\langle \Psi_{m_1; m_2; m_3}|E^2|\Psi_{n_1; n_2; n_3}\rangle
= 3^{1/3}g^{2/3}
\sum_{\rm cyclic}{\cal H}_{m_in_i}^-\delta_{m_jn_j}\delta_{m_kn_k}~,
\end{equation}
and
\begin{equation}
\langle\Psi_{m_1; m_2; m_3}|B^2|\Psi_{n_1; n_2; n_3}\rangle =
3^{1/3}g^{2/3}
\sum_{\rm cyclic}
{1\over 3}\delta_{m_in_i}{\cal H}_{m_jn_j}^+ {\cal H}_{m_kn_k}^+~,
\end{equation}
where
\begin{equation}
{\cal H}_{mn}^\pm := \delta_{mn}(2n+3/2)
\ \pm\ \delta_{m(n+1)}\sqrt{n+3/2}\sqrt{n+1}
\ \pm\ \delta_{(m+1)n}\sqrt{n+1/2}\sqrt{n}~.
\end{equation}
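In a numerical implementation the ${\cal H}^\pm$ blocks are symmetric tridiagonal matrices: both Kronecker-delta terms place the same value $\sqrt{n+3/2}\sqrt{n+1}$ next to the diagonal. A sketch in units where $3^{1/3}g^{2/3}=1$:

```python
import numpy as np

def h_pm(n_max, sign=+1):
    # H^+/-_{mn}: diagonal (2n + 3/2), off-diagonals +/- sqrt(n+3/2) sqrt(n+1)
    n = np.arange(n_max)
    off = sign * np.sqrt(n[:-1] + 1.5) * np.sqrt(n[:-1] + 1.0)
    return np.diag(2.0 * n + 1.5) + np.diag(off, 1) + np.diag(off, -1)

h_minus = h_pm(30, sign=-1)   # enters <E^2>; a truncation of a positive operator
```

Since ${\cal H}^-$ is the truncation of a positive operator, its eigenvalues remain strictly positive for any truncation size.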
For the topological operator $Q$ we have
\begin{equation}
\langle\Psi_{m_1; m_2; m_3}|Q|\Psi_{n_1; n_2; n_3}\rangle =
{2ig^{8/3}\over \pi^{7/2}3^{1/6}}
\sum_{\rm cyclic}
{\cal Q}_{m_in_i}^-{\cal Q}_{m_jn_j}^+{\cal Q}_{m_kn_k}^+~,
\end{equation}
where
\begin{eqnarray}
{\cal Q}_{mn}^+ &:=& {1\over 1- 4(m-n)^2}~
{(-1)^{m+n}(2m+1)!!(2n+1)!!\over \sqrt{(2m+1)!(2n+1)!}}~,\\
{\cal Q}_{mn}^- &:=& {m-n\over 1- 4(m-n)^2}~
{(-1)^{m+n}(2m+1)!!(2n+1)!!\over \sqrt{(2m+1)!(2n+1)!}}~.
\end{eqnarray}
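For numerical evaluation the double factorials and factorials can be taken from `scipy.special`; a sketch of the ${\cal Q}^\pm_{mn}$ entries (the overall prefactor of the full $Q$ matrix element is omitted here):

```python
import numpy as np
from scipy.special import factorial, factorial2

def q_pm(m, n, sign=+1):
    # common factor (-1)^(m+n) (2m+1)!! (2n+1)!! / sqrt((2m+1)! (2n+1)!)
    common = ((-1.0) ** (m + n) * factorial2(2 * m + 1) * factorial2(2 * n + 1)
              / np.sqrt(factorial(2 * m + 1) * factorial(2 * n + 1)))
    if sign > 0:
        return common / (1.0 - 4.0 * (m - n) ** 2)            # Q^+_{mn}
    return (m - n) * common / (1.0 - 4.0 * (m - n) ** 2)      # Q^-_{mn}
```

For instance ${\cal Q}^+_{00}=1$ and ${\cal Q}^-_{00}=0$, as follows directly from the formulas above.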
\subsection{The irreducible matrix elements in terms of the basic ones}
For any operator $O$ invariant under the permutations $\sigma_{ij}$,
such as $E^2$, $B^2$, the Hamiltonian $H_0$
and the topological operator $Q$,
\begin{equation}
[O,\sigma_{ij}]=0~,
\end{equation}
the matrix elements of the irreducible states
$\langle\Psi^{(k)\pm}_{M}|O|\Psi^{(k)\pm}_{N}\rangle$
of type I (\ref{typeI}),
type II (\ref{typeII}), and type III (\ref{typeIII})
can then be expressed in terms of the basic matrix elements
\begin{equation}
{\cal M}^{m_1m_2m_3}_{n_1n_2n_3}:=
\langle\Psi_{m_1; m_2; m_3}|O|\Psi_{n_1; n_2; n_3}\rangle
\end{equation}
as follows. For the type I, II and III singlet states we have
\begin{equation}
\langle\Psi^{(0)+}_{mmm}|O|\Psi^{(0)+}_{nnn}\rangle =
{\cal M}^{mmm}_{nnn}~,
\end{equation}
\begin{equation}
\langle\Psi^{(0)+}_{mmr}|O|\Psi^{(0)+}_{nns}\rangle =
{\cal M}^{mmr}_{nns}
+2{\cal M}^{mmr}_{nsn}
\end{equation}
and
\begin{eqnarray}
\langle\Psi^{(0)+}_{m_1m_2m_3}|O|\Psi^{(0)+}_{n_1n_2n_3}\rangle
&=&{\cal M}^{m_1m_2m_3}_{n_1n_2n_3}+{\cal M}^{m_1m_2m_3}_{n_3n_1n_2}
+{\cal M}^{m_1m_2m_3}_{n_2n_3n_1}\nonumber\\
&& +{\cal M}^{m_1m_2m_3}_{n_2n_1n_3}
+{\cal M}^{m_1m_2m_3}_{n_3n_2n_1}+{\cal M}^{m_1m_2m_3}_{n_1n_3n_2}
\end{eqnarray}
respectively. The transition elements between the
type I, II and III singlets are
\begin{equation}
\langle\Psi^{(0)+}_{mmm}|O|\Psi^{(0)+}_{nns}\rangle
= \sqrt{3} {\cal M}^{mmm}_{nns}
\end{equation}
\begin{equation}
\langle\Psi^{(0)+}_{mmm}|O|\Psi^{(0)+}_{n_1n_2n_3}\rangle
= \sqrt{6} {\cal M}^{mmm}_{n_1n_2n_3}
\end{equation}
and
\begin{equation}
\langle\Psi^{(0)+}_{mmr}|O|\Psi^{(0)+}_{n_1n_2n_3}\rangle
= \sqrt{2} {\cal M}^{mmr}_{n_1n_2n_3}
+ \sqrt{2} {\cal M}^{mmr}_{n_3n_1n_2}
+ \sqrt{2} {\cal M}^{mmr}_{n_2n_3n_1}~.
\end{equation}
For the doublets, which exist only for type II and III states, we have
\begin{equation}
\langle\Psi^{(1,2)+}_{mmr}|O|\Psi^{(1,2)+}_{nns}\rangle
= {\cal M}^{mmr}_{nns} -{\cal M}^{mmr}_{nsn}
\end{equation}
and
\begin{eqnarray}
\langle\Psi^{(1,2)+}_{m_1m_2m_3}|O|\Psi^{(1,2)+}_{n_1n_2n_3}\rangle
&=&{\cal M}^{m_1m_2m_3}_{n_1n_2n_3}-(1/2){\cal M}^{m_1m_2m_3}_{n_3n_1n_2}
- (1/2){\cal M}^{m_1m_2m_3}_{n_2n_3n_1}\nonumber\\
&& +{\cal M}^{m_1m_2m_3}_{n_2n_1n_3}
- (1/2){\cal M}^{m_1m_2m_3}_{n_3n_2n_1}
- (1/2){\cal M}^{m_1m_2m_3}_{n_1n_3n_2}
\end{eqnarray}
for the type III doublets. Their transition elements are
\begin{equation}
\langle\Psi^{(1,2)+}_{mmr}|O|\Psi^{(1,2)+}_{n_1n_2n_3}\rangle
=\sqrt{2}{\cal M}^{mmr}_{n_1n_2n_3}
-\left({\cal M}^{mmr}_{n_3n_1n_2}
+{\cal M}^{mmr}_{n_2n_3n_1}\right)/\sqrt{2}~.
\end{equation}
For the axial singlets we have
\begin{eqnarray}
\langle\Psi^{(0)-}_{m_1m_2m_3}|O|\Psi^{(0)-}_{n_1n_2n_3}\rangle
&=&{\cal M}^{m_1m_2m_3}_{n_1n_2n_3}+{\cal M}^{m_1m_2m_3}_{n_3n_1n_2}
+{\cal M}^{m_1m_2m_3}_{n_2n_3n_1}\nonumber\\
&& -{\cal M}^{m_1m_2m_3}_{n_2n_1n_3}
-{\cal M}^{m_1m_2m_3}_{n_3n_2n_1}-{\cal M}^{m_1m_2m_3}_{n_1n_3n_2}
~.\end{eqnarray}
For the axial doublets we have
\begin{eqnarray}
\langle\Psi^{(1,2)-}_{m_1m_2m_3}|O|\Psi^{(1,2)-}_{n_1n_2n_3}\rangle
&=&{\cal M}^{m_1m_2m_3}_{n_1n_2n_3}-(1/2){\cal M}^{m_1m_2m_3}_{n_3n_1n_2}
- (1/2){\cal M}^{m_1m_2m_3}_{n_2n_3n_1}\nonumber\\
&& -{\cal M}^{m_1m_2m_3}_{n_2n_1n_3}
+ (1/2){\cal M}^{m_1m_2m_3}_{n_3n_2n_1}
+ (1/2){\cal M}^{m_1m_2m_3}_{n_1n_3n_2}
~.\end{eqnarray}
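The six-term combinations above differ only in the signs attached to the permutations of the lower (ket) indices: the singlet weights every permutation with $+1$, while the axial combinations weight even (cyclic) permutations with $+1$ and odd ones with $-1$. As an illustrative sketch (not part of the original derivation; the basic elements ${\cal M}$ are modelled by a plain lookup table), this structure can be encoded via the permutation parity. With all entries equal, the antisymmetric axial combination must vanish, which provides a quick consistency check:

```python
from itertools import permutations

def parity(perm, ref):
    """Sign of the permutation taking ref to perm (+1 even, -1 odd)."""
    perm = list(perm)
    sign = 1
    for i, r in enumerate(ref):
        if perm[i] != r:
            j = perm.index(r)
            perm[i], perm[j] = perm[j], perm[i]
            sign = -sign
    return sign

def singlet(M, m, n):
    """Type III singlet: all six permutations of the ket indices, weight +1."""
    return sum(M[m + p] for p in permutations(n))

def axial_singlet(M, m, n):
    """Axial singlet: cyclic (even) permutations with +1, odd ones with -1."""
    return sum(parity(p, n) * M[m + p] for p in permutations(n))

# Toy table of "basic matrix elements"; with all entries equal the
# antisymmetric axial combination vanishes identically.
m, n = (1, 2, 3), (4, 5, 6)
M = {m + p: 1.0 for p in permutations(n)}
print(singlet(M, m, n))        # 6.0
print(axial_singlet(M, m, n))  # 0.0
```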
\end{appendix}
\section*{1.~INTRODUCTION}
One of the most intriguing problems of modern meson spectroscopy is
the search for gluon bound states (glueballs). The mass of the
lightest scalar glueball is expected from lattice QCD
calculations~\cite{bali,sexton} to lie in the region $1500-1750$~MeV.
Scalar glueball candidates have been observed in several experiments.
An analysis of the GAMS data on the $IJ^{PC}=00^{++}$
states together with the data of other experiments~\cite{sim} revealed
the existence of five
scalar resonances in the mass range up to 1.9~GeV.
One of these states is superfluous for the $q\bar{q}$ systematics and is
therefore a good candidate for the lightest scalar glueball.
Nevertheless, despite considerable experimental and theoretical efforts,
the understanding of the scalar meson sector remains rather controversial.
To identify the scalar glueball unambiguously, the structures of the
scalar $q\bar{q}$ nonets need to be clarified,
because a glueball state should be superfluous for the
$q\bar{q}$ systematics.
The picture may be complicated if the glueball is located in the vicinity
of $q\bar{q}$ mesons with identical quantum numbers: this enhances their
mixing and disperses the glueball component over several mesons.
The $\pi^0\pi^0$ system is very attractive for experimental
investigation and, in particular, for the study of scalar resonances.
Only even $J^{PC}$ waves are present in this system, which
greatly simplifies the analysis and eliminates contributions from the
odd waves as compared to the $\pi^+\pi^-$ system. Study of the $\pi^0\pi^0$
$S$ wave in different processes should
extend our knowledge of the scalar mesons
and help to identify the states with an
enhanced gluonic component.
\textheight 21cm
In the present report, an overview of results on the
$\pi^0\pi^0$ system produced
in the charge exchange reaction
\begin{equation}
\begin{array}{c}
\setlength{\unitlength}{1mm}
\begin{picture}(45,9)
\put(2,7){$\pi^-p \rightarrow M^0n$}
\put(15.5,5){\line(0,-1){4}}
\put(15.5,0){$\rightarrow \pi^0\pi^0 \rightarrow 4\gamma$}
\end{picture}\label{re:cex}
\end{array}
\end{equation}
obtained by the GAMS Collaboration at 38 and 100~GeV/c
is given,
and new GAMS and WA102
results on the $\pi^0\pi^0$ system produced in the reaction
\begin{equation}
\begin{array}{c}
\setlength{\unitlength}{1mm}
\begin{picture}(45,10)
\put(2,7){$pp \rightarrow p_fM^0p_s$}
\put(16,5){\line(0,-1){4}}
\put(16,0){$\rightarrow \pi^0\pi^0 \rightarrow 4\gamma$}
\end{picture}\label{re:cent}
\end{array}
\end{equation}
at 450~GeV/c are presented. The subscripts $f$ and $s$ in (\ref{re:cent})
indicate the
fastest and slowest particles in the laboratory frame, respectively.
\section*{2.~DATA SELECTION}
The reaction (\ref{re:cex}) data have been obtained in two
experiments carried out with the
GAMS-2000 multiphoton spectrometer in 38~GeV/c $\pi^-$ beam
extracted from the 70~GeV IHEP proton accelerator (experiment SERP-E-140
at IHEP, Protvino) and with
the GAMS-4000 spectrometer in 100~GeV/c $\pi^-$ beam of SPS (experiment
NA12 at CERN). The general layout of the experiments,
details of the GAMS-2000 and GAMS-4000 constructions and calibrations
as well as event selection procedures
have been given in previous publications~\cite{pipi-38-2,pipi-100}.
\begin{figure}[t]
\center
\epsfig{figure=fig-13.eps,height=1.9in}
\caption{
Invariant mass spectra of the $\pi^0\pi^0$ system
produced in reaction~(\ref{re:cex}) at
100~GeV/c,
$-t < 0.2$~(GeV/c)$^2$ (a), and in reaction~(\ref{re:cent}),
WA102 data (b).}
\label{fi:fig-1}
\end{figure}
After kinematical
analysis (3C fit, with the masses of the recoil neutron and the two mesons
fixed), a total of $1.5\times10^6$ and
$6.5\times10^5$ $\pi^0\pi^0$ events are selected at 38 and 100~GeV/c,
respectively.
The mass spectrum of the $\pi^0\pi^0$ events
for four-momentum transfer squared $-t < 0.2$~(GeV/c)$^2$
is shown in fig.~\ref{fi:fig-1}a.
It is dominated by the $f_2(1270)$. A narrow dip
corresponding to the $f_0(980)$ is clearly seen at 1~GeV.
A peak at 2~GeV is identified with
the $f_4(2050)$. A bump around 1.7~GeV is due to the $S$ wave
contribution (see sect.~4). For
$-t > 0.2$~(GeV/c)$^2$, the $f_2(1270)$ peak is also clearly seen,
whereas a shoulder appears in place of the dip at 1~GeV.
The reaction (\ref{re:cent}) data
come from the NA12/2 and WA102 experiments
performed in 450~GeV/c proton beam of SPS at CERN. The NA12/2 experiment
has been carried out with the GAMS-4000 spectrometer.
The WA102 experiment has been performed using CERN
Omega Spectrometer \cite{omega} and GAMS-4000.
Separation of the $\pi^0\pi^0$ events is carried out on the basis
of kinematical analysis (6C fit, with four-momentum conservation imposed and
the masses of the two mesons fixed). Events containing a fast $\Delta^+(1232)$
are removed by imposing a cut $M(p_{f}\pi^0)>1.5$~GeV, which leaves 55\,000
centrally produced $\pi^0\pi^0$ events for NA12/2 and
166\,000 $\pi^0\pi^0$ events for WA102.
The mass spectrum of the centrally produced $\pi^0\pi^0$ system
is shown in fig.~\ref{fi:fig-1}b. A peak at 1.25~GeV corresponding
to the $f_2(1270)$ and a shoulder at 1~GeV, which appears due to the
interference of the $f_0(980)$ with the $S$ wave background,
are clearly seen.
\section*{3.~PARTIAL WAVE ANALYSIS}
For reaction (1), the coordinate system axes
are defined in the Gottfried-Jackson frame.
For reaction (2), the $z$-axis is chosen to be
along the direction
of the exchanged particle momentum in the ``slow'' vertex
in the $\pi^0\pi^0$ centre-of-mass system,
and the $y$-axis is defined along the cross product of
the exchanged particle momenta in the $pp$ centre-of-mass system.
The amplitudes used for the PWA are defined in the reflectivity basis
\cite{chung}.
Only amplitudes with spin $z$-projections $|m|=0$ and
1 are taken into account, since amplitudes with $|m|>1$ are consistent
with zero within the error bars, as follows from an analysis of the angular
distributions.
The amplitudes with spin $l=0$, 2 and 4 ($S$, $D$ and $G$ waves, respectively)
are used for the reaction (\ref{re:cex}) PWA at 38~GeV/c,
and the amplitudes with $l=6$ ($J$ waves)
are added at 100~GeV/c. Only $S$ and $D$ waves are considered
for reaction (\ref{re:cent}). The contribution of the higher waves is
negligibly small in the mass ranges under study for each reaction.
\begin{figure}[t]
\center
\epsfig{figure=fig-4.eps,height=1.9in}
\caption{
The $|S|^2$ for
the $\pi^0\pi^0$ system
produced in reaction~(\ref{re:cex}) at 38 (a) and
100~GeV/c (b),
$-t < 0.2$~(GeV/c)$^2$. The curves show the fit with
sums of four relativistic Breit-Wigner functions and
backgrounds.
}
\label{fi:fig-4}
\end{figure}
\section*{4.~REACTION $\pi^-p\to\pi^0\pi^0n$}
The modulus squared of the $S$ wave amplitude
for the physical solution found for $-t < 0.2$~(GeV/c)$^2$
\cite{pipi-100,pipi-38-3}
is shown in fig.~\ref{fi:fig-4}.
The $S$ wave has a rather complicated structure.
It demonstrates a series of bumps separated by dips at 1,
1.5 and 2~GeV. The first two dips point to the existence of
the $f_0(980)$ and $f_0(1500)$ resonances. The rapid variation of the relative
phase of the $S$ and $D_0$ waves
at 1 and 1.5~GeV confirms the presence of these resonances.
The dip at 2~GeV is clearly seen at 100~GeV/c;
it is less prominent at 38~GeV/c due to insufficient
detection efficiency at high mass. This dip
indicates the presence of a new scalar resonance
around 2~GeV. This conclusion is confirmed by the fast variation
of the $S$ wave phase in this mass region~\cite{pipi-100}.
A simultaneous $K$-matrix analysis of the GAMS data on the $S$ wave in the
$\pi^0\pi^0$, $\eta\eta$ and $\eta\eta^{\prime}$ systems produced in charge
exchange reactions at 38~GeV/c in the mass range below 1.9~GeV
together with the Crystal Barrel, CERN-M\"unich and BNL data
\cite{sim} points to the existence of four
comparatively narrow scalar resonances $f_0(980)$, $f_0(1300)$, $f_0(1500)$
and $f_0(1780)$ and one broad scalar state $f_0(1530)$
with a width of about 1~GeV. The poles of
the partial amplitude corresponding to physical states
are determined by the mixture of input (``bare'')
states related to the $K$-matrix poles via their transition into real
mesons. The analysis \cite{sim} shows that
one bare state in the mass region $1.2-1.6$ GeV
is superfluous for the $q\bar{q}$-classification, being a good candidate
for the lightest scalar glueball. This superfluous bare state is dispersed
over neighbouring physical states: the narrow $f_0(1300)$ and $f_0(1500)$
resonances and the broad $f_0(1530)$.
For $-t > 0.3$~(GeV/c)$^2$, a narrow peak
is seen in the $S$ wave in place of the dip at 1~GeV observed
at low momentum transfer (fig.~\ref{fi:fig-5})
\cite{pipi-38-3,pipi-38-1}.
The peak mass of $997\pm5$~MeV and width of $48\pm10$~MeV are in good
agreement with the tabulated $f_0(980)$ parameters \cite{PDG}.
\begin{figure}[t]
\center
\epsfig{figure=fig-5.eps,height=1.9in}
\caption{
The $|S|^2$ for
the $\pi^0\pi^0$ system
produced in reaction~(\ref{re:cex}) at 38~GeV/c,
$-t < 0.2$~(GeV/c)$^2$ (a) and $-t > 0.3$~(GeV/c)$^2$ (b).
The curves show the fit with
sums of relativistic Breit-Wigner functions and polynomial
backgrounds.
}
\label{fi:fig-5}
\end{figure}
A simultaneous analysis of the GAMS data on the $\pi^0\pi^0$
$S$ wave around 1~GeV together with the Crystal Barrel and
CERN-M\"unich data \cite{akssp}
shows that the $f_0(980)$ is strongly coupled to the
$\pi\pi$ channel and much more weakly to the
$K\bar{K}$ channel (the ratio of the squared $f_0(980)$
coupling constants to the $\pi\pi$ and $K\bar{K}$ channels is
equal to 6). This fact, together with the evidence for a hard component
in the $f_0(980)$ at high $-t$,
makes the interpretation of this
scalar meson as a $K\bar{K}$ molecule unconvincing.
\begin{figure}[t]
\center
\epsfig{figure=fig-7.eps,height=2.3in}
\caption{
The $|G_0|^2$ (a) and $|J_0|^2$ (b)
and phases of the $G_0$ (c) and $J_0$ (d) waves relative
to the $D_0$ wave phase for the $\pi^0\pi^0$ system
produced in reaction~(\ref{re:cex}) at 100~GeV/c,
$-t < 0.2$~(GeV/c)$^2$. The curves show the fit with
sums of relativistic Breit-Wigner functions and polynomial
backgrounds.
}
\label{fi:fig-7}
\end{figure}
Mesons with higher spins, the $f_2(1270)$, $f_4(2050)$ and $f_6(2510)$,
are seen as clear peaks in the $D$, $G$ and $J_0$ waves, respectively
(fig.~\ref{fi:fig-7}). For $-t < 0.2$~(GeV/c)$^2$, all three
mesons are produced
via one-pion exchange with a small absorption.
The ratios of the $f_2(1270)$ amounts in the
$D$ waves with $|m| = 0$ and 1 are equal to
$7\%$ at 38~GeV/c and $3\%$ at 100~GeV/c \cite{pipi-100,pipi-38-1}.
The ratios of the $f_4(2050)$ amounts in the $G_0$ and $G_{\pm}$ waves
are the same as those for the $f_2(1270)$ in the $D$ waves.
As for the $f_6(2510)$, an upper limit is set on its production in
the $J_{\pm}$ waves relative to the $J_0$ wave ($<0.1$, $95\%$ C.L.)
\cite{pipi-100}.
With increasing momentum transfer, the unnatural-parity
exchange dies away.
For $-t > 0.3$~(GeV/c)$^2$, the $f_2(1270)$ and $f_4(2050)$
are produced predominantly
via a natural-parity
exchange ($D_+$ and $G_+$ waves). This shows the similarity of the production
mechanisms of the $f_2(1270)$ and $f_4(2050)$.
\begin{figure}[t]
\center
\epsfig{figure=fig-00.eps,height=3.4in}
\caption{
The ratios of the amounts of resonances with $dP_T < 0.2$~GeV/c to the
amounts with $dP_T > 0.5$~GeV/c.
}
\label{fi:fig-00}
\end{figure}
\section*{5.~A KINEMATICAL $dP_T$ FILTER}
Production of states with valence gluons
may be enhanced by using glue-rich production mechanisms. One such mechanism
is Double Pomeron Exchange (DPE), where the pomeron is thought to be a
multi-gluonic object. With increasing energy, DPE becomes
relatively more important in central production
with respect to other
exchange processes (reggeon-pomeron and reggeon-reggeon exchange)
\cite{kirk}.
Recently it has been proposed~\cite{filter,close}
to analyse the centrally produced resonances in terms of the
difference in the transverse momenta of the exchanged particles, which
is defined as follows:
\begin{equation}
dP_T = \sqrt{(P_{y_1}-P_{y_2})^2+(P_{z_1}-P_{z_2})^2}
\end{equation}
where $P_{y_i}$, $P_{z_i}$ are the $y$ and $z$ components of the momentum
of the $i$-th exchanged particle in the $pp$ centre of mass system.
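A minimal numerical sketch of this quantity (the momentum components below are hypothetical, in GeV/c):

```python
import math

def dPT(p1, p2):
    """Difference of the exchanged-particle transverse momenta,
    p_i = (P_y, P_z) in the pp centre-of-mass system."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

# Hypothetical exchanged-particle momentum components (GeV/c):
small = dPT((0.10, 0.12), (0.08, 0.09))    # nearly parallel -> small dP_T
large = dPT((0.40, -0.30), (-0.35, 0.25))  # back-to-back  -> large dP_T
```

Small $dP_T$ then selects the glue-rich, DPE-like configurations discussed above.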
It has been observed that all the undisputed $q\bar{q}$ states
(i.e. $\rho^0(770)$, $\eta^{\prime}$, $f_1(1285)$, $f_2(1270)$,
$f_2^{\prime}(1525)$ etc.) are suppressed at small $dP_T$, whereas the
glueball candidates $f_0(1500)$, $f_J(1710)$ and $f_2(1900)$ survive.
Figure \ref{fi:fig-00} shows the ratios of the event numbers for
different resonances for small and large $dP_T$, found from the fit
to the efficiency-corrected mass spectra. It is clearly seen that all
the undisputed $q\bar{q}$ states which can be produced in DPE
have very small values of this ratio ($\le0.1$).
Some states which cannot be produced by DPE (namely those with negative
$G$ parity or $I=1$) have slightly higher values ($\approx0.25$). However, all
of these states are suppressed relative to the non-$q\bar{q}$ candidates
$f_0(980)$, $f_0(1300)$, $f_0(1500)$, $f_J(1710)$ and $f_2(1900)$,
which have values of this ratio of about~1.
\section*{6.~REACTION $pp\to{p_f}\pi^0\pi^0p_s$}
A PWA of the reaction $pp\to{p_f}\pi^0\pi^0p_s$ has been carried
out in the mass range from threshold up to 1.8~GeV.
The $S$ wave amplitude for one of the two PWA solutions
is much smaller than the amplitudes of the $D$ waves in the whole
mass range. This solution is rejected as unphysical.
\begin{figure}[t]
\center
\epsfig{figure=fig-8.eps,height=1.9in}
\caption{
The $|S|^2$ for
the $\pi^0\pi^0$ system
produced in reaction~(\ref{re:cent}) in the
NA12/2 (a) and WA102 (b) experiments.
The curves show the fit with
sums of two relativistic Breit-Wigner functions and
backgrounds.
}
\label{fi:fig-8}
\end{figure}
The $S$ wave for the physical solution is characterized by a broad bump
below 1~GeV and two shoulders, at 1 and 1.4~GeV (fig.~\ref{fi:fig-8}).
The first shoulder is attributed to the $f_0(980)$; the second one
may be explained by the interference of the $f_0(1300)$ and $f_0(1500)$
resonances (see sect.~7). Peaks
corresponding to the $f_2(1270)$ are seen in the $D_0$ and $D_-$ waves,
while such a peak is absent in the $D_+$ wave.
The ratio of the $D_-$ and $D_0$ wave intensities is about $20\%$
at the $f_2(1270)$ mass.
In order to apply the kinematical $dP_T$ filter,
PWAs have been performed in the intervals $dP_T < 0.35$~GeV/c,
$0.35 < dP_T < 0.6$~GeV/c and $dP_T > 0.6$~GeV/c.
The shoulders at 1 and 1.4~GeV in the $S$ wave
have approximately the same heights in the three $dP_T$ intervals,
whereas the bump below
1~GeV becomes much more prominent for small
$dP_T$. The $f_2(1270)$ peaks are clearly seen
in the $D_0$ and $D_-$ waves for large $dP_T$
and disappear for small $dP_T$.
\vskip 2mm
\footnotesize
\noindent
Table 1. \\
Resonance production as a function of $dP_T$ expressed
as a percentage of its total contribution. $dP_T$ intervals are
given in~GeV/c.\\[-10mm]
\noindent
\begin{center}
\begin{tabular}{cccc}
\hline\\
&$dP_T < 0.35$&$0.35 < dP_T < 0.6$&$dP_T > 0.6$ \\
\hline
\\
$f_0(980)$ & $34\pm7$ & $42\pm7$ & $24\pm5$ \\
$f_0(1300)$ & $30\pm9$ & $38\pm8$ & $32\pm7$ \\
$f_0(1500)$ & $32\pm8$ & $42\pm7$ & $26\pm7$ \\
$f_2(1270)$ & not seen & $24\pm5$ & $76\pm4$ \\
\hline
\end{tabular}
\end{center}
\normalsize
\vskip 2mm
The relative contributions of the resonances observed in the centrally
produced
$\pi^0\pi^0$ system for the three $dP_T$ intervals have been estimated from a
simultaneous fit to the NA12/2 and WA102 data (see sect.~7).
These contributions are presented in table 1.
The $f_0(1300)$ and $f_0(1500)$ have a similar behaviour as a function of
$dP_T$, which is not consistent with that observed for $q\bar{q}$ states.
It is interesting to note that the enigmatic $f_0(980)$ also
does not behave as a normal $q\bar{q}$ state. In contrast, the $f_2(1270)$
is suppressed for small $dP_T$ and enhanced for large $dP_T$, in agreement
with the behaviour of the other $q\bar{q}$ states.
\section*{7.~FIT TO THE $S$ WAVE}
In order to determine the parameters of the scalar resonances,
a fit to the $S$ wave has been performed.
The following parametrisation is used:
\begin{equation}
A(M_{\pi\pi}) = G(M_{\pi\pi}) +
\sum_{n=1}^{N_{res}} a_n e^{i\theta_n} B_n(M_{\pi\pi}),
\label{eq:samp}
\end{equation}
\begin{equation}
G(M_{\pi\pi}) = (M_{\pi\pi}-2m_{\pi})^{\alpha}
e^{-\beta{M_{\pi\pi}}-\gamma{M^2_{\pi\pi}}}
\end{equation}
where $a_n$ and $\theta_n$ are the amplitude and the
phase of the $n$-th resonance,
respectively, $\alpha$, $\beta$ and $\gamma$ are real parameters, and
$B_n(M_{\pi\pi})$ is a relativistic Breit-Wigner function.
To describe $|S|^2$, the modulus squared of function (\ref{eq:samp}) has been
convoluted with a Gaussian to account for the experimental
mass resolution.
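As a sketch of this parametrisation (illustrative only: a constant-width Breit-Wigner is used, the parameter values below are hypothetical rather than the fitted ones, and the Gaussian resolution convolution is omitted):

```python
import cmath

M_PI = 0.135  # neutral pion mass in GeV

def breit_wigner(m, m0, g0):
    """Relativistic Breit-Wigner amplitude B_n (constant width g0)."""
    return m0 * g0 / (m0**2 - m**2 - 1j * m0 * g0)

def background(m, alpha, beta, gamma):
    """Coherent background G(M) = (M - 2 m_pi)^alpha exp(-beta M - gamma M^2)."""
    return (m - 2 * M_PI)**alpha * cmath.exp(-beta * m - gamma * m**2)

def s_wave_intensity(m, bg_pars, resonances):
    """|A|^2 for A = G + sum_n a_n exp(i theta_n) B_n."""
    amp = background(m, *bg_pars)
    for a_n, theta_n, m0, g0 in resonances:
        amp += a_n * cmath.exp(1j * theta_n) * breit_wigner(m, m0, g0)
    return abs(amp)**2

# Hypothetical parameters: one f_0(980)-like resonance over a smooth background.
pars = [(1.0, 0.0, 0.98, 0.08)]
bg = (1.0, 1.0, 0.0)
```

The coherent sum is essential: it is the interference between $G$ and the $B_n$ that turns resonances into the dips and shoulders discussed in the text.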
First, a fit to the $S$ wave for the $\pi^0\pi^0$ system produced in
reaction (1) at 38 and 100~GeV/c has been carried out
using four resonances (fig.~\ref{fi:fig-4}).
Three of them, $f_0(980)$, $f_0(1500)$ and
$f_0(2000)$, correspond to the dips at 1, 1.5 and 2~GeV. One more resonance,
$f_0(1300)$, is needed to describe the bump around 1.2~GeV. Without
this resonance, the quality of the fit deteriorates significantly.
As a result of the interference of the $f_0(1300)$ and $f_0(1500)$
with the $S$ wave background, the mass of the $f_0(1300)$
turns out to be shifted to higher values as compared to the bump maximum;
the $f_0(1500)$ mass is likewise shifted as compared to the dip position.
The parameters of the
scalar resonances determined from the fit are presented in table~2.
Second, a fit to the
reaction (2) $S$ wave has been performed
(fig.~\ref{fi:fig-8}).
To begin with, two resonances are included in the fit to describe the
shoulders at 1 and 1.4~GeV. A mass of $988\pm10$~MeV and
a width of $76\pm20$~MeV for the first resonance are consistent
with the tabulated parameters of the $f_0(980)$ \cite{PDG}.
For the second resonance a mass of $1420\pm30$~MeV and
a width of $230\pm50$~MeV are obtained. This state can be
reproduced as a result of the $f_0(1300)$ and $f_0(1500)$ interference.
The amount of the $f_0(1300)$ is found to be about $20\%$ of that
of the $f_0(1500)$.
\textheight 20cm
Finally, a simultaneous fit to the
reaction (1) and (2) $S$ wave has been performed.
The $f_0(980)$, $f_0(1300)$, $f_0(1500)$ and $f_0(2000)$
are introduced to describe the $S$ wave in reaction (1); only the first three
resonances are used in the fit to the $S$ wave in reaction (2).
The fit gives $f_0(980)$ and $f_0(1300)$ masses
consistent with those obtained from the fit to the charge exchange data,
whereas the mass of the $f_0(1500)$ is shifted
to lower values. The $f_0(1500)$ mass
found from the simultaneous fit
agrees well with the tabulated value \cite{PDG}, while the width
turns out to be somewhat larger. As a result of the $f_0(1500)$ mass shift,
the mass of the $f_0(2000)$
also becomes slightly smaller but agrees within
the errors with the value obtained from the fit to the reaction (1) $S$ wave.
A state with a similar mass and width
was observed by the WA102 Collaboration in the reaction $pp\to
p_f(\pi^+\pi^-\pi^+\pi^-)p_s$~\cite{om-4pi}.
\vskip 2mm
\footnotesize
\noindent
Table 2.\\
Parameters (in MeV)
of the scalar resonances obtained from the fit
to the reaction (1) $S$ wave
(fit I) and the simultaneous fit to the
reactions (1) and (2) $S$ wave (fit~II).\\[-5mm]
\noindent
\begin{center}
\begin{tabular}{ccccc}
\hline\\
&
\multicolumn{2}{l}{~~~Fit I}&
\multicolumn{2}{l}{~~~Fit II} \\
\cline{2-5}\\
& Mass & Width & Mass & Width \\
\hline
\\
$f_0(980)$ & $970\pm10$ & $85\pm20$ & $980\pm10$ & $80\pm20$ \\
$f_0(1300)$ & $1310\pm25$ & $195\pm40$ & $1300\pm25$ & $220\pm40$ \\
$f_0(1500)$ & $1590\pm80$ & $300\pm90$ & $1495\pm35$ & $250\pm60$ \\
$f_0(2000)$ & $2020\pm60$ & $220\pm80$ & $1960\pm60$ & $210\pm80$ \\
\hline
\end{tabular}
\end{center}
\normalsize
\section*{8.~CONCLUSIONS}
The partial wave analyses of the $\pi^0\pi^0$ system produced in
the charge exchange $\pi^-p$ reaction at 38 and 100~GeV/c and in central
$pp$ collisions at 450~GeV/c have been carried out.
The $f_0(980)$ and $f_0(1500)$ appear in different ways in the two
processes. These resonances
are observed as dips in the $S$ wave at 1 and 1.5~GeV in
the charge exchange reaction, whereas
in central production the $f_0(980)$ and $f_0(1500)$ are seen as
shoulders. The $f_0(1300)$ is essential for the description
of the charge exchange data, while in central production the contribution
of this resonance is less prominent. The centrally produced
$f_0(980)$, $f_0(1300)$ and $f_0(1500)$ have a similar dependence
as a function of $dP_T$, which differs from that observed
for all the undisputed $q\bar{q}$ mesons.
An extra $f_0(2000)$ state is observed in the charge exchange reaction.
It is similar to the scalar resonance observed by the WA102 Collaboration
in the reaction $pp\to p_f(\pi^+\pi^-\pi^+\pi^-)p_s$.
The existence of a new scalar state around 2~GeV may be very
important for understanding the scalar nonet structure and
for isolating the lightest scalar glueball.
For mesons with higher spins, the production mechanisms of the $f_2(1270)$,
$f_4(2050)$ and $f_6(2510)$ as a function of momentum transfer
in reaction (1) have been studied. All three mesons are produced via
a dominant one-pion exchange for small $-t$, whereas for large
$-t$ the $f_2(1270)$ and $f_4(2050)$ are produced predominantly
through a natural-parity $t$-channel exchange.
The $f_2(1270)$ is clearly seen in the reaction (2)
$D_0$ and $D_-$ waves for large $dP_T$.
Its behaviour as a function of $dP_T$ is consistent with that expected
for a $q\bar{q}$ state.
\section*{REFERENCES}
\section{Introduction}
\footnotetext{$^\dagger$Based on a talk presented
at the $5^{th}$ International Workshop on Thermal Field Theories
and their Applications, Regensburg, Germany, August 1998.}
\footnotetext{$^*$E-Mail: Litim@ecm.ub.es}
Most of the physically interesting questions in thermal field theory lie outside the domain of validity of perturbation theory. This is true not only for static quantities like the magnetic mass or the equation of state of a quark-gluon plasma, but also for dynamical ones, like the plasmon damping rate close to the critical temperature. The reason for this breakdown is the appearance of IR divergences, which are difficult to control perturbatively.
The use of reliable resummation procedures seems therefore mandatory. The Wilsonian or Exact Renormalization Group is precisely a tool that allows for a systematic resummation beyond perturbation theory.
All current implementations have in common that they use the ``known'' physics in the UV as the starting point (see fig.~1). This region (the upper right corner) corresponds to the bare classical action of the $T=0$ theory. The goal is to find the corresponding effective action of the soft modes at non-vanishing temperature. This region (the lower left corner) describes the IR limit. The resummation problem is thus the question of how these two regions are related.
For vanishing temperature, the flows towards the IR integrate out only quantum fluctuations; they are depicted by the flows along the vertical boundaries (for the $4d$ and $3d$ limits, respectively). For $T\neq 0$, one might distinguish essentially three scenarios.
1. The {\it dimensional reduction} approach aims at relating the $4d$, $T=0$ parameters to those of an effective $3d$ theory at $T=0$. This reduces the problem to a purely $3d$ one, that is, to the problem of integrating out only quantum fluctuations in $3d$. The temperature enters via the initial parameters of the effective $3d$ theory.
2. Flow equations in the {\it imaginary time} formalism are used to directly relate the $4d$ couplings in the UV region at $T=0$ with the renormalized ones at $T\neq 0$. Both quantum and thermal fluctuations are integrated out, as the imaginary time formalism does not distinguish between them. The Euclidean flow equation in $4d$ can directly be used, with the standard prescriptions of the Matsubara formalism. This scenario corresponds to the flow along the diagonal in fig.~1.
\begin{figure}[ht]
\psfig{file=RG2P.ps,width=\hsize}
\vskip.3cm
\begin{center}
\begin{minipage}{\hsize}
\caption{\small The qualitative difference of renormalization group flows for theories at finite temperature. The arrows indicate the flow towards the infrared.}
\end{minipage}
\end{center}
\end{figure}
3. Flow equations in the {\it real time} formalism are used to relate the $4d$ renormalized couplings at $T=0$ with the renormalized ones at $T\neq 0$. A prerequisite of this approach is of course the knowledge of the renormalized $4d$ couplings in the first place. Contrary to the imaginary time approach, this one allows the investigation of non-static properties. The flow equation integrates out only thermal fluctuations, that is, it describes how modes with momenta around $k$ come into thermal equilibrium at temperature $T$. This RG flow corresponds to the flow along the base line in fig.~1.
The paper is organized as follows: sections II and III give an introduction to the key aspects of Wilsonian RGs. Sect.~II reviews the Euclidean formalism, in particular the implementation at non-vanishing temperature and the inclusion of gauge fields. Sect.~III is reserved for a recently proposed real time implementation. Sect.~IV presents an application of the latter to the U(1)-Higgs model, while sect.~V contains the discussion and an outlook.
\section{Wilsonian flow in Euclidean time}
In this section we outline the Exact Renormalization Group approach to quantum field theories in Euclidean space-time \cite{Wilson,ERG,AverageAction,Flows}. In particular, we discuss the implementation for gauge theories \cite{Abelsch,ReuterWetterich,Ellwanger,Axial,Marchesini} and the application to thermal field theories \cite{averageT,StevensConnor,LiaoStrickland,TetradisT,FreireLitim} in the imaginary time formalism.
\subsection{Coarse graining}
The main problem of perturbative methods for field theories at vanishing or finite temperature can be linked to the problematic IR behaviour of massless modes in less than four dimensions. It is therefore mandatory to find a regularization for them. Let us consider the case of a bosonic field $\phi$. A particularly simple way of curing the possible IR singular behaviour of its perturbative propagator $P_\phi$ consists in replacing it by a cut-off propagator
\beq\label{euclidcoarse}
P_\phi\to P_\phi\ \Theta_k\left({p^2}/{k^2}\right) \ .
\end{equation}
Here, we introduced a function $\Theta_k$ which depends on a yet unspecified additional momentum scale $k$. $\Theta_k$ is meant to be a (smeared-out) Heaviside step function: for large momenta $p\gg k$ it goes to one (no regularization is needed), while it vanishes (at least as $p^2$) for $p\ll k$. For any $k>0$, the above propagator remains IR finite and can safely be used within loop integrals. Finally, however, we are interested in the limit $k\to 0$, in which the regulator is removed. We thus need to describe the $k$-dependence of the theory, which brings the Exact Renormalization Group into play. Originally, it has been interpreted as a coarse-graining procedure for the fields, averaging them over some volume $k^{-d}$. In this light, the IR limit corresponds to averaging the fields over bigger and bigger volumes, {\it i.e.}~to the limit ${k\to 0}$.
\subsection{The exact renormalization group}
A path integral implementation of these ideas goes back to \cite{Wilson,ERG}, where a regulator is used in order to distinguish between hard $(p^2>k^2)$ and soft modes $(p^2<k^2)$. A slightly different point of view has been taken in \cite{AverageAction} (see also \cite{Flows}), where a smooth cut-off has been employed. Following these lines, one obtains an effective theory for the soft modes only. The starting point is the functional
\beq \label{Schwingerk}
\exp W_k[J]=\int{\cal D}\phi \exp\left(-S_k[\phi] + \int \0{d^d p}{(2\pi)^d}
J(-p) \phi(p) \right)
\end{equation}
Here, $\phi$ stands for all possible fields in the theory, which we shall restrict to bosons for simplicity. (The extension to fermions is straightforward.) $J$ denotes the corresponding sources, and $S_k=S+\Delta_k S$ contains the (gauge-fixed) classical action $S[\phi]$ and a quadratic term $\Delta_k S[\phi]$, given by
\beq
\Delta_k S[\phi] = \012 \int \0{d^d p}{(2\pi)^d}\ \phi^*(-p)\ R_k(p)\ \phi(p)
. \label{Rk}
\end{equation}
It introduces a coarse-graining via the operator $R_k(p)$. Performing a Legendre transformation leads to the coarse-grained effective action $\Ga_k[\phi]$,
\beq
\Ga_k[\phi]=-W_k[J]-\Delta_kS[\phi]+\Tr\ J \phi,\ \phi=\0{\de W_k}{\de J},
\end{equation}
where the trace $\Tr$ sums over momenta and indices. It is straightforward to obtain the flow equation for $\Ga_k$ w.r.t.~$t=\ln k/\Lambda$ (with $\Lambda$ some UV scale). The only explicit $k$-dependence in \eq{Schwingerk} stems from the regulator $R_k$, thus
\beq
\label{flowE} \partial_t\Gamma_k=\frac{1}{2}{\rm Tr}
\left\{G_k[\phi]\ \frac{\partial
R_k}{\partial t}\right\}
\end{equation}
with
\beq
G_k[\phi]=\left(\frac{\delta^2\Gamma_k}{\delta \phi\delta \phi^*}+R_k\right)^{-1}
\end{equation}
denoting the full (field-dependent, regularized) propagator. For the time being, the regulator function is kept arbitrary. It can be chosen in such a way (see below) that
\bea
\lim_{k\to \infty}\Ga_k&=&S\label{initial}\\
\lim_{k\to 0}\Ga_k&=&\Ga\ .\label{final}
\eea
Therefore the flow equation connects the (gauge-fixed) classical action $S$ with the full quantum effective action $\Ga$. Solving the path integral \eq{Schwingerk} (for $k=0$) is thus equivalent to solving \eq{flowE} with the initial condition \eq{initial} given at some UV scale. One might read this approach as a path-integral-independent definition of a quantum field theory.
Note that the flow equation \eq{flowE} is exact: no approximations have been employed for its derivation. This means in particular that \eq{flowE} is non-perturbative -- it describes unambiguously the resummation of {\it all} (quantum and/or thermal) fluctuations.
{\it Solving} the flow equation necessitates, however, truncations and approximations. One can easily recover the perturbative loop expansion, or resummations thereof. However, \eq{flowE} allows for more elaborate expansion schemes which are not confined to regions of small coupling constants. Commonly used are the derivative expansion and an expansion in powers of the fields. Deriving \eq{flowE} w.r.t.~the fields then gives flow equations for the higher-order vertices, which parametrize the effective action.
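As a simple example of such a truncation (a standard illustration from the literature, not derived in the text), keeping a standard kinetic term and a running effective potential $U_k$, the leading order of the derivative expansion, reduces \eq{flowE} for a single real scalar field to a partial differential equation,

```latex
\beq
\partial_t U_k(\phi)=\012\int\0{d^d q}{(2\pi)^d}\,
\0{\partial_t R_k(q^2)}{q^2+R_k(q^2)+U_k''(\phi)}~,
\end{equation}
```

whose right-hand side remains finite for regulators obeying the conditions discussed below.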
\subsection{The regulator function}
Let us be slightly more explicit about the regulator function $R_k$. We will impose the following constraints on the regulator $R_k$ such that IR finiteness, \eq{initial} and \eq{final} are ensured:
\begin{itemize}
\item[(i)] $R_k$ has a non-vanishing limit for $p^2 \to 0$, typically like a mass term $R_k\to k^2$.
\item[(ii)] $R_k$ vanishes in the limit $k\to 0$, and for $p^2 \gg k^2$.
\item[(iii)] For $k\to \infty$ (or $k\to \Lambda$ with $\Lambda$ being some UV scale much larger than the relevant physical scales), $R_k$ diverges like $\Lambda^2$.
\end{itemize}
Condition (i) reflects the IR finiteness of the propagator at non-vanishing $k$ even for vanishing momentum $p$ (which is why the regulator has been introduced in the first place). Condition (ii) ensures that any dependence on $R_k$ drops out for $k\to 0$ (that is to say that $\Gamma_{k\to 0}$ reduces to the full quantum effective action $\Gamma$), and that large momentum modes are suppressed ({\it i.e.}~integrated out). From condition (iii) we conclude that the saddle point approximation to (\ref{Schwingerk}) becomes exact for $k\to\La$, and $\Gamma_{k\to\Lambda}$ reduces to the (gauge-fixed) classical action $S$. The regulator function is related to $\Theta_k$ in \eq{euclidcoarse} as
\beq\label{theta}
\Theta_k\left(\0{p^2}{k^2}\right)=1-\0{R_k(p^2)}{p^2+R_k(p^2)}.
\end{equation}
One easily verifies that any $R_k$ with the properties (i)-(iii) yields a (smeared-out) Heaviside step function when inserted in \eq{theta}. We also conclude from \eq{theta} that the operator $\partial_t R_k$ is a (coarse-grained) $\delta$-function. This is consistent with the flow equation \eq{flowE}. At any fixed scale $k$, only loop-momenta $p^2$ around $k^2$ can contribute to the change of $\Ga_k$. All other momenta are suppressed because of $\partial_t R_k$ having support only in the vicinity of $k^2$. This is the essence of a Wilsonian philosophy based on the integration over infinitesimal momentum shells.
Typical classes of smooth regulators used in the literature are exponential ones with
\beq\label{expreg}
R_k(p^2)= \0{p^2}{\exp\left[(p^2/k^2)^n\right]-1}
\end{equation}
or algebraic ones with
\beq
R_k(p^2)= p^2\left(\0{k^2}{p^2}\right)^n\ .
\end{equation}
For $n=1$, both classes have a mass-like limit for small momenta. The limit $n\to\infty$ corresponds to the sharp cut-off limit \cite{Scalar}.
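As an illustrative numerical cross-check (an added sketch, not part of the original analysis; the values of $n$ and $p^2/k^2$ are arbitrary), one can insert the exponential regulator \eq{expreg} into \eq{theta} and verify that the resulting $\Theta_k$ is indeed a smeared-out step function that sharpens towards $\theta(p^2-k^2)$ with growing $n$, and that $R_k\to k^2$ for small momenta at $n=1$:

```python
import math

def R_exp(p2, k2, n=1):
    """Exponential regulator: R_k(p^2) = p^2 / (exp[(p^2/k^2)^n] - 1)."""
    x = (p2 / k2) ** n
    if x > 700.0:          # regulator is exponentially small here; avoid overflow
        return 0.0
    return p2 / math.expm1(x)

def theta_k(p2, k2, n=1):
    """Eq. (theta): Theta_k = 1 - R_k / (p^2 + R_k)."""
    R = R_exp(p2, k2, n)
    return 1.0 - R / (p2 + R)

# mass-like limit for n = 1: R_k -> k^2 as p^2 -> 0
print(R_exp(1e-8, 1.0, n=1))          # ~ 1.0 = k^2

# smeared step function, sharpening to theta(p^2 - k^2) as n grows
for n in (1, 4, 16):
    print(n, round(theta_k(0.25, 1.0, n), 6), round(theta_k(4.0, 1.0, n), 6))
```

For $n=16$ the function is already practically a sharp cut-off, in line with the statement that $n\to\infty$ corresponds to the sharp cut-off limit.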
Two comments are in order. The first one concerns the case of a mass-like regulator $R_k=k^2$. This regulator is somewhat special. First of all, it is {\it independent} of momenta. The operator $\partial_t R_k$ in \eq{flowE} is, for this particular regulator, neither peaked nor sufficiently suppressed for large momenta. Thus, {\it all} momenta $p$ do contribute to $\Gamma_k$ at any $k$ ({\it i.e.}~the second part of (ii) is not fulfilled). Typically it is observed that the convergence properties of approximate solutions to the flow equation are worse for a mass-like regulator than for exponential ones. Furthermore, the unsuppressed large-momentum modes introduce additional UV divergences. Although analytical computations simplify tremendously with this regulator, it has to be used with care.
The second comment concerns the (in-)dependence of physical quantities on the shape of the regulator function. Obviously, physical observables should not depend on the particular regulator chosen \cite{Litim97a}, and will not depend on $R_k$ if the flow equation is integrated down to $k=0$. This is an immediate consequence of \eq{final}. However, any computation has to resort to some approximation (typically only a finite number of operators are considered for an expansion of $\Ga_k$). Thus, approximations can introduce a {\it spurious} scheme dependence. Any approximation scheme that yields large quantitative or even qualitative corrections is not acceptable and has to be discarded. Therefore it is mandatory to compute the scheme dependence within a given approximation. At the same time, this is an efficient way to check the viability of a given Ansatz \cite{Litim97a}.
\subsection{Gauge invariant Green functions}
What can be done for gauge theories? The obvious problem is that the regulator term, being quadratic in the fields, is not gauge invariant, which raised some criticism about the present approach. The question arises to what extent gauge theories can be handled, given that the regulator term \eq{Rk} seems not to be compatible with gauge invariance. This problem has been considered using the background field method \cite{Abelsch,ReuterWetterich}, modified Slavnov-Taylor or Ward Identities \cite{Ellwanger,Axial} or (perturbative) fine tuning conditions \cite{Marchesini}.
We will follow \cite{Axial} to illustrate the problem, and to show that gauge invariance of physical Green functions can indeed be maintained. Let us consider the example of an SU($N$) gauge theory in an axial gauge, with $\phi=A^a_\mu$ and the gauge fixing
\beq
\label{gf}
S_{\rm gf}[A]=\01{2}\Tr \ n_\mu A^a_\mu\ \01{\xi n^2}\ n_\nu A^a_\nu.
\end{equation}
Here, $\xi$ is the gauge fixing parameter (chosen to be momentum independent) and $n_\mu$ is a fixed Lorentz vector. Axial gauges have the nice property that the ghost sector decouples. The problem of Gribov copies is therefore absent, and the number of Feynman diagrams is significantly reduced. Furthermore, the spurious singularities encountered within a perturbative approach have been shown to be absent \cite{Axial}. Finally, the axial vector $n_\mu$, which appears in the gauge fixing condition, has a natural explanation within thermal field theories, as the thermal bath singles out a rest frame characterized by a Lorentz vector.
Let us now perform an infinitesimal gauge transformation. This leaves the measure in \eq{Schwingerk} invariant and leads to a functional identity, the so-called {\it modified} Ward Identity (mWI) ${\cal W}_k[A]=0$, with
\bea \di {\cal
W}_k^a[A]&=& \di D_\mu^{ab}(x)\frac{\delta
\Gamma_k[A]}{\delta A^b_\mu(x)} \di-\frac{1}{n^2\xi}
n_\mu\partial^x_\mu\ n_\nu A^a_\nu (x) \nonumber \\ && \di -g \int d^dy f^{abc}R^{cd}_{k,\mu\nu}
(x,y) G_{k,\nu\mu}^{db}(y,x).
\label{mWI}
\eea
Here, $D^{ab}_\mu=\de^{ab}\partial_\mu+ g f^{acb} A_\mu^c$ denotes the covariant derivative. The mWI contains all terms of the standard Ward Identity (WI), {\it i.e.}~the first line of \eq{mWI}. Additionally, it contains a term proportional to $R_k$, which is a remnant of the coarse-graining. We observe that this term vanishes for any regulator in the limits $k\to\infty$ and $k\to 0$. In particular, it vanishes identically for a mass-like regulator $R_k=k^2$, thus reducing the mWI to the usual WI in these cases. The flow of ${\cal W}_k[A]$ is obtained
from \eq{flowE} and \eq{mWI}:
\beq \label{compatible}
\partial_t {\cal
W}^a_k = -\frac{1}{2}{\rm Tr}\left( G_k \frac{\partial
R_k}{\partial t} G_k \frac{\delta}{\delta
A}\times \frac{\delta}{\delta A}\right){\cal W}^a_k \ .
\end{equation}
The flow \eq{compatible} has a fixed point ${\cal W}_k[A]=0$. Thus, if $\Ga_k[A]$ solves \eq{mWI} at some scale $k_0$ and evolves subsequently according to \eq{flowE}, then $\Ga_k[A]$ for $k<k_0$ fulfills ${\cal W}_k[A]=0$ as well. In particular, $\Ga_{k=0}[A]$ will obey the usual WI, which establishes gauge invariance for the {\it physical} Green functions.
This approach has been used to prove that the 1-loop $\beta$-function of SU($N$) gauge theory coupled to $N_f$ fermions is indeed universal, independent of the choice for $R_k$ or $\xi$ (in $d=4$) \cite{Axial,1-loop}.
It is worth stressing that the mWI plays a double role when it comes to {\it approximate} solutions, {\it i.e.}~for a truncation of $\Ga_k[A]$. First of all, the mWI can be implemented even in these cases. Perturbatively, this is well known, and sometimes denoted as the Quantum Action Principle. A general procedure that allows one to respect the mWI in numerical implementations even {\it beyond} perturbation theory can also be given (for the details, see \cite{thermalRG}). At the same time, the mWI allows one to control the domain of validity of a given truncation \cite{thermalRG}. This error control is quite welcome also on a computational level, as it avoids having to go to the next order in the chosen expansion.
\subsection{Imaginary time formalism}
It is straightforward to upgrade the above approach to the case of non-vanishing temperature within the imaginary time formalism \cite{averageT}. The flow equation contains a loop integral over some momentum dependent functional. Therefore, the only changes concern the Tr in the flow equation, which becomes a sum over Matsubara frequencies, and the 0-component of the loop momenta, which is discretized
\bea
\int\0{d^dp}{(2\pi)^d} \to T\sum_n \int\0{d^{d-1}p}{(2\pi)^{d-1}},\quad p_0\to 2\pi n T \ .
\eea
Note, however, that the flow equation now connects the UV parameters of the $4d$ theory at $T=0$ with the IR ones at $T\neq 0$. Thus, both {\it quantum} and {\it thermal} fluctuations do contribute to the flow equation. This procedure corresponds to integrating along the diagonal as depicted in fig.~1. It has been applied to phase transitions in scalar \cite{averageT,StevensConnor,LiaoStrickland} and gauge field theories \cite{TetradisT,FreireLitim}.
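The Matsubara replacement above can be checked on a simple example (an added sketch; the function $f(p_0)=1/(p_0^2+m^2)$ and the values of $m$ and $T$ are arbitrary illustrative choices, not taken from the text): the sum $T\sum_n f(2\pi n T)$ has the standard closed form $\coth(m/2T)/(2m)$, which reduces to the vacuum integral $\int \frac{dp_0}{2\pi}\, f(p_0)=1/(2m)$ as $T\to 0$:

```python
import math

def matsubara_sum(m, T, nmax=200000):
    """T * sum over Matsubara frequencies p0 = 2*pi*n*T of 1/(p0^2 + m^2)."""
    s = 1.0 / m**2                                        # n = 0 term
    for n in range(1, nmax + 1):
        s += 2.0 / ((2.0 * math.pi * n * T)**2 + m**2)    # +/- n pairs
    return T * s

def coth(x):
    return math.cosh(x) / math.sinh(x)

m, T = 1.0, 0.7
print(matsubara_sum(m, T), coth(m / (2.0 * T)) / (2.0 * m))  # closed form
print(matsubara_sum(m, 1e-3))                                # -> 1/(2m) = 0.5 as T -> 0
```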
At this point it is interesting to note the similarity between exact flow equations and the approach advocated in \cite{Pressure} to compute the non-perturbative pressure. Indeed, the method of \cite{Pressure} can be seen as a flow equation with a mass-like regulator $R_k=k^2$. But as we commented earlier, a mass term regulates only {\it marginally}. In order to avoid the additional UV problem for large-momentum fluctuations, one should consider instead {\it differences}, like $P[T]-P[0]$. This makes these divergences cancel out and yields a well-defined flow equation. Even more interesting is the extension to gauge theories, which has not been studied yet. For a mass-like regulator the second line in \eq{mWI} vanishes identically. This corresponds to the statement that gauge invariance can be maintained for any $k$ (in this particular case), and is a special feature of the axial gauge fixing used. The above leads to the conclusion that the generalization of \cite{Pressure} to gauge theories is most conveniently done within axial gauges \cite{Axial}. A more detailed account is given in \cite{thermalRG}.
\section{Wilsonian flow in real time}
The philosophy of the previous section is most appropriate for static situations, that is to say for equilibrium physics, and can be used to compute physical quantities at first order phase transitions ({\it i.e.}~free energies, surface tension, latent heat), or at second order ones ({\it i.e.}~critical exponents, equations of state, amplitude ratios). Yet a number of interesting physical problems are related to non-static properties of quantum field theories, and the question arises whether this approach can be extended to space-time with Minkowskian signature.
Recently, a strategy has been proposed for integrating-out the temperature fluctuations within a real-time formulation of thermal field theory \cite{TRG1}. The key idea is to introduce a {\it thermal} cut-off for the on-shell degrees of freedom. This philosophy has several advantages. By construction, it allows one to study precisely the effects of thermal fluctuations only, and it is not restricted to static quantities \cite{Pietroni}. As thermal fluctuations are on-shell, it is straightforward to guarantee a gauge invariant implementation of this coarse-graining even for intermediate coarse-graining scales $k$ \cite{TRG2}. However, no statements can be made regarding quantum fluctuations. They have to be included from the outset in the initial condition.
\subsection{Real time formalism}
At finite temperature the fields $\phi$ are defined on a contour $C$ in the complex time plane \cite{TFT,Kapusta,LeBellac}. In the real time formulation this contour consists of a forward branch (the real axis in the complex time plane) and a backward branch (parallel to the forward branch, at arbitrary distance $\sigma<1/T$). It is convenient to introduce two separate sets of fields $\phi_1$ (the original fields) and $\phi_2$ (the so-called thermal ghosts), living on the two branches. Thus there is a doubling of the degrees of freedom and the propagators become $2\times 2$ matrices, determined by the boundary conditions.
Let us consider the case of a scalar field of mass $m$. Its free propagator $D^{\phi}$ at $T=0$ is
\beq
D^{\phi}(p)=\frac{1}{p^2-m^2+i\varepsilon}
\end{equation}
The corresponding tree level real time propagator in momentum space is
\beq
D^{\phi}_T=Q[D^{\phi}(p)] + \Delta^{\phi}(p)\, N_T(|p_0|)
\; B.\label{thermalprop}
\end{equation}
We introduced $\Delta^{\phi}(p)=D^{\phi}(p) - \left(D^{\phi}(p)\right)^*$ and the matrices $B_{ij}=1, (i,j=1,2)$ and
\bea
Q[D^{\phi}] &=&
\left(
\begin{array}{cc}
D^{\phi} & \Delta^{\phi}\theta(-p_0) \\ &\\
\Delta^{\phi}\theta(p_0) & - \left(D^{\phi}\right)^*
\end{array}
\right)
\label{nonthermal}
\eea
The function $N_T$ denotes some thermal distribution function, which, at thermal equilibrium, would be the Bose-Einstein distribution. It contains basically all the thermal information. In the $\varepsilon\rightarrow 0$ limit, we observe
\beq
\Delta^{\phi}(p) \longrightarrow -2i \pi \de(p^2-m^2)
\end{equation}
and conclude that the thermal part of the tree level propagator involves on-shell degrees of freedom only.
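This limit can be illustrated numerically (an added sketch; the values of $m^2$, $\varepsilon$ and the integration grid are arbitrary choices): $-\Delta^{\phi}/(2i)=\varepsilon/[(p^2-m^2)^2+\varepsilon^2]$ is a Lorentzian in $p^2$ whose area stays $\pi$ as $\varepsilon\to 0$, consistent with $\Delta^{\phi}\to -2i\pi\,\delta(p^2-m^2)$:

```python
import math

def lorentzian(p2, m2, eps):
    """-Delta^phi/(2i) = eps / ((p^2 - m^2)^2 + eps^2)."""
    return eps / ((p2 - m2)**2 + eps**2)

m2 = 1.0
area = {}
for eps in (0.1, 0.01):
    s, h, x = 0.0, 1e-4, -10.0
    while x < 10.0:              # Riemann sum over p^2
        s += lorentzian(x, m2, eps) * h
        x += h
    area[eps] = s / math.pi
    print(eps, area[eps])        # -> 1 as eps -> 0
```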
\subsection{Thermal coarse graining}
The question is how the thermal propagator could be modified in order to implement a thermal coarse graining. The proposal of \cite{TRG1} consists in a scale dependent modification of the thermal distribution function,
\beq\label{thermalcoarse}
N_T\to N_{T,k}=N_T \, \Theta_k(|{\bf p}|/k)
\end{equation}
The corresponding tree-level cut-off thermal propagator $D_{T,k}$ follows from \eq{thermalprop} through the replacement \eq{thermalcoarse}, {\it i.e.}
\beq
D^{\phi}_{T,k}=Q[D^{\phi}(p)] + \Delta^{\phi}(p)\, N_{T,k}(|p_0|,|{\bf p}|)
\; B.\label{thermalprop-k}
\end{equation}
Eq.~\eq{thermalcoarse} may be interpreted as the thermal analogue of \eq{euclidcoarse}. The thermal distribution $N_T$ is switched on mode by mode through the $\Theta_k$-function, whereas in \eq{euclidcoarse}, the propagation of longer wave length modes is switched on.
The modes with $|{\bf p}|\gg k$ will be in thermal equilibrium at temperature $T$, while those with $k\gg |{\bf p}|$ remain in equilibrium at the temperature $T=0$. We shall use in the sequel a sharp cut-off
\beq
\Theta_k(|{\bf p}|)=\theta(|{\bf p}|-k).
\end{equation}
Alternatively, an exponentially smooth cut-off is given by
\beq\label{smooth}
\Theta_k=1-\0{e^{|p_0|/T}}{1+e^{|{\bf p}|/k}(e^{|p_0|/T}-1)}\ .
\end{equation}
Note that \eq{smooth} yields a modified Bose-Einstein distribution very similar to the one proposed by Nair, although within a different perspective, in \cite{Nair}.
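The two limits described above can be made explicit numerically (an added sketch; the numbers chosen for $|p_0|/T$ and $|{\bf p}|/k$ are arbitrary): with the smooth cut-off \eq{smooth}, the product $N_{T,k}=N_T\,\Theta_k$ reduces to the Bose-Einstein distribution for $|{\bf p}|\gg k$ and vanishes for $|{\bf p}|\ll k$:

```python
import math

def N_T(p0, T):
    """Bose-Einstein distribution."""
    return 1.0 / math.expm1(abs(p0) / T)

def Theta_smooth(p0, p, k, T):
    """Smooth thermal cut-off of eq. (smooth)."""
    ex = math.exp(abs(p0) / T)
    return 1.0 - ex / (1.0 + math.exp(abs(p) / k) * (ex - 1.0))

def N_Tk(p0, p, k, T):
    """Cut-off thermal distribution N_{T,k} = N_T * Theta_k."""
    return N_T(p0, T) * Theta_smooth(p0, p, k, T)

p0, T, k = 1.0, 1.0, 1.0
print(N_Tk(p0, 50.0, k, T), N_T(p0, T))  # |p| >> k: mode is thermalized
print(N_Tk(p0, 0.02, k, T))              # |p| << k: mode stays at T = 0
```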
\subsection{The thermal renormalization group}
A path integral formulation of these ideas has been given, following \cite{TFT}, in full analogy to the Euclidean case \cite{TRG1}. Let us first introduce sources $J_i\ (i=1,2)$ for the fields $\phi_i$ in order to obtain the path integral representation of the generating functional of real-time cutoff Green functions as
\bea
Z_k[J] &=& \int {\cal D}\phi_1 {\cal D}\phi_2 \exp i \big( \mbox{\small $\frac{1}{2}$}
\Tr\,\phi_i \left(D_{T,k}^{-1}\right)_{ij} \phi_j
\nonumber \\ && \quad\quad\quad + S_{\rm int}[\phi] + \Tr\,J_i\phi_i\big)\,.
\label{path}
\eea
The trace corresponds to the sum over all fields and indices (only the thermal ones have been given explicitly), and momentum integration. $S_{\rm int}[\phi] $ is the bare interaction action
\beq
S_{\rm int}[\phi]=S_{\rm int}[\phi_1]-S^*_{\rm int}[\phi_2].
\end{equation}
The flow equation for $Z_k[J]$ can be derived easily from \eq{path} and reads, using $t=\ln (k/\Lambda)$,
\beq
\p_t Z_k[J]
=-\0{i}{2} {\rm Tr}\left[\, \0{\de}{\de J_i} \p_t \left(D_{T,k}^{-1}\right)_{ij}
\0{\de}{\de J_j} Z_k[J]\right] \,.
\label{evz}
\end{equation}
We define as usual the cutoff effective action $\Ga_k$ as the Legendre transform of the generating functional of the connected Green functions, using $W_k[J]= i \ln Z_k[J]$, as
\beq
\Ga_k[\phi] = \Tr\,J_i\phi_i - \s012\,\phi_i\left( D_{T,k}^{-1}\right)_{ij} \phi_j -W_k[J]
\label{action}
\end{equation}
with
\beq
\phi_i =\0{\de W_k[J]}{\de J_i}
\end{equation}
where we have isolated the free part of the cutoff effective action and used for the classical fields the same notation as for the quantum fields.
The flow equation for the cutoff effective action $\Ga_k[\phi]$ follows as
\bea\label{flow}
\p_t \Ga_k[\phi] &=& \0{i}{2} \Tr \left[\p_t D_{T,k}^{-1}\left(D_{T,k}^{-1}
+\0{\de^2 \Ga_k[\phi] }{\de\phi \:\de\phi}
\right)^{-1} \right]
\eea
(thermal indices have been suppressed now). This flow equation is the thermal analogue of \eq{flowE}.
Given that the initial condition for $\Ga_\La[\phi]$ at $\La\gg T$ is the full renormalized theory at zero temperature, the above flow equation describes the effect of the inclusion of thermal fluctuations at a momentum scale around $|{\bf p}| = k$. At any fixed scale $k$, $\Ga_k$ describes a system in which only the high frequency modes $|{\bf p}| >k$ are in thermal equilibrium, while the low frequency modes $|{\bf p}| <k$ do not feel the thermal bath and behave like zero temperature modes.
The flow equations for all the possible vertices are obtained by expanding \eq{flow} in powers of the fields. For their derivation it is helpful to note that the cutoff effective action has a discrete $Z_2$ symmetry \cite{NS}
\beq
\Ga_k[\phi_1,\phi_2]=-\Ga^*_k[\phi_2^*,\phi_1^*]\ ,
\end{equation}
which relates different vertices to each other.
\section{Application: U(1) Higgs theory}
In this section we will apply the real-time thermal RG to an Abelian Higgs model. A derivation of the flow for the effective potential is given. A more elaborate presentation will be given elsewhere \cite{realAHM}.
The U(1) Higgs model is quite appealing as a testing ground for the feasibility of this approach. First of all, it is an important model for cosmological phase transitions. Furthermore, the starting point for the use of \eq{flow}, that is the renormalized action at vanishing temperature, is well known \cite{abelianhiggs4d} and can be computed with high accuracy. This model has several different mass scales, which we expect to thermalize and decouple at different coarse-graining scales. Finally, in the limit $T\to \infty$ we expect to find flow equations for the purely $3d$ Abelian Higgs model. This might shed new light on the superconducting phase transition in $3d$.
\subsection{The Euclidean effective potential}
The field content of this model is given by $N$ complex scalar fields, an Abelian gauge field, ghost fields, and their thermal partners. We will use the Landau gauge throughout. For the time being, we shall be interested in the phase transition at finite temperature, whose details are encoded in the coarse grained effective potential. Therefore, we have to relate the Euclidean effective potential $V_k(\bar\phi)$ to $V_k[\phi]$. Evaluating the cutoff effective action for constant fields results in
\beq
\Ga_k[\phi={\rm const}]=-V_k[\phi]\int d^4 x \,.
\end{equation}
We shall evaluate it for a field configuration $\phi=\hat\phi$ which gives a non-vanishing v.e.v.~$\bar\phi$ only to the real part of one component of $\phi_1$. All other fields are then set equal to zero. In \cite{NS} it was shown that the Euclidean potential (for one real scalar field) is given by
\beq\label{euclidpot}
\0{\p V_k(\bar\phi)}{\p\bar\phi}=
\left. \0{\p V_k[\phi]}{\p\phi_1}
\right|_{\phi=\hat\phi}
\,.
\end{equation}
This relation can be generalized to an arbitrary number of fields \cite{realAHM}. Apart from an irrelevant constant, the effective potential is \cite{NS}
\beq
V_k(\bar\phi) = \0{1}{2} m^2 \bar\phi^2 -\int^{\bar\phi} {d} \phi \:\Ga_k^{(1)} ({\phi})
\,,\label{effpot}
\end{equation}
where
\beq
\Ga_k^{(1)} (\bar\phi) \equiv \left.\0{\de\Ga_k[\phi]}{\de\phi_1}\right|_{\phi=\hat\phi}
\label{tadev}
\end{equation}
denotes the tadpole. Thus the flow equation for the Euclidean potential is
deduced from the one for the tadpole, using \eq{flow}.
\subsection{Approximate flow equations}
In order to obtain a closed set of flow equations, we have to employ some approximations regarding higher order vertices. We shall employ the leading order approximation in a derivative expansion. This implies that the wave function renormalization of the scalar field is neglected. Furthermore, we shall assume that the action remains at most quadratic in the Abelian gauge field. The (possible) imaginary parts of the scalar and photon self energies are neglected.
These approximations allow a closed set of flow equations for the functions $V_k(\rb)$ and $U_k(\rb)$, where $\rb=\bar\phi\bar\phi^*$, and $U_k(\rb)$ is the (field dependent) coefficient in front of the $A^2$ operator in the action. The flow for $U_k$ is related to the longitudinal part of the photon self energy at vanishing momenta. For the time being, we discard the distinction between the temporal and the spatial gauge field component. (The more general Ansatz $U_k A^2\to U_{1,k} A_0^2+U_{2,k} A_i^2$ is able to correctly describe the decoupling Debye mode.) It is useful to introduce the following shorthand notations
\bea
\Omega_1(\rb)&=&\sqrt{k^2+V_k'(\rb)+2 \rb V_k''(\rb)}\\
\Omega_2(\rb)&=&\sqrt{k^2+V_k'(\rb)}\\
\Omega_3(\rb)&=&\sqrt{k^2+U_k(\rb)}
\eea
in terms of which the flow for $V_k(\rb)$ reads
\bea
\0{\partial_t V_k(\rb)}{Tk^3} = &-&\0{1}{2 \pi^2} \ln\left[1-\exp\left(-\Omega_1/T\right)\right]\,
\theta(\Omega^2_1) \nonumber \\
&-&
\0{2N-1}{2 \pi^2} \ln \left[1-\exp\left(-\Omega_2/T\right)\right]\,
\theta(\Omega^2_2) \nonumber \\
&-& \0{3}{2 \pi^2} \ln \left[1-\exp\left(-\Omega_3/T\right)\right]\,\theta(\Omega^2_3).
\eea
An analogous flow equation is obtained for the function $U_k(\rb)$.
Now we will rescale all the dimensionful quantities with appropriate powers of $T$ and $k$ in order to obtain dimensionless flow equations for the $N$-component Abelian Higgs model. To that end, we shall introduce the following functions (using ${}'=\partial_\r$):
\bea
v(\r,t) &=& V(\rb,t)/T k^3\\
u(\r, t)&=& U(\rb,t)/k^2\\
\r &=& \rb/kT,
\eea
and $\om_i=\Omega_i/k \ (i=1,2,3)$
\bea
w_1(\r,t)&=&\sqrt{1+v'+2\r v''}\\
w_2(\r,t)&=&\sqrt{1+v'}\\
w_3(\r,t)&=&\sqrt{1+u}.
\eea
Within this notation the flow equation for $v$ takes the form
\bea
\partial_t v &=&
-3 v + \r v'\nonumber \\
&&\di -\0{\theta(w_1^2)}{2\pi^2} \ln\left[1-\exp\left(-w_1 k/T\right)\right]
\nonumber \\
&&\di -\0{\theta(w_2^2)}{2\pi^2} (2N-1) \ln\left[1-\exp\left(-
w_2 k/T\right)\right]\nonumber \\
&&\di - 3 \0{\theta(w_3^2)}{2\pi^2} \ln\left[1-\exp\left(-w_3 k/T\right)\right]
\label{v-flow}
\eea
Note that the flow still explicitly depends on the ratio $k/T$, which is just a consequence of the presence of two independent scales.
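The right-hand side of \eq{v-flow} is simple enough to transcribe directly (an added sketch evaluating the flow at a single point $\r$; the input values for $v$, $v'$, $v''$, $u$ and the ratio $k/T$ are arbitrary test numbers, and $\ln[1-e^{-z}]$ is computed as {\tt log(-expm1(-z))} for numerical stability):

```python
import math

def dv_dt(v, vp, vpp, u, rho, k_over_T, N=1):
    """RHS of eq. (v-flow) at one point rho; vp, vpp denote v', v''."""
    rhs = -3.0 * v + rho * vp
    w_sq = (1.0 + vp + 2.0 * rho * vpp,    # w_1^2
            1.0 + vp,                      # w_2^2
            1.0 + u)                       # w_3^2
    prefactors = (1.0, 2.0 * N - 1.0, 3.0)
    for pref, wsq in zip(prefactors, w_sq):
        if wsq > 0.0:                      # the theta-functions of eq. (v-flow)
            z = math.sqrt(wsq) * k_over_T
            rhs -= pref / (2.0 * math.pi**2) * math.log(-math.expm1(-z))
    return rhs

# k/T large: the thermal terms vanish and only -3v + rho*v' survives
print(dv_dt(0.1, 0.2, 0.1, 0.3, rho=1.0, k_over_T=100.0))   # ~ -0.1
```

For $k/T\gg 1$ the logarithms are exponentially close to zero, which makes the suppression of the flow in the low temperature limit explicit.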
\subsection{Low and high temperature limits}
By construction, the flow equations (for $v$ and $u$)
only integrate-out the temperature fluctuations. This is why the
initial conditions -given for the functions $u$ and $v$ at some
large scales $k$- already have to be the full quantum effective
parameters of the $4d$ theory at vanishing temperature. Therefore, the flow equations have to vanish
in the low temperature limit since no further fluctuations need to be taken
into account. That this is actually the case is easily seen in \eq{v-flow}.
For $T\to 0$, the exponential factors $\exp(-w\, k/T)$ suppress any
non-trivial flow.
The high temperature limit is much more interesting. For $T\to\infty$
we would expect to recover the purely three-dimensional running of the
couplings. Expanding $\ln [1-\exp(-w\, k/T)]=\ln w + \ln (k/T) +
{\cal{O}}(w\, k/T)$ we obtain
\bea
\partial_t v &=&
-3 v + \r v'
\di -\0{\theta(w_1^2)}{4\pi^2} \ln [1+v'+2\r v'']
\nonumber \\
&&\di -\0{\theta(w_2^2)}{4\pi^2} (2N-1) \ln [1+v']
\di - 3 \0{\theta(w_3^2)}{4\pi^2} \ln [1+u]
\label{v3-flow}
\eea
Note that we have suppressed terms proportional to $\theta(w^2)\ln\left( k/T\right)$, as their contribution for $w^2>0$ is field-independent.
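The expansion used above is easy to verify numerically (an added sketch; the values of $z=w\,k/T$ are arbitrary small numbers): the difference $\ln[1-e^{-z}]-\ln z$ is of order $z$, in agreement with the ${\cal O}(w\,k/T)$ remainder:

```python
import math

for z in (0.1, 0.01, 0.001):            # z = w * k/T, small in the high-T limit
    exact = math.log(-math.expm1(-z))   # ln[1 - exp(-z)]
    approx = math.log(z)                # = ln w + ln(k/T)
    print(z, exact - approx)            # remainder, behaves like -z/2
```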
It is interesting to compare \eq{v3-flow} with the flow equation
directly derived in $3d$ via the effective average action approach
\cite{Abelsch}. If we specify a sharp cut-off regulator and neglect the
scalar anomalous dimension, the corresponding
flow equation for the Euclidean dimensionless potential
$v^{}_{\mbox{\tiny E}}$ reads \cite{3d-abel,3d-abelN}
\bea
\partial_t v^{}_{\mbox{\tiny E}} &=&
-3 v^{}_{\mbox{\tiny E}} + \r v_{\mbox{\tiny E}}'
\di -\0{1}{4\pi^2} \ln [1+v_{\mbox{\tiny E}}'+2\r v_{\mbox{\tiny E}}'']
\nonumber \\
&&\di -\0{2N-1}{4\pi^2} \ln [1+v_{\mbox{\tiny E}}']
\di - \0{2}{4\pi^2} \ln [1+2 e^2 \r]
\label{v3E-flow}
\eea
where $e^2=\bar e^2/k$ denotes the dimensionless gauge coupling squared.
Two comments are in order. First of all, the r$\hat{\rm o}$le of the (field-dependent) mass of the gauge field $M^2_{\mbox{\tiny E}}=2e^2\r k^2$ has been taken over by the function $u$, with $M^2=u(\r)k^2=U_k(\rb)$. With the computation of $U_k(\rb)$ we would obtain therefore the full field dependence of the Abelian charge. In \cite{3d-abel} it was argued that the field dependence of $e^2$ might play an important r$\hat{\rm o}$le close to the critical points. Note also that the numerical coefficient in front of the last term in \eq{v3-flow} differs from that in \eq{v3E-flow} ({\it i.e.}~3 instead of 2). As mentioned earlier, this is due to the Debye mode, which, in the present approximation, can not decouple properly, and therefore still contributes to the flow in the high temperature limit.
The second point concerns the $\theta$-functions, absent in \eq{v3E-flow}. They ensure that the flow \eq{v3-flow} will not run into a singularity nor develop an imaginary part. As soon as the arguments in the logarithms tend to negative values, the $\theta$-function cuts their contribution off. In \eq{v3E-flow}, this is not so obvious. While solving \eq{v3E-flow}, however, one observes that the flow does indeed avoid the singularities automatically \cite{abelianhiggs4d,3d-abel,3d-abelN}.
\section{Discussion and Outlook}
We reviewed main features of the Wilsonian RG and argued that they represent a systematic and efficient tool for applications to thermal field theories. In particular, we emphasized that even gauge theories can be handled systematically in both the real and imaginary time formalism. The particular differences between these two approaches have been discussed, the main one being that the imaginary time formalism is adequate for the computation of static quantities, while the real time approach allows the study of non-static quantities and non-equilibrium situations.
The application to an Abelian Higgs model extends previous applications to an interesting toy model for cosmological phase transition. A detailed study of the first and second order phase transition is now feasible. This will also allow for an independent check of the perturbative dimensional reduction scenario, which is at the heart of recent Monte Carlo simulations. The extension to the electroweak phase transition seems to be straightforward, although more elaborate approximations have to be employed in this case. It would be particularly interesting to study the critical points of both models. In the U(1) case, one expects two critical points (describing the second order phase transition, and the end point of the first order phase transition region). This approach might even open a door to a better understanding of the superconducting phase transition, which corresponds to the large $T$ limit. In the SU(2) case one expects to find only one critical point, the end point of the line of first order phase transitions. This fixed point was recently discovered to belong to the Ising-type universality class \cite{MonteCarlo}. A field theoretical determination of the end point and the corresponding critical exponents is still missing. Up to now only Monte Carlo simulations have been able to study this parameter range.
With these tools at hand a number of other interesting problems can now be envisaged. An open question concerns for example the thermal $\beta$-function of QCD, which has been computed by a number of groups, with remarkably different results (see \cite{oneloop-thermal} and references therein). The Wilsonian RG, and in particular the heat kernel methods used in \cite{1-loop}, can be employed even within the imaginary time formalism and should be able to resolve this point. It seems also be possible to construct a gauge invariant thermal renormalization group within real and imaginary time along the lines indicated earlier \cite{thermalRG}. This would be a very useful extension of \cite{Pressure} to fermions and gauge fields.
\section*{Acknowledgements}
It is a pleasure to thank U.~Heinz for organizing a very pleasant conference, F.~Freire and J.~M.~Pawlowski for an enjoyable collaboration and a critical reading of the manuscript, M. d'Attanasio for initiating the work presented in section IV, and B. Bergerhoff and M. Pietroni for discussions. Financial support from the organizers of TFT98 is gratefully acknowledged.
\section{Introduction} \label{intro}
The first color-magnitude diagrams (CMD) obtained by Baade for the
dwarf spheroidal (dSph) companions of the Milky Way, and in particular for
the Draco system (Baade \& Swope 1961), showed all of the features
present in the CMDs of globular clusters. This, together with the presence of RR
Lyrae stars (Baade \& Hubble 1939; Baade \& Swope 1961) led to
the interpretation that dSph galaxies are essentially pure Population II
systems. But Baade (1963) noted that there are a number of
characteristics in the stellar populations of dSph galaxies that
differentiate them from globular clusters, including extreme red
horizontal branches and the distinct
characteristics of the variable stars. When carbon stars were
discovered in dSph galaxies, these
differences were recognized to be due to the presence of an
intermediate-age population (Cannon, Niss \& Norgaard--Nielsen 1980;
Aaronson, Olszewski \& Hodge 1983; Mould \& Aaronson 1983). In the past few years this intermediate-age population has been shown beautifully in the
CMDs of a number of dSph galaxies (Carina: Mould \& Aaronson 1983; Mighell 1990; Smecker-Hane, Stetson \& Hesser 1996; Hurley-Keller, Mateo \& Nemec 1998; Fornax: Stetson, Hesser \& Smecker-Hane 1998; Leo I: Lee et al. 1993, L93 hereinafter; this paper). Other dSph show only a dominant old stellar population in their CMDs (Ursa Minor: Olszewski \& Aaronson 1985; Mart\'\i nez-Delgado \& Aparicio 1999; Draco: Carney \& Seitzer 1986; Stetson, VandenBergh \& McClure 1985; Grillmair et al. 1998; Sextans: Mateo et al. 1991).
An old stellar population, traced by a horizontal-branch (HB), has been
clearly observed in all the dSph satellites of the Milky Way, except Leo~I, regardless of their subsequent star formation histories (SFH). In this respect, as noted by L93, Leo~I is a peculiar galaxy, showing a well-populated red-clump (RC) but no evident HB. This suggests that the first substantial amount of star formation may have been somehow delayed in this galaxy compared with the other dSph. Leo~I is also singular in that its large galactocentric radial velocity (177$\pm$3 km $\rm {s}^{-1}$, Zaritsky et al. 1989) suggests that it may not be bound to the Milky Way, as the other dSph galaxies seem to be (Fich \& Tremaine 1991). Byrd et al. (1994) suggest that both Leo~I and the Magellanic Clouds seem to have left the neighborhood of the
Andromeda galaxy about 10 Gyr ago. It is interesting that the Magellanic Clouds also seem to have only a small fraction of old stellar population.
Leo~I presents an enigmatic system with unique characteristics among
Local Group galaxies. From its morphology and from its similarity to other dSph in terms of its lack of detectable quantities of HI (Knapp, Kerr \& Bowers 1978, see Section~\ref{leoi_prev}) it would be considered a dSph galaxy. But it
also lacks a conspicuous old population and it has a much larger fraction of
intermediate-age population than its dSph counterparts, and even a
non-negligible population of young ($\le$ 1 Gyr old) stars.
In this paper, we present new {\it HST} F555W ($V$) and F814W ($I$)
observations of Leo~I. In Section~\ref{leoi_prev}, the previous work
on Leo~I is briefly reviewed. In Section~\ref{obs}, we present the
observations and data reduction. In Section~\ref{phot} we discuss the
photometry of the galaxy, reduced independently using both ALLFRAME and
DoPHOT programs, and calibrated using the ground-based photometry of L93. In Section~\ref{cmd} we present the CMD of Leo~I, and discuss the stellar populations and the metallicity of the galaxy. In Section~\ref{discus} we summarize the conclusions of this paper. In a companion paper, (Gallart et al. 1998, Paper~II) we will quantitatively derive the SFH of Leo~I through the comparison of the observed CMD with a set of synthetic CMDs.
\section{Previous Work on Leo~I} \label{leoi_prev}
Leo~I (DDO~74), together with Leo~II, was discovered by Harrington \&
Wilson (1950) during the course of the first Palomar Sky Survey. The
distances to these galaxies were estimated to be $\simeq$ 200 kpc, considerably
more distant than the other dSph companions of the Milky Way.
Leo~I was observed in HI by Knapp et al. (1978) using the NRAO 91-m telescope, but was not detected. They set a limit on its HI mass of $M_{HI}/M_{\odot} \le 7.2\times10^3$ in the central 10\arcmin~($\simeq$ 780 pc) of the galaxy. Recently, Bowen et al. (1997) used spectra of three QSO/AGN to set a limit on the HI column density within 2--4 kpc in the halo of Leo~I to be $N(HI) \le 10^{17}\,{\rm cm}^{-2}$. They find no evidence of dense flows of gas in or out of Leo~I, and no evidence for tidally disrupted gas.
The large distance to Leo I and the proximity on the sky of the bright
star Regulus have made photometric studies difficult. As a consequence,
the first CMDs of Leo~I were obtained much later than for the other nearby dSphs (Fox \& Pritchet 1987; Reid \& Mould 1991; Demers, Irwin \& Gambu 1994; L93). From the earliest observations of the stellar populations
of Leo I there have been indications of a large quantity of
intermediate-age stars. Hodge \& Wright (1978) observed an unusually
large number of anomalous Cepheids, and carbon stars were found by
Aaronson et al. (1983) and Azzopardi, Lequeux \& Westerlund (1985,
1986). A prominent RC, indicative in a low-$Z$ system of an intermediate-age stellar population, is seen both in the $[(B-V), V]$ CMD of Demers et al. (1994)
and in the $[(V-I), I]$ CMD of L93. The latter CMD is
particularly deep, reaching $I\simeq 24$ ($M_I \simeq +2$), and
suggests the presence of a large number of intermediate-age,
main-sequence stars. There is no evidence for a prominent HB in any of
the published CMDs.
L93 estimated the distance of Leo~I to be $(m-M)_0 = 22.18 \pm 0.11$ based on the position of the tip of the red giant
branch (RGB); we will adopt this value in this paper. They also
estimated a metallicity of [Fe/H] = --2.0$\pm$0.1 dex from the mean
color of the RGB. Previous estimates of the
metallicity (Aaronson \& Mould 1985; Suntzeff, Aaronson \& Olszewski
1986; Fox \& Pritchet 1987; Reid \& Mould 1991) using a number of
different methods range from [Fe/H]=--1.0 to --1.9 dex. With the new {\it HST} data presented in this paper, the information on the age structure from the turnoffs will help to further constrain the metallicity.
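For reference (our own arithmetic, not a result from L93), the adopted distance modulus converts to a linear distance through the standard relation $d\,[{\rm pc}] = 10^{(m-M)_0/5 + 1}$; a minimal sketch:

```python
# Convert the adopted distance modulus (m-M)_0 = 22.18 +/- 0.11 (L93) to a
# linear distance via d [pc] = 10**((m-M)/5 + 1). Standard arithmetic only;
# this calculation is not taken from the paper itself.
def modulus_to_distance_kpc(mu):
    """Distance in kpc for a reddening-corrected distance modulus mu."""
    return 10 ** (mu / 5 + 1) / 1e3

mu, err = 22.18, 0.11
d = modulus_to_distance_kpc(mu)
d_lo, d_hi = modulus_to_distance_kpc(mu - err), modulus_to_distance_kpc(mu + err)
print(f"d = {d:.0f} kpc (range {d_lo:.0f}--{d_hi:.0f} kpc)")
```

This gives roughly 270 kpc, consistent with Leo~I being considerably more distant than the other dSph companions of the Milky Way.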
\section{Observations and Data Reduction} \label{obs}
We present WFPC2 {\it HST} $V$ (F555W) and $I$ (F814W) data in one
2.6\arcmin $\times$ 2.6\arcmin~field in Leo~I obtained on March 5,
1994. The WFPC2 has four internal cameras: the planetary camera (PC)
and three Wide Field (WF) cameras. They image onto a Loral
800$\times$800 CCD, which gives a scale of 0\arcsec.046 pixel$^{-1}$
for the PC camera and 0\arcsec.10 pixel$^{-1}$ for the WF cameras. At
the time of the observations the camera was still operating at the higher
temperature of $-77.0\,^{\circ}$C. Figure~\ref{carta} shows the location of
the WFPC2 field superimposed on a Digitized Sky Survey image of Leo~I.
The position of the ground-based image of L93 is also
shown. The position was chosen so that the PC field was situated in the central,
more crowded part of the galaxy. Three deep exposures in both F555W
($V$) and F814W ($I$) filters (1900 sec. and 1600 sec. each,
respectively) were taken. To ensure that the brightest stars were not saturated, one shallow exposure in each filter (350 sec. in F555W and 300 sec in F814W) was also obtained. Figure~\ref{mosaic} shows the $V$ and $I$ deep (5700 sec. and 4800 sec. respectively) WF Chip2 images of Leo~I.
\begin{figure}
\caption[]{Digitized Sky Survey image of the Leo~I field. The outlines indicate the WFPC2 field and the field observed by L93. The total field shown here is $10\arcmin\times 10\arcmin$. North is up, east is to the left.}
\label{carta}
\end{figure}
\begin{figure}
\caption[]{$V$ (above) and $I$ (below) deep (5700 sec. and 4800 sec. respectively) WF Chip~2 Leo~I images. }
\label{mosaic}
\end{figure}
All observations were preprocessed through the standard STScI pipeline,
as described by Holtzman et al. (1995). In addition, the treatment of
the vignetted edges, bad columns and pixels, and correction of the
effects of the geometric distortion produced by the WFPC2 cameras, were
performed as described by Silbermann et al. (1996).
\section{Photometry} \label{phot}
\subsection{Profile fitting photometry} \label{psfphot}
Photometry of the stars in Leo~I was measured independently using the set
of DAOPHOT~II/ALLFRAME programs developed by Stetson (1987, 1994),
and also with a modified version of DoPHOT (Schechter, Mateo \& Saha
1993). We compare the results obtained with each of these programs below.
ALLFRAME photometry was performed on the eight individual frames, and the photometry list in each band was obtained by averaging the magnitudes of the corresponding individual frames. In summary, the process is as follows: a candidate star list
was obtained from the median of all the images of each field using
three DAOPHOT~II/ALLSTAR detection passes. This list was fed to
ALLFRAME, which was run on all eight individual frames simultaneously.
We have used the PSFs obtained from the public domain
{\it HST} WFPC2 observations of the globular clusters Pal 4 and
NGC~2419 (Hill et al. 1998). The stars in the different frames of
each band were matched and retained if they were found in at least three
frames for each of $V$ and $I$. The magnitude of each star in each band
was set to the error-weighted average of the magnitudes for each star
in the different frames. The magnitudes of the brightest stars were
measured from the short exposure frames. A last match between the stars
retained in each band was made to obtain the $VI$ photometry table.
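The error-weighted averaging step can be sketched as follows (an illustration only, not the actual ALLFRAME reduction; the magnitudes and errors are invented):

```python
import math

def weighted_mean_mag(mags, sigmas):
    """Inverse-variance weighted mean of the magnitudes measured for one star
    on several individual frames, and the uncertainty of that mean."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * m for w, m in zip(weights, mags)) / wsum
    return mean, math.sqrt(1.0 / wsum)

# A hypothetical star detected on three I-band frames:
mags = [21.52, 21.48, 21.55]
sigmas = [0.03, 0.02, 0.05]
mean, err = weighted_mean_mag(mags, sigmas)
```

The best-measured frame dominates the average, and the combined uncertainty is smaller than any single-frame error.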
DoPHOT photometry was obtained with a modified version of the code to account for the {\it HST} PSF (Saha et al. 1996). DoPHOT reductions were made on averaged $V$ and $I$ images, combined in a manner similar to that described by Saha et al. (1994) in order to remove the effects of cosmic rays. Photometry of the brightest stars was measured from the $V$ and $I$ short-exposure frames.
The DoPHOT and ALLFRAME calibrated photometries (see Section~\ref{transjohn}) show reasonably good agreement. There is a scatter of 2--3\% for
even the brightest stars in both $V$ and $I$. No systematic differences can be seen in the $V$ photometry. In the $I$ photometry there is good systematic agreement among the brightest stars, but a small tendency for the DoPHOT magnitudes to become brighter compared to the ALLFRAME magnitudes with increasing $I$ magnitude. This latter effect is about 0.02 magnitudes at the level of the RC, and increases to about 0.04--0.05 mag by $I = 26$. We cannot decide from these data which program is `correct'. However, the systematic differences are sufficiently small compared to the random scatter that our final conclusions are identical regardless of which reduction program is used.
In the following we will use the star list obtained with
DAOPHOT/ALLFRAME. Our final photometry table contains a total of 31200
stars found in the four WFPC2 chips after removing stars with
excessively large photometric errors compared to other stars of similar
brightness. The retained stars have $\sigma \le 0.2$, $\chi < 1.6$, and $-0.5 \le {\rm sharp} \le 0.5$.
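The quality cuts above amount to a simple row filter over the star list; a sketch (the star records and field values here are invented):

```python
# Apply the photometric quality cuts quoted in the text to a star list.
# Each record is (id, mag, sigma, chi, sharp); the values are invented.
stars = [
    (1, 21.50, 0.05, 1.1, 0.10),   # kept
    (2, 24.80, 0.35, 1.2, 0.05),   # rejected: sigma too large
    (3, 22.10, 0.10, 2.0, 0.00),   # rejected: chi too large
    (4, 23.00, 0.15, 1.4, -0.70),  # rejected: sharp out of range
]

def passes_cuts(star, max_sigma=0.2, max_chi=1.6, sharp_lim=0.5):
    """True if the star satisfies sigma <= 0.2, chi < 1.6, |sharp| <= 0.5."""
    _, _, sigma, chi, sharp = star
    return sigma <= max_sigma and chi < max_chi and abs(sharp) <= sharp_lim

retained = [s for s in stars if passes_cuts(s)]
```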
\subsection{Transformation to the Johnson-Cousins system} \label{transjohn}
For our final photometry in the Johnson-Cousins system we will rely ultimately on the photometry obtained by L93. Before this final step, however, we transformed the profile-fitting photometry using the prescription of Holtzman et al. (1995). In this section, we will describe both steps and discuss the differences between the {\it HST}-based photometry and the ground-based photometry.
The ALLFRAME photometry has been transformed to standard magnitudes in the Johnson-Cousins system using the prescriptions of Holtzman et al. (1995) and Hill et al. (1998) as adopted for the {\it HST} $H_0$ Key Project data. PSF magnitudes have been transformed to instrumental magnitudes at an aperture of radius 0\arcsec.5 (consistent with Holtzman et al. 1995 and Hill et al. 1998) by deriving the value for the aperture correction for each frame using DAOGROW (Stetson 1990).
The Johnson-Cousins magnitudes obtained in this way were compared with ground-based magnitudes for the same field obtained by L93 by matching a number of bright ($V<21, I<20$), well measured stars in the {\it HST}
($V_{HST}$, $I_{HST}$) and ground-based photometry ($V_{Lee}$,
$I_{Lee}$). The zero-points between both data sets have been determined
as the median of the distribution of ($V_{Lee}-V_{HST}$) and
($I_{Lee}-I_{HST}$). In Table~\ref{zeros} the median of ($V_{Lee}-V_{HST}$) and ($I_{Lee}-I_{HST}$) and the corresponding dispersion $\sigma$ are listed for each chip (no obvious color terms are observed, as expected, since both photometry sets have been transformed to a standard system taking into account, where needed, the color terms of the corresponding telescope-instrument system). $N$ is the number of stars used to calculate the transformation. Although the value of the median zero-point varies from chip to chip, it is in the sense of making the corrected $V$ magnitudes brighter than $V_{HST}$ by $\simeq 0.05$ mag and the corrected $I$ magnitudes fainter than $I_{HST}$ by about the same amount. Therefore, the final $(V-I)$ colors are one tenth of a magnitude bluer in the corrected photometry.
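The per-chip zero-point determination reduces to a median of per-star magnitude differences over the matched bright stars; a minimal sketch (the magnitudes are invented for illustration):

```python
from statistics import median, stdev

# Matched bright stars on one chip: ground-based (L93) vs. HST-based
# Johnson-Cousins V magnitudes. All values here are invented.
v_lee = [19.82, 20.10, 20.45, 20.71, 20.95]
v_hst = [19.93, 20.18, 20.56, 20.80, 21.08]

diffs = [a - b for a, b in zip(v_lee, v_hst)]
zero_point = median(diffs)   # robust against a few bad matches
scatter = stdev(diffs)       # dispersion quoted as sigma in Table 1
corrected = [m + zero_point for m in v_hst]
```

Using the median rather than the mean keeps a few mismatched or variable stars from biasing the chip zero-point.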
\begin{table}
\caption {Zero points: ($V_{Lee}-V_{HST}), (I_{Lee}-I_{HST})$}
\label{zeros}
\begin{center}
\begin{tabular}{lcccc}
\hline
\hline
\noalign{\vspace{0.1 truecm}}
CHIP & filter & median & $\sigma$& N \\
\noalign{\vspace{0.1 truecm}}
\hline
\noalign{\vspace{0.1 truecm}}
CHIP 1 & F555W & -0.037 & 0.100 & 17 \\
CHIP 2 & F555W & -0.110 & 0.103 & 59 \\
CHIP 3 & F555W & -0.080 & 0.063 & 43 \\
CHIP 4 & F555W & -0.059 & 0.067 & 57 \\
\noalign{\vspace{0.1 truecm}}
\hline
CHIP 1 & F814W & 0.035 & 0.075 & 17 \\
CHIP 2 & F814W & 0.013 & 0.064 & 53 \\
CHIP 3 & F814W & 0.076 & 0.042 & 43 \\
CHIP 4 & F814W & 0.080 & 0.041 & 53 \\
\noalign{\vspace{0.1 truecm}}
\hline
\hline
\end{tabular}
\end{center}
\end{table}
Note that the CTE effect, which may be important in the case of observations made at a temperature of $-77^{\circ}$C, could contribute to the dispersion in the zero-point. Nevertheless, if the differences $(V_{Lee}-V_{HST})$ and $(I_{Lee}-I_{HST})$ are plotted for different row intervals, no clear trend is seen, which indicates that the error introduced by the CTE effect is not of concern in this case. The considerable background of our images (about 70 $e^-$) may be the reason the CTE effect is not noticeable.
We adopt the L93 calibration because it was based on observations of a large number of standards from Graham (1981) and Landolt (1983) and because there was very good agreement between independent calibrations performed on two different observing runs and between calibrations on four nights of one of the
runs. In addition, the Holtzman et al. (1995) zero points were derived
for data taken with the Wide Field Camera CCDs operating at a lower temperature compared to the present data set.
\begin{figure}
\caption[]{Observed CMDs of Leo~I for all four WFPC2 chips. }
\label{4cmd}
\end{figure}
\section{The Leo~I color-magnitude diagram}\label{cmd}
\subsection{Overview}\label{overview}
In Figure~\ref{4cmd} we present four $[(V-I), I]$ CMDs for Leo~I based
on the four WFPC2 chips. Leo~I possesses a rather steep and blue RGB,
indicative of a low metallicity. Given this low metallicity, its very well-defined RC, at $I\simeq$ 21.5, is characteristic of an
intermediate-age stellar population. The main sequence (MS), reaching
up to within 1 mag in brightness of the RC, unambiguously shows that a
considerable number of stars with ages between $\simeq$ 1 Gyr and 5 Gyr
are present in the galaxy, confirming the suggestion by L93
that the faintest stars in their photometry might be from a
relatively young ($\simeq$ 3 Gyr) intermediate-age population. Our CMD,
extending about 2 magnitudes deeper than the L93 photometry and
reaching the position expected for the turnoffs of an old population,
shows that a rather broad range in ages is present in Leo~I. A number of yellow stars, slightly brighter and bluer than the RC, are probably evolved counterparts of the brightest stars in the MS. Finally, the lack of discontinuities in the turnoff/subgiant region indicates continuous star formation activity (with possible changes in the intensity of the star formation rate) during the galaxy's lifetime.
We describe each of these features in more detail in Section~\ref{compaiso}, and discuss their characteristics by comparing them with theoretical isochrones
and taking into account the errors discussed in Section~\ref{photerrors}. We
will quantitatively study the SFH of Leo~I in Paper~II by comparing the distribution of stars in the observed CMD with a set of model CMDs computed using stellar evolution theory as well as a realistic simulation of the observational effects in the photometry (see Gallart et al. 1996b,c and Aparicio, Gallart \& Bertelli 1997a,b for different applications of this method to the study of the SFH in several Local Group dwarf irregular galaxies).
\subsection{Photometric errors}\label{photerrors}
Before proceeding with an interpretation of the features present in the CMD, it is important to assess the photometric errors. To investigate the total errors present in the photometry, artificial star tests have been performed in a way similar to that described in Aparicio \& Gallart (1994) and Gallart, Aparicio \& V\'\i lchez (1996a). For details on the tests run for the Leo~I data, see Paper II. In short, a large number of artificial stars of known magnitudes and colors were injected into the original frames, and the photometry was redone following exactly the same procedure used to obtain the photometry for the original frames. The injected and recovered magnitudes of the artificial stars, together with the information on the artificial stars that were lost, provide us with the true total errors.
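Schematically, an artificial-star test looks like the following toy model (the error law, completeness behavior, and magnitudes are all invented; the real tests operate on the full frames and reduction pipeline):

```python
import random

random.seed(42)

def run_artificial_star_test(injected_mags, faint_limit=26.0):
    """Toy injection/recovery: each artificial star is either recovered with a
    magnitude error that grows toward the faint limit, or lost entirely."""
    results = []
    for m in injected_mags:
        sigma = 0.02 + 0.05 * max(0.0, m - 21.0)  # invented error model
        # Invented completeness: recovery probability drops near the limit.
        if random.random() < min(1.0, max(0.0, (faint_limit - m) / 3.0)):
            results.append((m, m + random.gauss(0.0, sigma)))
        else:
            results.append((m, None))             # lost to crowding/noise
    return results

injected = [21.0 + 0.5 * i for i in range(10)]    # 21.0 .. 25.5 mag
recovered = run_artificial_star_test(injected)
completeness = sum(r is not None for _, r in recovered) / len(recovered)
```

Comparing injected and recovered magnitudes (and the fraction lost) at each position in the CMD gives the total error distribution, as in Figure~3 of the text.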
In Figure~\ref{errors}, artificial stars representing a number of small intervals of magnitude and color have been superimposed as white spots on the observed CMD of Leo~I. Enlarged symbols ($\times$, $\circ$, $\triangle$) show the recovered magnitudes for the same artificial stars. The spread in magnitude and color shows the error interval in each of the selected positions. This information will help us in the interpretation of the different features present in the CMD (Section~\ref{compaiso}). A more quantitative description of these errors and a discussion of the characteristics of the error distribution will be presented in Appendix A of Paper~II.
\begin{figure}
\caption[]{A fraction of the results of the artificial star test conducted on the Leo~I image, superimposed in selected positions of the observed CMD of Leo~I. White spots show the locus of the injected magnitudes of a set of artificial stars, and the enlarged symbols show the recovered magnitudes for the same artificial stars. The scatter in the recovered magnitudes gives us information on the total errors in each position.}
\label{errors}
\end{figure}
\subsection{The Leo I distance}
We will adopt, here and in Paper II, the distance obtained by L93, $(m-M)_0=22.18\pm0.11$, from the position of the tip of the RGB. Since the ground-based observations of L93 cover a larger area than the {\it HST} observations presented in this paper, and therefore sample the tip of the RGB better, they are more suitable for deriving the position of the tip. On the other hand, since we derive the calibration of our photometry from theirs, we do not expect any difference in the position of the tip in our data. The adopted distance provides good agreement between the position of the different features in the CMD and the corresponding theoretical position (Figures~\ref{leoi_isopa} and~\ref{leoi_isoya}), and its uncertainty does not affect the (mostly qualitative) conclusions of this paper.
\subsection{Discussion of the CMD of Leo~I: Comparison with theoretical isochrones}\label{compaiso}
\begin{figure}
\caption[]{Combined $[(V-I)_0, M_I]$ CMD of the stars in the WFPC2 field. A distance modulus $(m-M)_0=22.18$ (L93) and reddening E(B-V)=0.02 (Burstein \& Heiles 1984) have been used to transform to absolute magnitudes and unreddened colors. Isochrones of 16 Gyr (Z=0.0004) --thick line--, 3 Gyr (Z=0.001) --thin line-- and 1 Gyr (Z=0.001) --dashed line-- from the Padova library (Bertelli et al. 1994) have been superimposed on the data. Only the evolution to the tip of the RGB is shown in the 16 Gyr and 3 Gyr isochrones. The Z=0.001 isochrones published by Bertelli et al. (1994) are calculated using the old Los Alamos opacities (Huebner et al. 1977), and therefore, are not homogeneous with the rest of the isochrones of their set. The Z=0.001 isochrones drawn here have been calculated by interpolation between the Z=0.0004 and Z=0.004 isochrones.}
\label{leoi_isopa}
\end{figure}
\begin{figure}
\caption[]{Yale isochrones (Demarque et al. 1996) for the same ages, metallicities and evolutionary phases (except for the 1 Gyr old isochrone) as in Figure~5, superimposed on the same data.}
\label{leoi_isoya}
\end{figure}
\begin{figure}
\caption[]{HB-AGB phases for 16 Gyr ($Z=0.0004$) --thick line-- and complete isochrones of 1 Gyr, 600 and 400 Myr ($Z=0.001$) --thin lines-- from the Padova library, superimposed on the same data of Figure~5. See details on the Z=0.001 isochrones in the caption of Figure~5.}
\label{leoi_isoclump}
\end{figure}
In Figure~\ref{leoi_isopa}, isochrones of 16 Gyr ($Z=0.0004$), 3 and 1 Gyr ($Z=0.001$) from the Padova library (Bertelli et al. 1994) have been superimposed upon the global CMD of Leo~I. In Figure~\ref{leoi_isoya}, isochrones of the same ages and metallicities from the Yale library (Demarque et al. 1996) are shown. In both cases (except for the Padova 1 Gyr old, Z=0.001 isochrone), only the evolution through the RGB tip has been displayed (these are the only phases available in the Yale isochrones). In Figure~\ref{leoi_isoclump}, the HB--AGB phase for 16 Gyr ($Z=0.0004$) and the full isochrones for 1 Gyr, 600 and 400 Myr ($Z=0.001$) from the Padova library are shown.
A comparison of the Yale and Padova isochrones in
Figures~\ref{leoi_isopa} and~\ref{leoi_isoya} shows some differences
between them, particularly regarding the shape of the RGB (the RGBs of
the Padova isochrones are in general {\it steeper}: redder at the
base of the RGB and bluer near its tip than the Yale
isochrones of the same age and Z) and
the position of the subgiant branch at an age of $\simeq 1$ Gyr (which is
brighter in the Padova isochrones). In spite of these
differences, the general characteristics deduced for the stellar
populations of Leo~I do not critically depend on the set chosen.
However, based on these comparisons, we can gain some insight into current
discrepancies between two sets of evolutionary models widely used, and
therefore into the main uncertainties of stellar evolution theory that
we will need to take into account when analyzing the observations using
synthetic CMDs (Paper~II).
In the following, we will discuss the main features of the Leo~I CMD using the isochrones in Figures~\ref{leoi_isopa} to~\ref{leoi_isoz}. This will allow us to reach a qualitative understanding of the stellar populations of Leo~I, as a starting point of the more quantitative approach presented in Paper~II.
\subsubsection{The main-sequence turnoff/subgiant region}\label{ms}
The broad range in magnitude in the MS turnoff region of the Leo~I CMD is a clear indication of a large range in the age of the stars populating Leo~I. The faint envelope of the subgiants coincides well with the position expected for a $\simeq$ 10--15 Gyr old population, whereas the brightest blue stars on the MS may be as young as 1 Gyr old, and possibly
younger.
Figure~\ref{leoi_isoclump} shows that the blue stars brighter than the 1 Gyr isochrone are well matched by the MS turnoffs of stars a few hundred Myr old. One may argue that a number of these stars may be affected by observational errors that, as we see in Figure~\ref{errors}, tend to make stars brighter. They could also be unresolved binaries composed of two blue stars. Nevertheless, it is very unlikely that the brightest blue stars are stars $\simeq$ 1 Gyr old affected by one of these situations, since: a) a 1 Gyr old binary could be at most as bright as $M_I\simeq 0.3$, in the extreme case of two identical stars, and b) none of the blue artificial stars at $M_I\simeq 1$ (which are around 1 Gyr old) was shifted by the amount necessary to account for the stars at $M_I \simeq -0.1$, and only about 4\% of them were shifted by as much as 0.5 mag. We conclude, therefore, that some star formation has likely been going on in the galaxy from 1 Gyr to a few hundred Myr ago. The presence of the bright yellow stars (see subsection~\ref{yellow} below) also supports this conclusion.
Concerning the age of the older population of Leo~I, the present analysis of the data using isochrones alone does not allow us to be much more precise than the range given above (10--15 Gyr), although we favour the hypothesis that there may be stars older than 10 Gyr in Leo~I. In the old age range, the isochrones are very close to one another in the CMD and therefore the age resolution is not high. In addition, at the corresponding magnitude, the observational errors are quite large. Nevertheless, the characteristics of the errors shown in Figure~\ref{errors} make it unlikely that the faintest stars in the turnoff region lie there because of large errors, since i) a significant migration to fainter magnitudes of the stars in the $\simeq$10 Gyr turnoff area is not expected and ii) given the approximately symmetric error distribution, errors affecting intermediate-age stars in their turnoff region are not likely to produce the well defined shape consistent with a 16 Gyr isochrone (see Figure~\ref{leoi_isopa}).
Finally, the fact that there are no obvious discontinuities in the turnoff/subgiant region suggests that star formation in Leo~I has proceeded in a more or less continuous way, with possible changes in intensity but no large time gaps between successive bursts, throughout the life of the galaxy. These possible changes will be quantified, using synthetic CMDs, in Paper II.
\subsubsection{The horizontal-branch and the red-clump of core He-burning stars}\label{hb}
Core He-burning stars produce two different features in the CMD, the HB and the RC, depending on age and metallicity. Very old, very low metallicity stars distribute along the HB during the core He-burning stage. The RC is produced when the core He-burners are not so old, or are more metal-rich, or both, although other factors may also play a role (see Lee 1993). The HB--RC area in Leo~I differs from that of the other dSph galaxies in the following two important ways.
First, the lack of a conspicuous HB may indicate, given the low metallicity of
the stars in the galaxy, that Leo~I has only a small fraction of very old stars. There are a number of stars at $M_I\simeq 0 $, $(V-I)_0=0.2-0.6$ that could be
stars on the HB of an old, metal poor population, but their position is
also that of the post turn-off $\simeq$ 1 Gyr old stars (see
Figure~\ref{leoi_isoclump}). The relatively large number of these stars and the discontinuity that can be seen between them and the rest of the stars in the Hertzsprung gap support the hypothesis that HB stars may make a contribution. This possible contribution will be quantified in Paper II. Second, the Leo~I RC is very densely populated and is much more extended in luminosity than the RC of single-age populations, with a width of as much as $\Delta I \simeq$ 1 mag. The intermediate-age LMC populous clusters with a well populated RC (see e.g. Bomans, Vallenari \& De Boer 1995) have $\Delta I$ values about a factor of two smaller. The RCs of the
other dSph galaxies with an intermediate-age population (Fornax:
Stetson et al. 1998; Carina: Hurley--Keller, Mateo \& Nemec 1998) are also much
less extended in luminosity.
The Leo~I RC is more like that observed in the CMDs of the general field of the LMC (Vallenari et al. 1996; Zaritsky, Harris \& Thompson 1997). An RC extended in luminosity is indicative of an extended SFH with a large intermediate-age component. The older stars in the core He-burning phase lie in the lower part of the observed RC; younger RC stars are brighter (Bertelli et al. 1994, their Figure~12; see also Caputo, Castellani \& Degl'Innocenti 1995). The brightest RC stars may be $\simeq$ 1 Gyr old stars (which start the core He-burning phase in non-degenerate conditions) in their blue-loop phase. The stars scattered above the RC (as well as the brightest yellow stars; see subsection~\ref{yellow}) could be a few hundred Myr old in the same evolutionary phase (see Figure 1 in Aparicio et al. 1996; Gallart 1998). The RC morphology depends on the fraction of stars of different ages, and will complement the quantitative information about the SFH from the distribution of subgiant and MS stars (Paper II).
\subsubsection{The bright yellow stars: anomalous Cepheids?} \label{yellow}
There are a number of bright, yellow stars in the CMD (at $-2.5 \le M_V < -1.5$ mag and $0 \le (V-I) \le 0.6$ mag). L93 indicate that a significant fraction of these stars show signs of variability, and two of the stars in their sample were identified by Hodge \& Wright (1978) to be anomalous Cepheids\footnote{Anomalous Cepheids were first discovered in dSph galaxies, and it was demonstrated (Baade \& Swope 1961; Zinn \& Searle 1976) that they obey a period-luminosity relationship different from that of globular cluster Cepheids and classical Cepheids. The relatively large mass ($\simeq 1.5\,M_{\odot}$) estimated for them implies that they should be relatively young stars, or mass-transfer binaries. Since the young-age hypothesis appeared incompatible with the idea of dSph galaxies being basically Population II systems, it was suggested that anomalous Cepheids could be products of mass-transfer binary systems. Nevertheless, we know today that most dSph galaxies have a substantial amount of intermediate-age population, consistent with anomalous Cepheids being relatively young stars that, according to various authors (Gingold 1976, 1985; Hirshfeld 1980; Bono et al. 1997), after undergoing the He-flash, would evolve towards effective temperatures high enough to cross the instability strip before ascending the AGB.}. Some of them also show signs of variability in our {\it HST} data. Figure~\ref{leoi_isoclump} shows, however, that these stars have the magnitudes and colors expected for blue-loop stars of a few hundred
Myr. This supports our previous conclusion that the brightest stars in the MS have ages similar to these.
Given their position in the CMD, it is interesting to ask whether some of the
variables found by Hodge \& Wright (1978) in Leo~I could be classical Cepheids
instead of anomalous Cepheids\footnote{Both types of variables would be double-shell burners although, taking into account the results of the authors referenced in the previous footnote, from a stellar evolution point of view the difference between them would be that the anomalous Cepheids started core He-burning under degenerate conditions, while the classical Cepheids are stars massive enough to have ignited He in the core under non-degenerate conditions. If the Leo~I Cepheids are indeed among the yellow stars above the RC, which are likely blue-loop stars, they would meet the evolutionary criterion to be classical Cepheids.}. From the Bertelli et al. (1994) isochrones, we can obtain the mass and luminosity of a 500 Myr blue-loop star, which would be representative of a star in this position of the CMD. Such a star would have a mass $M \simeq 2.5\,M_{\odot}$ and a luminosity $L \simeq 350\,L_{\odot}$. From Eq. 8 of Chiosi et al. (1992) we calculate that the period corresponding to a classical Cepheid of this mass and metallicity is 1.2 days, which is compatible with the periods found by Hodge \& Wright (1978), which range between 0.8 and 2.4 days.
We suggest that some of these variable stars may be similar to the
short period Cepheids in the SMC (Smith et al. 1992), i.e. classical
Cepheids in the lower extreme of mass, luminosity and period. If this
is confirmed, it would be of considerable interest in terms of
understanding the relationship between the different types of Cepheid
variables. A new wide-field survey for variable stars, more accurate
and extending to a fainter magnitude limit (to search for both
Cepheids and RR Lyrae stars), would be of particular interest in the case of
Leo~I.
\subsubsection{The Red Giant Branch: the metallicity of Leo~I} \label{rgb}
The RGB of Leo~I is relatively blue, characteristic of a system with low metallicity. Assuming that the stars are predominantly old, with a small dispersion in age, L93 obtained a mean metallicity [Fe/H]=--2.02 $\pm$ 0.10 dex and a metallicity dispersion of $-2.3 < {\rm [Fe/H]} < -1.8$ dex. This estimate was based on the color and intrinsic dispersion in color of the RGB at $M_I=-3.5$ using a calibration based on the RGB colors of galactic globular clusters (Da Costa \& Armandroff 1990; Lee, Freedman \& Madore 1993b). For a younger mean age of about 3.5 Gyr, they estimate a slightly higher metallicity of [Fe/H]=--1.9, based on the difference in color between a 15 and a 3.5 Gyr old population according to the Revised Yale Isochrones (Green et al. 1987). Other photometric measurements give a range in metallicity of [Fe/H]= --1.85 to --1.0 dex (see L93 and references therein). The metallicity derived from moderate resolution spectra of two giant stars by Suntzeff (1992, unpublished) is [Fe/H]$\simeq -1.8$ dex.
Since Leo~I is clearly a highly composite stellar population with a large spread in age, the contribution to the width of the RGB from such an age range may no longer be negligible compared with the dispersion in metallicity. Therefore, an independent estimate of the age range from the MS turnoffs is relevant to the determination of the range in metallicity. In the following, we will discuss possible limits on the metallicity dispersion of Leo~I through the comparison of the RGB with the isochrones shown in Figures~\ref{leoi_isopa} through~\ref{leoi_isoz}. As we noted in the introduction of Section~\ref{cmd}, there are some differences between the Padova and the Yale isochrones, but their positions coincide in the zone about 1 magnitude above the RC. We will use only this position in the comparisons discussed below.
\begin{figure}
\caption[]{Z=0.0004 isochrones for 10, 4 and 1 Gyr (evolution through the RGB only) and 0.5 Gyr (full isochrone) from the Padova library, superimposed on the same data as in Figure~5.}
\label{leoi_isoz}
\end{figure}
We will first check whether the whole width of the RGB can be accounted for by the dispersion in age. In subsection~\ref{ms} above, we have shown that the ages of the stars in Leo~I range from 10--15 Gyr to less than 1 Gyr. In Figure~\ref{leoi_isoz} we have superimposed Padova isochrones of Z=0.0004 and ages 10, 1 and 0.5 Gyr on the Leo~I CMD. This shows that the full width of the RGB above the RC can be accounted for by the dispersion in age alone. A similar result is obtained for a metallicity slightly lower or higher. This provides a
lower limit for the metallicity range, which could be negligible. The AGB of the 0.5 Gyr isochrone appears to be too blue compared with the stars in the corresponding area of the CMD. However, these AGBs are expected to be
poorly populated because a) stars are short lived in this phase and b) the fraction of stars younger than 1 Gyr is small, if any.
Second, we will discuss the possible range in Z at different ages from a) the position of the RGB, taking into account the fact that isochrones of the same age are redder when they are more metal-rich and isochrones of the same metallicity are redder when they are older, and b) the fact that the extension of the blue-loops depends on metallicity:
a) for stars of a given age, the lower limit of Z is given by the blue edge of the RGB area we are considering: isochrones of any age and Z=0.0001 have colors in the RGB above the RC within the observed range. Therefore, by means of the present comparison only, we cannot rule out the possibility that there may be stars in the galaxy with a range of ages and Z as low as Z=0.0001. The oldest stars of this metallicity would be at the blue edge of the RGB, and would be redder as they are younger. The upper limit for the metallicity of stars of a given age is given by the red edge of the RGB: for old stars, the red edge of the observed RGB implies an upper limit of Z $\le$ 0.0004 (see Figure~\ref{leoi_isoz}), since more metal rich stars would have colors redder than observed. For intermediate-age stars up to $\simeq$ 3 Gyr old we infer an upper limit of Z=0.001, and for ages $\simeq$ 3-1 Gyr old an upper limit of Z=0.004.
b) we can use the position of the bright yellow stars to constrain Z: the fact that there are a few stars in blueward-extended blue-loops implies that their metallicity is as low as Z$\leq$0.001 or even lower (Figure~\ref{leoi_isoclump}), because higher-metallicity stars do not produce blueward-extended blue-loops at the observed magnitude. This does not exclude the possibility that a fraction of young stars have metallicity up to Z=0.004. These upper limits are compatible with Z slowly increasing with time from Z$\simeq$0 to Z$\simeq$0.001--0.004, on the scale of the Padova isochrones.
In summary, we conclude that the width of the Leo~I RGB can be accounted for by the dispersion in age of its stellar population and that, therefore, the metallicity dispersion could be negligible. Alternatively, considering the variation in color of the isochrones depending on both age and metallicity, we set a maximum range of metallicity of $0.0001\le {\rm Z} \le$0.001--0.004: a lower limit of Z=0.0001 is valid for any age, and the upper limit varies from Z=0.0004 to Z=0.004, increasing with time. These upper limits are quite broad; they will be better constrained, and some information on the chemical enrichment law gained, from the analysis of the CMD using synthetic CMDs in Paper II.
\section {Conclusions}\label{discus}
From the new {\it HST} data and the analysis presented in this paper, we conclude the following about the stellar populations of Leo~I:
1) The broad MS turnoff/subgiant region and the wide range in
luminosity of the RC show that star formation in Leo~I has extended
from at least $\simeq$ 10--15 Gyr ago to less than 1 Gyr ago. A lack
of obvious discontinuities in the MS turnoff/subgiant region suggests that star
formation proceeded in a more or less continuous way in the central
part of the galaxy, with possible intensity variations over time, but no large time gaps between successive bursts, throughout the life of the galaxy.
2) A conspicuous HB is not seen in the CMD. Given the low metallicity of the galaxy, this reasonably implies that the fraction of stars older than $\simeq$ 10 Gyr is small, and indicates that the beginning of a substantial amount of star formation may have been delayed in Leo~I in comparison to the other dSph galaxies. It is unclear from the analysis presented in this paper whether Leo~I contains any stars as old as the Milky Way globular clusters.
3) There are a number of bright, yellow stars in the same area of the
CMD where anomalous Cepheids have been found in Leo I. These stars also
have the color and magnitude expected for the blue-loops of low
metallicity, few--hundred Myr old stars. We argue that some of these
stars may be classical Cepheids in the lower extreme of mass,
luminosity and period.
4) The evidence that the stars in Leo~I have a range in age complicates the determination of limits to the metallicity range based on the width of the RGB. At one extreme, if the width of the Leo~I RGB is attributed to the dispersion in age of its stellar population alone, the metallicity dispersion could be negligible. Alternatively, considering the variation in color of the isochrones depending on both age and metallicity, we set a maximum range of metallicity of $0.0001\le {\rm Z} \le$0.001--0.004: a lower limit of Z=0.0001 is valid for any age, and the (broad) upper limit varies from Z=0.0004 to Z=0.004, increasing with time.
In summary, Leo~I has unique characteristics among Local Group
galaxies. Due to its morphology and its lack of detectable quantities of
HI, it can be classified as a dSph galaxy. But it appears to have the youngest
stellar population among them, both because it is the only dSph lacking a conspicuous old population, and because it seems to have a larger fraction of
intermediate-age and young stars than other dSph galaxies. The
star formation seems to have proceeded until almost the present time,
without evidence of intense, distinct bursts of star formation.
Important questions about Leo~I still remain. An analysis of the data
using synthetic CMDs will give quantitative information about the
strength of the star formation at different epochs. Further
observations are needed to characterize the variable-star population in
Leo~I, and in particular, to search for RR Lyrae variable stars. This
will address the issue of the existence or not of a very old stellar
population in Leo~I. It would be interesting to check for variations of
the star formation across the galaxy and to determine whether the HB
is also missing in the outer parts of Leo~I.
Answering these questions is important not only to understand the
formation and evolution of Leo~I, but also in relation to general
questions about the epoch of galaxy formation and the evolution of
galaxies of different morphological types. The determination of the
strength of the star formation in Leo~I at different epochs is
important to assess whether it is possible that during intervals of high star
formation activity, Leo~I would have been as bright as the faint blue
galaxies observed at intermediate redshift. In addition, the duration
of such a major event of star formation may be important in explaining
the number counts of faint blue galaxies.
\acknowledgments
We want to thank Allan Sandage for many very useful discussions and a
careful reading of the manuscript. We thank also Nancy B. Silbermann,
Shoko Sakai and Rebecca Bernstein for their help through the various stages of the {\it HST} data reduction. Support for this work was provided by NASA grant
GO-5350-03-93A from the Space Telescope Science Institute, which is
operated by the Association of Universities for Research in Astronomy
Inc. under NASA contract NAS5--26555. C.G. also acknowledges
financial support from a Small Research Grant from NASA administered
by the AAS and a Theodore Dunham Jr. Grant for Research in Astronomy. A.A. thanks the Carnegie Observatories for their hospitality. A.A. is supported by the Ministry of Education and Culture of the Kingdom of Spain, by the University of La Laguna and by the IAC (grant PB3/94). M.G.L is supported by the academic research fund of Ministry of Education, Republic of Korea, BSRI-97-5411. The Digitized Sky Surveys were produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope.
\section{Introduction}
Extragalactic tails and bridges
are the best-known signatures
of recent gravitational interactions between galaxies
(e.g., Toomre $\&$ Toomre \markcite{tt72}1972).
They are also often the birthsites of
stars (Schweizer \markcite{s78}1978; Mirabel et al. \markcite{m91}1991,
\markcite{m92}1992).
Understanding when and how star formation is initiated
in unusual environments such as tails and bridges
can provide important clues to the processes
governing star formation in general, while studying
the properties of gas in tails and bridges
may provide information about gas behavior during
galaxy interactions and collisions.
One poorly-understood factor in star formation
initiation in tails and bridges
is the
amount and distribution of
molecular gas.
Because molecular gas is the material out of which stars
form, it is important to measure its distribution and
mass in these structures,
to test theories of how star formation is triggered in tails and bridges,
and to understand gas phase changes during galaxy collisions
and tail/bridge formation.
In an earlier NRAO 12m telescope survey of tidal tails in six
interacting/merging systems, we
searched for CO emission with no success (Smith $\&$ Higdon
\markcite{sh94}1994).
CO was also not found in the tail of the Leo Triplet galaxy
NGC 3628 (Young
et al. \markcite{y83}1983), the tidal dwarf in the Arp 105
system (Duc $\&$ Mirabel \markcite{dm94}1994;
Smith et al. \markcite{smith98}1998), or the HI-rich bridge
of NGC 7714/5 (Struck et al. \markcite{str98}1998).
The only locations where CO has been found outside of
the main disk of a galaxy are
a small concentration
($\sim$10$^6$ M$_\odot$)
of molecular
gas
near
an extended arm in the M81 system
(Brouillet et al. \markcite{b92}1992) and a larger mass (10$^9$ M$_\odot$)
near the peculiar Virgo Cluster
galaxy NGC 4438 (Combes et al. \markcite{c88}1988). In the latter case,
this gas is believed to have been removed from the disk
by ram pressure stripping during a high velocity collision with
its apparent companion, the S0 galaxy NGC 4435 (Kenney et al.
\markcite{k95}1995).
In this paper, we present the first detection
of a large quantity of molecular gas in an extragalactic tail, the eastern
tail of the peculiar starburst galaxy NGC 2782.
\section{NGC 2782}
The peculiar galaxy NGC 2782 (Figure 1) is an isolated galaxy
with two prominent tails (Smith \markcite{s91}1991,
\markcite{s94}1994; Sandage $\&$ Bedke \markcite{sb94}1994).
The longer western tail is rich in HI but faint in the optical,
and the atomic gas extends well beyond the observed stars
(Smith \markcite{s91}1991; Jogee et al. \markcite{j98a}1998).
The eastern tail has a gas-deficient optical knot at the tip
(Smith \markcite{s94}1994).
The HI in this tail is concentrated at the base of the stellar
tail (Smith \markcite{s94}1994).
HII regions are visible
in this location in the Arp Atlas \markcite{a66}(1966) photograph
as well as
the H$\alpha$ map of Evans et al. \markcite{e96}(1996), and
have been confirmed spectroscopically by
Yoshida, Taniguchi, $\&$ Murayama \markcite{ytm94}(1994).
Unlike most known double-tailed merger remnants (e.g., NGC 7252; Schweizer
\markcite{s82}1982), the
main body of NGC 2782 has an exponential light distribution
(Smith \markcite{s94}1994; Jogee et al. \markcite{j99}1999),
indicating that its
disk survived the encounter that created the tails.
The HI velocities of the NGC 2782 tails (Smith \markcite{s94}1994)
are opposite those expected from the H$\alpha$
(Boer et al. \markcite{b92}1992),
HI (Smith
\markcite{s94}1994), and CO (Jogee et al. \markcite{j98b}1999)
velocity fields of
the galaxy core, indicating that the tails are probably in a different
plane than the inner disk.
The center of NGC 2782 contains a well-known nuclear starburst
(Sakka et al. \markcite{s73}1973; Balzano \markcite{b83}1983;
Kinney et al. \markcite{k84}1984),
with an energetic outflow (Boer et al. \markcite{b92}1992; Jogee et al.
\markcite{j88a}1998; Yoshida et al. \markcite{y98}1998).
The disk of NGC 2782 appears
somewhat disturbed, with three prominent optical `ripples'
(Arp \markcite{a66}1966; Smith \markcite{s94}1994), one of which contains bright
H~II regions
(Hodge $\&$ Kennicutt \markcite{hk83}1983; Smith \markcite{s94}1994;
Evans et al. \markcite{e96}1996; Jogee et al. \markcite{j98a}1998).
The lack of an obvious companion galaxy to NGC 2782 (Smith \markcite{s91}1991),
as well the presence of two oppositely directed tidal tails,
suggests that it may be the remnant of a merger.
The survival of the NGC 2782 disk, however, indicates that if it
is a merger, the intruder was probably not of equal mass (Smith
\markcite{s94}1994).
It is possible that the optical concentration at the
end of the eastern tail is the remains of a low mass companion,
connected to the main galaxy by a stellar bridge.
The presence of the ripples in the disk as
well as the lack of HI at the tip of the eastern tail are consistent
with the
hypothesis that this companion passed through the disk
of the main galaxy (Smith \markcite{s94}1994).
The striking gas/star offset in this tail may be an example
of the differing behavior of gas and stars during a galaxy
collision: the gas may have been left behind
as
the companion passed through the larger galaxy.
In the longer western tail, in contrast to the eastern tail,
the HI extends beyond the optical tail (Figure 1).
In the above collision scenario, the longer western tail is
material
torn from the main galaxy's outer disk, which may have initially
been more extended in gas than in stars.
In our previous CO survey of tidal features, we searched
for CO in the longer western tail of NGC 2782 with no success
(Smith $\&$ Higdon \markcite{sh94}1994). In this paper, we present
new CO observations of
the shorter eastern tail that
reveal a large quantity of molecular gas in this feature.
As noted above, this feature may be either a tail
or a bridge plus companion, depending on how it formed.
For convenience throughout this paper, we will simply refer
to it as a tail. However, we note that it has
some morphological differences from `classical' tidal
tails (e.g., the Antennae), in particular, the concentration
of gas at the base of the stellar feature is unusual.
Throughout this paper, we assume a distance of 34 Mpc
(H$_o$ = 75 km s$^{-1}$ Mpc$^{-1}$) to NGC 2782.
\section{Single Dish CO (1 $-$ 0) Observations}
NGC 2782 was observed in the
CO (1 $-$ 0) line during
1996 December,
1997 April, May, and October,
and 1998 October
using the 3mm SIS
receiver on the NRAO 12m telescope.
Two 256$\times$2 MHz filterbanks, one for each
polarization, were used for these observations,
providing a total bandpass of 1300 km s$^{-1}$
centered at 2555 km s$^{-1}$ with a
spectral resolution of 5.2 km s$^{-1}$.
A nutating subreflector with a beam throw of 3$'$
was used, and each scan was 6 minutes
long. The beamsize FWHM is 55$''$ at 115 GHz.
The pointing was checked periodically with
bright continuum sources and was consistent
to 10$''$. The system temperatures ranged
from 300 to 400 K.
Calibration was accomplished using an ambient chopper wheel.
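As a quick consistency check (our own arithmetic, not a statement from the observing logs), the quoted 5.2 km s$^{-1}$ resolution and $\sim$1300 km s$^{-1}$ bandpass follow directly from the filterbank parameters at the 115.271 GHz CO (1$-$0) rest frequency:

```python
C_KMS = 2.998e5       # speed of light, km/s
F_CO_MHZ = 115271.0   # CO (1-0) rest frequency, MHz

def channel_width_kms(df_mhz, f_mhz=F_CO_MHZ):
    """Velocity width of one frequency channel: dv = c * df / f."""
    return C_KMS * df_mhz / f_mhz

dv = channel_width_kms(2.0)   # 2 MHz channels -> ~5.2 km/s
span = 256 * dv               # 256 channels -> ~1330 km/s total
print(round(dv, 1), round(span))
```

The total span comes out slightly above the quoted 1300 km s$^{-1}$; the difference is at the rounding level.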
We observed 17 positions in the NGC 2782 system. Fifteen of these
were
arranged in
a 5 $\times$ 3 grid at 25$''$ spacing. These include the center and
8 surrounding positions, as well as six positions
in the eastern tidal tail.
In addition, we re-observed the position in the western
tidal tail previously observed by Smith $\&$ Higdon \markcite{sh94}(1994)
and observed another position at the tip of the western tail.
In Figure 1,
these positions are marked on the HI and optical
maps from Smith \markcite{s94}(1994) and Jogee et al.
\markcite{j98a}(1998).
We have detected CO emission at 14 of the 17 observed positions in NGC 2782:
the center, the surrounding positions,
and five
positions in the eastern tail.
The sixth position in the eastern
tail was not detected.
The position in the western tail, previously observed but
undetected by Smith $\&$ Higdon \markcite{sh94}(1994), remains undetected.
The position at the tip of the western tail was also undetected.
The final summed scans
are shown in Figures 2 $-$ 4. Integrated fluxes,
rms noise levels, peak velocities, and line widths
are provided in Table 1.
For the first position in the western tail, the new data
have been combined with the older data.
Note that the noise levels for the positions in
the tails are considerably lower than for the other positions.
\section{Molecular Gas in the Eastern Tail of NGC 2782}
The most striking result of our 12m observations is
the detection of CO out in the eastern HI structure.
The central velocities of the CO lines in that tail and
the narrow CO line widths
are consistent with those in HI (Smith \markcite{s91}1991, \markcite{s94}1994),
showing that this molecular gas is associated with
the tail rather than the main disk.
This is illustrated in Figure 5, where velocity is plotted against right
ascension for both the 21 cm HI data from Smith (1994)
and the new CO data.
The CO and HI in the eastern tail are at a velocity of $\sim$2620 km s$^{-1}$,
redshifted relative
to the systemic velocity of 2555 km s$^{-1}$. The HI in the western tail is blueshifted.
Both the CO and HI gas in the disk, however, are blueshifted to the east of the nucleus
and redshifted to the west of the nucleus.
The molecular gas in the eastern tail, like the HI,
shows a reversal in velocity, an apparent `counter-rotation',
with respect to the gas in the inner disk.
As noted previously (Smith 1994), the tails
are probably not in the same plane as the disk so this may
not represent a true counter-rotation in the same plane.
Converting the CO fluxes for this tail into molecular gas masses is
very uncertain, because of
possible metallicity and CO self-shielding effects.
In tidal features, where the column densities and metallicities
tend to be low, the
Galactic
I$_{CO}$/N$_{H_2}$
ratio may underestimate the amount of
molecular gas (Smith $\&$ Higdon \markcite{sh94}1994) (see Section 7).
The gas in the eastern tail of NGC 2782
may be metal-poor
(Yoshida et al. \markcite{y94}1994),
although this has not yet been
quantified.
Using the Galactic
N$_{H_2}$/I$_{CO}$
ratio
for this tail may therefore underestimate the total amount of
molecular gas present in it.
For convenience in comparing with other galaxies and other tails/bridges,
we
will use this conversion factor
(2.8 $\times$ 10$^{20}$ cm$^{-2}$/(K km s$^{-1}$); Bloemen et al.
\markcite{b86}1986),
with the understanding
that the H$_2$ mass it provides may be a lower limit to the
true molecular gas mass in this feature.
Possible variations to this conversion factor
are discussed in detail in Section 7.
Assuming the emission fills the beam (coupling efficiency $\eta_c$ = 0.82),
the Galactic conversion factor gives an average H$_2$ column density
for the five observed locations
in the eastern tail of
2 $\times$ 10$^{20}$ cm$^{-2}$.
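As an illustration of the conversion (the measured line integrals are in Table 1; the value used below is hypothetical), an H$_2$ column density follows from a 12m line integral on the T$_R^*$ scale via the beam-coupling correction and the adopted Galactic factor:

```python
X_CO = 2.8e20    # Galactic N(H2)/I(CO), cm^-2 (K km/s)^-1 (Bloemen et al. 1986)
ETA_C = 0.82     # assumed beam-coupling efficiency for extended emission

def n_h2(i_co_trstar):
    """H2 column density from a line integral on the T_R* scale.

    T_mb = T_R* / eta_c, and N(H2) = X_CO * integral(T_mb dv).
    """
    return X_CO * i_co_trstar / ETA_C

# An illustrative line integral of ~0.59 K km/s reproduces the quoted
# average column density of ~2e20 cm^-2:
print(f"{n_h2(0.59):.1e}")
```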
The CO flux does not vary wildly from position to position
in this tail, showing that molecular clouds are distributed throughout
the feature, not concentrated in a single location.
Integrating over all five positions in the eastern tail,
the Galactic
N$_{H_2}$/I$_{CO}$ value gives
a total molecular gas mass for this tail
of 6 $\times$ 10$^8$ M$_\odot$.
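The total mass can be roughly checked from the quoted average column density. The paper's value comes from integrating the Table 1 fluxes; the sketch below instead assumes, purely for illustration, that each of the five pointings samples an independent 25$''$ $\times$ 55$''$ area at the adopted distance of 34 Mpc:

```python
import math

M_H = 1.6726e-24      # hydrogen atom mass, g
M_SUN = 1.989e33      # solar mass, g
PC_CM = 3.086e18      # cm per parsec

D_PC = 34e6                                        # assumed distance: 34 Mpc
PC_PER_ARCSEC = D_PC * math.radians(1.0 / 3600.0)  # ~165 pc per arcsec

# Hypothetical effective area: five pointings, 25" grid spacing x 55" beam
area_pc2 = 5 * (25 * PC_PER_ARCSEC) * (55 * PC_PER_ARCSEC)
area_cm2 = area_pc2 * PC_CM**2

N_H2 = 2e20                                 # quoted average column, cm^-2
m_h2 = N_H2 * area_cm2 * 2 * M_H / M_SUN    # H2 mass in solar masses
print(f"{m_h2:.1e}")                        # close to the quoted 6e8 M_sun
```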
This is the first detection of such a large quantity of molecular
gas in a tail or bridge.
This mass is two
orders of magnitude higher than that in the possible M81 cloud
(Brouillet et al. \markcite{b92}1992), and is similar to or greater than
that found in irregular galaxies using the same conversion
factor (Combes \markcite{c85}1985; Tacconi
$\&$ Young \markcite{ty87}1987).
The molecular to atomic gas mass ratio
for this tail is thus
0.6. This is higher than
the M$_{H_2}$/M$_{HI}$ ratio
derived for most dwarf irregular galaxies with the same
conversion factor (Combes \markcite{c85}1985; Tacconi
$\&$ Young \markcite{ty87}1987; Israel, Tacconi, $\&$ Baas
\markcite{itb95}1995).
It is also higher than that found
for Scd and Sm
galaxies,
but lower than the global values for earlier
high mass spiral galaxies (Young
$\&$ Knezek \markcite{yk89}1989). This ratio is consistent with
the value found in the outer regions of
the Milky Way and other spiral galaxies, at galactic radii of 5 $-$ 15 kpc (Bloemen
et al. \markcite{b86}1986;
Tacconi $\&$ Young \markcite{ty86}1986; Kenney, Scoville,
$\&$ Wilson \markcite{ksw91}1991).
\section{Star Formation in the Eastern Tail of NGC 2782}
To investigate the processes that trigger star formation
in tails and bridges,
it is important
to quantify the star formation rates, efficiencies, and morphologies
in these structures.
In Figure 6, we compare the HI structure of the eastern tail
with the H$\alpha$ map from Jogee et al. \markcite{j98a}(1998).
This map shows at least nine H~II regions in this tail
(Table 2).
Four of these were previously tabulated by Evans et al.
\markcite{e96}(1996).
Star formation is well-distributed throughout
this feature, not concentrated
in a single location.
Calibrating the H$\alpha$ image using the total H$\alpha$ flux for NGC 2782
from Smith \markcite{s94}(1994) gives a total H$\alpha$ luminosity for
this tail of 4.0 $\pm$ 1.7 $\times$ 10$^{39}$ erg s$^{-1}$.
This
falls within the range
spanned by irregular galaxies
(Hunter $\&$ Gallagher \markcite{hg85}1985;
Hunter, Hawley, $\&$ Gallagher \markcite{hhg93}1993), and is
close to the observed L$_{H\alpha}$
for well-known irregular galaxy NGC 6822 (Hunter et al.
\markcite{hhg93}1993).
Assuming an extended Miller-Scalo initial mass function (Kennicutt
\markcite{k83}1983) and no extinction correction,
the total star formation rate for the eastern NGC 2782 tail is
therefore between 0.01 and 0.05 M$_\odot$ year$^{-1}$.
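As a sketch of the rate calculation (assuming the tail's H$\alpha$ luminosity is expressed in erg s$^{-1}$, and taking the standard Kennicutt 1983 calibration SFR $=$ L$_{H\alpha}/1.12\times10^{41}$ erg s$^{-1}$ per M$_\odot$ yr$^{-1}$):

```python
K83 = 1.12e41    # Kennicutt (1983) H-alpha calibration, erg/s per (M_sun/yr)

def sfr_msun_per_yr(l_halpha_erg_s):
    """SFR from an H-alpha luminosity (no extinction correction)."""
    return l_halpha_erg_s / K83

# Quoted tail luminosity of (4.0 +/- 1.7) x 10^39 erg/s:
low, mid, high = (sfr_msun_per_yr(l) for l in (2.3e39, 4.0e39, 5.7e39))
print(round(low, 3), round(mid, 3), round(high, 3))
```

The resulting range is consistent with the quoted 0.01--0.05 M$_\odot$ yr$^{-1}$.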
The H$\alpha$ luminosities for the observed H~II regions in this
tail range from 5 $\times$ 10$^{37}$ erg s$^{-1}$ to
3 $\times$ 10$^{38}$ erg s$^{-1}$ (Table 2),
with the more luminous regions being in the south.
These luminosities are similar to those of the
brightest H~II regions in NGC 6822 (Hodge, Lee, $\&$ Kennicutt
\markcite{hlk89}1989); thus these H~II regions are not extremely luminous, being more than
an order of magnitude fainter than the 30 Doradus H~II region in the Large
Magellanic Cloud (Faulkner \markcite{f67}1967; Kennicutt \markcite{kennicutt84}1984).
The ratio of the star formation rate to the available molecular
gas for this feature,
L$_{H\alpha}$/M$_{H_2}$, is
0.002 L$_\odot$/M$_\odot$. This is very low,
compared to
global values for high mass galaxies
(0.001 $-$ 1 L$_\odot$/M$_\odot$, with the majority between 0.0025 and 0.1 L$_\odot$/M$_\odot$;
Young et al. \markcite{y96}1996).
This implies that the timescale
for depletion of the available gas
by star formation is very long, about 20 billion years.
A greater-than-Galactic N$_{H_2}$/I$_{CO}$ ratio in the NGC 2782
tail would make
these differences even more extreme.
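Both numbers can be sketched together (assuming L$_{H\alpha} \approx 4\times10^{39}$ erg s$^{-1}$, M$_{H_2} \approx 6\times10^8$ M$_\odot$, and a mid-range star formation rate of $\sim$0.03 M$_\odot$ yr$^{-1}$ from the estimate above):

```python
L_SUN_ERG_S = 3.826e33            # solar luminosity, erg/s

l_halpha = 4.0e39 / L_SUN_ERG_S   # H-alpha luminosity in L_sun (~1e6)
m_h2 = 6.0e8                      # molecular gas mass, M_sun
ratio = l_halpha / m_h2           # L_Halpha/M_H2, ~0.002 L_sun/M_sun

sfr = 0.03                        # assumed mid-range SFR, M_sun/yr
t_dep = m_h2 / sfr                # depletion timescale, ~2e10 yr
print(f"{ratio:.1e} {t_dep:.0e}")
```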
The H$\alpha$/CO ratio for this tail is also low compared to
irregular galaxies. For
eight irregular galaxies with CO and H$\alpha$ measurements
available (from Tacconi $\&$ Young \markcite{ty83}1983;
Hunter $\&$ Gallagher \markcite{hg96}1996; Hunter et al.
\markcite{hhg93}1993;
Young et al. \markcite{y96}1996; Madden et al.
\markcite{m97}1997; Israel \markcite{i97}1997), the L$_{H\alpha}$/M$_{H_2}$ ratios
(using the Galactic conversion factor for comparison purposes) range from
$\ge$0.01 to 1.9, much higher than our value for the NGC 2782 eastern tail.
Therefore
this tail has a very low star formation rate relative to its CO flux,
compared to global values for galaxies in general.
\section{Comparison With Other Tail/Bridge Features}
In Table 3, we compare the HI and implied H$_2$
column densities
of the eastern NGC 2782 tail with five
other extended features: the star-forming tail in the Antennae galaxy (NGC 4038/9),
the NGC 7714/5 bridge, the molecular gas concentration
outside of the main disk of NGC 4438, the Magellanic Irregular
in the Arp 105 system, and the northern tail of NGC 4676 (the `Mice').
We also include derived M$_{H_2}$/M$_{HI}$ values in this table.
As before, we are assuming the Galactic
N$_{H_2}$/I$_{CO}$
conversion factor
for convenience; in Section 7, we discuss possible variations in this
factor.
For the star-forming Antennae tail and the Arp 105 irregular,
the
M$_{H_2}$/M$_{HI}$
upper limits in the CO beam
are $\le$0.2, much less than the detected level in
the eastern NGC 2782 tail.
In the bridge of the interacting pair NGC 7714/5,
the CO/HI upper limit is even lower,
implying
M$_{H_2}$/M$_{HI}$ $\le$ 0.06.
On the other hand, in the NGC 4438 source, the CO mass is large
compared to the HI mass
(M$_{H_2}$/M$_{HI}$ $\sim$ 5).
Thus there is a wide range in the CO/HI
ratios in these features.
For NGC 4676 and the other tails measured but not detected in CO
(Young et al. \markcite{y83}1983; Smith $\&$ Higdon
\markcite{sh94}1994; Duc $\&$ Mirabel \markcite{dm94}1994;
Duc et al. \markcite{d97}1997), including our new measurements of the longer western tail
of NGC 2782,
the HI column densities are less than that in the eastern NGC 2782 tail,
and the derived
M$_{H_2}$/M$_{HI}$ upper limits in the CO beam are similar to or
higher than
the ratio for the NGC 2782 eastern tail,
so we are not able to make any
strong comparisons.
In Table 4, we compare
the star forming properties of
the eastern NGC 2782 tail with those of
the other objects in Table 3.
For the NGC 4438 feature, much of the ionized gas
may have been ionized by shocks rather than young stars (Kenney
et al. \markcite{k95}1995). Therefore, in Table 4 we list
the observed
H$\alpha$ luminosity as an upper limit
to the H$\alpha$ luminosity from young stars.
Arp 105 is not included in Table 4,
although star formation is
on-going in the Arp 105 structure
(Duc $\&$ Mirabel \markcite{dm94}1994),
because no global H$\alpha$ flux has been published for this
feature to date.
The H$\alpha$ luminosities are similar for
the NGC 2782 tail, the NGC 7714/5 bridge, and the Antennae
tail and a few times larger for the NGC 4676 tail.
For NGC 4438, the upper limit to L$_{H\alpha}$ is similar to the
measured values for the other systems.
The CO luminosity for the eastern NGC 2782 tail
and the NGC 4438 clump
are much higher than in the other systems, and therefore the implied
L$_{H\alpha}$/M$_{H_2}$
ratios are much lower, if the
N$_{H_2}$/I$_{CO}$
ratios are similar.
The L$_{H\alpha}$/M$_{H_2}$ ratio of NGC 2782,
and the upper limit for NGC 4438,
are more than 7 times lower than the lower limit for the Antennae dwarf
and 3 $-$ 4 times lower than the lower limits for
the NGC 7714/5 and NGC 4676 features.
Thus either the rate of star formation per molecular gas
mass differs from system to system, being lowest in NGC 2782 and NGC 4438,
or the
N$_{H_2}$/I$_{CO}$ ratios are lower in NGC 2782 and NGC 4438 than in the other
objects. These possibilities are discussed in Sections 7 $-$ 9.
We also note that the spatial distributions
of the H~II regions
vary from feature to feature for the objects in Table 4, so
the average ambient ultraviolet flux differs from
object to object.
The nine H~II regions in the NGC 2782 tail are spread out
over
a total area of 60 kpc$^2$, while in the Antennae, the three
H~II regions found by
Mirabel et al. \markcite{m92}(1992) are located within
a 10 kpc$^2$ region.
In the NGC 7714/5 bridge,
the area subtended
by the star forming regions is $\sim$36 kpc$^2$,
while in the NGC 4676 tail, the H~II regions in the 55$''$ (23 kpc) CO beam are aligned along
a narrow ridge $\sim$2 kpc or less in width (\markcite{h95}Hibbard 1995; \markcite{hg96}Hibbard
$\&$ van Gorkom 1996).
Thus the NGC 2782 H~II regions are more spread out than the H~II regions
in these other features, and therefore the ambient UV field is weaker.
The
H$\alpha$ luminosity functions
also
appear to differ from feature to feature.
Most of the observed
H$\alpha$ in the Antennae dwarf is arising from a single
luminous H~II region of 1.4 $\times$ 10$^{39}$ erg s$^{-1}$;
the NGC 4676 tail contains several knots with similar luminosities
(\markcite{h95}Hibbard 1995).
These H~II regions are
more luminous than any individual H~II region in the NGC 2782 feature (Table 2).
In NGC 7714/5, the three
brightest H~II regions in the bridge
(Gonz\'alez-Delgado et al. \markcite{g95}1995)
are also more luminous than any in the NGC 2782 tail,
but less luminous than the brightest in the Antennae tail.
\section{Possible Variations in the N$_{H_2}$/I$_{CO}$ Ratio}
In comparing the NGC 2782 tail to other features,
we must first address the question of possible system-to-system differences
in the
N$_{H_2}$/I$_{CO}$ ratio. The parameters that may
affect this ratio include the metallicity and dust content,
the column and volume density, and the ambient ultraviolet
radiation field.
Low dust extinction,
as well as low C and O abundances,
leads to more CO destruction and therefore
smaller CO cores in low metallicity molecular clouds (Maloney $\&$ Black
\markcite{mb88}1988;
Maloney \markcite{m90}1990; Maloney $\&$ Wolfire \markcite{mw96}1996).
CO interferometric studies of nearby dwarf galaxies support
this scenario;
the virial masses implied by the linewidths are often higher than
H$_2$ masses derived from CO fluxes using the standard Galactic
conversion ratio
(Dettmar $\&$ Heithausen
\markcite{dh89}1989;
Rubio et al. \markcite{r91}1991, \markcite{r93}1993a,\markcite{rlb93b}b;
Wilson \markcite{w94}1994, \markcite{w95}1995;
Arimoto et al. \markcite{a96}1996).
There is some suggestion that the
N$_{H_2}$/I$_{CO}$ ratio in low metallicity systems
scales with abundance, but with large scatter (Wilson \markcite{w95}1995;
Arimoto et al. \markcite{a96}1996).
One of the reasons for the
observed
system-to-system CO/HI variations seen in Table 3 may therefore be
abundance
variations. At this point, however, not enough information is available
about these features to test this hypothesis.
Less-than-Galactic oxygen abundances of 12 + log[O/H] = 8.4 and 8.6 have been
derived for
the Antennae and Arp 105 features, respectively
(Mirabel et al. \markcite{m92}1992; Duc $\&$ Mirabel \markcite{dm94}1994).
These are similar to the metallicity of the Large Magellanic Cloud (Dufour
\markcite{d84}1984;
Russell $\&$ Dopita
\markcite{rd90}1990), and lower than the average
value for the Milky Way (12 + log[O/H] = 9.0; Shaver
et al. \markcite{s83}1983).
For the Large Magellanic Cloud, an enhanced
N$_{H_2}$/I$_{CO}$ has been inferred
(Cohen \markcite{c88}1988; Israel $\&$ de Graauw \markcite{id91}1991; Mochizuki
et al. \markcite{m94}1994; Poglitsch et al. \markcite{p95}1995).
For the other four objects in Table 3, abundance
analyses have not yet been undertaken.
For the eastern tail of NGC 2782, the H~II region studied
by Yoshida et al. \markcite{y94}(1994) shows an enhanced [O~III] $\lambda$5007/H$\beta$
ratio, hinting at a less-than-solar metallicity, however, this
has not yet been quantified.
The nucleus of NGC 7714 has been shown to be metal-poor
(French \markcite{f80}1980; Garc\'ia-Vargas et al. \markcite{g97}1997), but
at present no abundance study has been done for
the gas in the NGC 7714/5 bridge, the NGC 4438 clump, or the NGC 4676 tail.
Thus it is not yet possible to determine how metallicity
may be affecting the
N$_{H_2}$/I$_{CO}$
ratios in these features.
CO and H$_2$ self-shielding variations may also produce the observed CO/HI
differences in these features.
CO becomes self-shielding at higher column densities
than H$_2$,
leading to higher
N$_{H_2}$/I$_{CO}$
ratios at column densities N$_H$ $\le$ 10$^{21}$ cm$^{-2}$
(van Dishoeck $\&$ Black \markcite{vb88}1988;
Lada et al. \markcite{l88}1988; Blitz, Bazell, and D\'esert
\markcite{bbd90}1990).
At column densities N$_H$ $\sim$ 5 $\times$ 10$^{20}$ cm$^{-2}$ or lower,
H$_2$ self-shielding also becomes an issue.
In the local interstellar medium,
the proportion of gas in molecular form
decreases rapidly
at a threshold level of $\sim$5 $\times$ 10$^{20}$ cm$^{-2}$
(Savage et al. \markcite{s77}1977; Federman et al. \markcite{f79}1979).
This threshold increases with decreasing density and
metallicity (Elmegreen \markcite{e89}1989).
The column densities of the features in Table 3 are
in the range where the lack of CO and possibly H$_2$ self-shielding may
be important.
However, the CO/HI ratio is not correlated with HI + H$_2$ column
density in this small sample.
For the NGC 4438, NGC 2782, and Antennae structures, there is a
trend of decreasing CO/HI ratios with decreasing HI + H$_2$ column densities.
In NGC 4438, it appears as if both the CO and H$_2$ thresholds are
exceeded, and the CO/HI ratio is very high.
In the Antennae galaxy, in contrast, CO (and maybe H$_2$) may not be well-shielded,
and the CO/HI ratio is very low. NGC 2782 lies between these two
extremes.
Our detection of CO in this tail implies that the CO
threshold is exceeded in at least portions of the tail, but perhaps
not over the entire feature.
NGC 7714/5 and Arp 105, however, do not fit this trend, while for
NGC 4676, the CO/HI upper limit is too high to be able to make any strong
constraints.
Arp 105 has a similar HI column density to NGC 2782 but less CO.
NGC 7714/5 has
an HI column density higher than the expected CO threshold, and higher
than that of NGC 2782, and yet
has a very low CO/HI upper limit. Perhaps this bridge has
significantly lower abundances than the other features,
and so a higher CO self-shielding threshold.
A related reason for the
observed CO/HI differences in Table 3
may be variations in the clumpiness
of the gas within the CO beam.
In NGC 4438, the physical size
of the CO
beam is only 2.2 kpc, 2.7 $-$ 4.5 $\times$ less than in
the other systems, so a higher average H$_2$ and CO column density is
not surprising.
In Arp 105 and NGC 4676, the beamsizes are 30 kpc and 23 kpc, respectively, so lower average CO
column densities are also not unexpected.
In the Antennae and NGC 7714/5 features, however, the beam subtends
6 kpc and 10 kpc, respectively, compared to 9 kpc in NGC 2782, yet the CO is much fainter.
Thus there is no trend of decreasing CO/HI ratio with beamsize.
Within the beam, however, there may be variations in how the gas is clumped;
perhaps in the NGC 2782 feature, the CO self-shielding limit is exceeded over
larger portions of the feature and so beam-dilution is less of a factor.
This issue could be addressed with higher resolution CO observations.
Another important factor which affects the
N$_{H_2}$/I$_{CO}$
ratio
is the ambient ultraviolet flux, which may be higher in
the Antennae, NGC 4676, and NGC 7714/5 features
than in the eastern NGC 2782 tail (see Section 6).
If two systems have similar low metallicities, dust
contents, and gas densities, the
one with the more intense
ambient
ultraviolet field will have more CO destruction and so a higher average
N$_{H_2}$/I$_{CO}$
ratio
(Maloney $\&$ Wolfire \markcite{mw96}1996).
This is consistent with the difference between the CO/HI ratios of the NGC 2782 tail
and the other features.
We conclude that system-to-system variations in the
N$_{H_2}$/I$_{CO}$ ratio likely play an important role in
determining the CO/HI values of these features. However, without detailed
analyses of the metal abundances in these structures we are not
able to quantify these differences.
\section{Tail/Bridge Formation Mechanisms}
Another factor
which may contribute to the
observed variations in the CO content of tails/bridges
is
differences in the formation mechanisms of these structures.
Two distinct processes contribute to
the formation of extragalactic tails and bridges:
tidal forces
(e.g., Toomre $\&$ Toomre \markcite{tt72}1972)
and
ram pressure stripping
(e.g., Spitzer $\&$ Baade \markcite{sb51}1951; Struck
\markcite{str97}1997).
The relative importance of these two mechanisms
probably varies from
system to system: in small impact parameter collisions
between gas-rich galaxies, cloud-cloud impacts and other
hydrodynamical effects may be important in forming
tails/bridges, while features pulled out during
distant encounters may be largely tidal.
Dissipation in the gas, in addition to possible
pre-collision differences in the gaseous and stellar distributions,
may cause large offsets between the gas and
the stars in bridges and tails.
Such offsets have been found in a number of
systems (Wevers et al. \markcite{w84}1984; Smith
\markcite{s94}1994; Hibbard $\&$ van
Gorkom \markcite{hv96}1996; Smith et al. \markcite{s97}1997).
Gas dissipation and shocks can occur both during the initial encounter
and also
during subsequent passages, as tails fall back into
the main galaxies (e.g., Hibbard and Mihos \markcite{hm95}1995).
The six features in Table 3 are quite different
morphologically. The Antennae and Arp 105 structures are end-of-tail clumps
(van der Hulst \markcite{v79}1979; van der Hulst et al. \markcite{vmb94}1994;
Duc et al. \markcite{d97}1997),
while in NGC 4676 H~II regions are seen along the full extent of
the tail (\markcite{h95}Hibbard 1995; \markcite{hg96}Hibbard $\&$ van Gorkom 1996).
The NGC 2782 gas is concentrated at the base of
a stellar tail, the targeted region in NGC 7714/5 is
in the middle of a bridge connecting two galaxies, and the molecular
concentration near NGC 4438 lies out of the plane of the galaxy.
Ram pressure stripping may have played a bigger role
in forming the eastern tail of NGC 2782 and the NGC 4438 clump
than the other features.
The NGC 2782 tail has a big gas/star offset, while
in the Antennae and NGC 4676 tails, the stars and gas are coincident
(van der Hulst et al. \markcite{v94}1994;
Hibbard $\&$ van Gorkom \markcite{hv96}1996). In the Arp 105 irregular, the
gas distribution is well-aligned with that of the stars,
except for a slight offset
at the southern end (Duc et al. \markcite{d97}1997).
The NGC 7714/5 bridge is actually two parallel bridges,
one made out of stars, the other of gas, indicating
that both tidal and hydrodynamical forces contributed to the
formation of this feature (Smith
et al. \markcite{ssp97}1997).
In NGC 4438, a distorted optical tail/arm lies 2 kpc away from
the CO concentration (Combes et al. \markcite{c88}1988); it is unclear whether
the molecular gas is associated with this stellar
structure or not (Kenney et al. \markcite{k95}1995).
We have ordered these six features in terms of the importance
of ram pressure stripping versus tidal forces in creating them.
The ranking is:
1) NGC 4438,
2) NGC 2782, 3) NGC 7714/5, 4) Arp 105, the Antennae, and NGC 4676.
The Antennae and NGC 4676 tails are classical tidal tails
(Toomre $\&$ Toomre \markcite{tt72}1972; Barnes \markcite{b88}1988;
Mihos, Bothun, $\&$ Richstone
\markcite{mbr93}1993);
the Arp 105 structure is also probably tidal (Duc et al. \markcite{d97}1997).
In contrast, the NGC 4438 clump is probably largely a product of ram
pressure stripping (Kenney et al. \markcite{k95}1995). The observed gas/star offsets in
the NGC 2782 tail and NGC 7714/5 bridge suggest that gas
dissipation played an important role in producing these features,
but the existence of stellar counterparts shows that tidal
forces also contributed (Smith 1994; Smith et al. 1997).
Interestingly, this splash/tidal ranking also correlates with
the CO/HI ratio in these systems. The two structures
with the most pronounced gas/star morphological differences have the largest
CO abundances.
One possible explanation for this correlation is simply a metallicity effect.
In NGC 2782 and NGC 4438,
where `splash' probably played an important role and the impact
parameters may have been smaller, the stripped gas, presumably
removed from the inner disk, may be more metal-rich than
material pulled from the outer disk in a grazing tidal interaction.
Therefore the
N$_{H_2}$/I$_{CO}$ ratio may be lower than in the other systems.
This possibility could be tested with detailed abundance studies.
The observed large CO fluxes from the NGC 4438 and NGC 2782 features
might be considered
somewhat surprising, in light of theoretical models of molecular
dissociation
during near head-on collisions.
In fast shocks,
H$_2$ and CO are dissociated (e.g., Hollenbach $\&$ McKee
\markcite{hm89}1989), so
in an extreme `splash', where high velocity cloud-cloud collisions
occur,
one may expect a large proportion of the
molecular gas to be dissociated (e.g., Harwit et al.
\markcite{h87}1987).
Direct evidence for strong shocks is present in the optical
spectrum of the NGC 4438 CO clump (Kenney et al. \markcite{k95}1995).
The existence of CO in the NGC 2782 and NGC 4438 features proves that, however they
formed, the collisions/encounters were not
drastic enough to dissociate all of the molecular gas, or, if the
gas were indeed dissociated,
sufficient time has passed for the H$_2$ to reform.
The molecule formation timescale is $\sim$ 10$^9$/n years, where n is in cm$^{-3}$
(Hollenbach $\&$ McKee \markcite{hm79}1979),
so assuming the age of the NGC 2782 structure
is $\sim$ 2 $\times$ 10$^8$ years (Smith \markcite{s94}1994),
if the average density in this tail is n $\ge$ 10
cm$^{-3}$, as expected for molecular clouds,
then it is quite possible that the molecules in this feature dissociated
during the collision and now have reformed.
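As a rough arithmetic check of this estimate (taking the illustrative value n = 10 cm$^{-3}$ quoted above), the reformation time is
\begin{equation}
t_{form} \sim \frac{10^9 \; {\rm yr}}{n/{\rm cm}^{-3}} = 10^8 \; {\rm yr} \; < \; 2 \times 10^8 \; {\rm yr}\;\;,
\end{equation}
comfortably shorter than the inferred age of the tail.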
\section{Star Formation Initiation}
One of the surprising results of this study is the low
L$_{H\alpha}$/M$_{H_2}$ ratio for the NGC 2782 tail.
Whether or not star formation is triggered in
a bridge or tail may depend in part on the relative
amounts of ram pressure stripping and tidal effects during the encounter.
In a near-head-on encounter, one might expect
more shocks and cloud fragmentation than in a more gentle
tidal encounter.
Theoretical models suggest that star formation may
be inhibited in `splash' features because of gas heating during
the collision (Struck \markcite{str97}1997), while in tidal features
gravitational compression may enhance star formation
(Wallin \markcite{w90}1990).
Thus in `splash' features, molecular gas may be distributed in small,
relatively diffuse clouds rather than concentrated in giant molecular
clouds with high column densities.
These theoretical
results suggest a trend in the star formation rate per molecular
gas mass with
increasing tidal contribution, consistent with our
results:
the two most likely
`splash' candidates have the lowest H$\alpha$/CO ratios
in the group.
A second possibility is that the gas surface density in
the NGC 2782 tail may be below a critical
surface density required for gravitational collapse, as has been surmised
for the outer regions of galactic disks (Kennicutt \markcite{k89}1989).
For a differentially rotating thin gas disk, the critical surface density is
$\Sigma_{crit}$ = $\kappa\sigma_v$/3.36G (Toomre
\markcite{t64}1964), where
$\kappa$ is the epicyclic frequency, $\sigma_v$ is the velocity dispersion,
and G is the gravitational constant.
Although a tail or bridge may not be directly
participating in the overall rotation of a galaxy,
we apply this argument by assuming that these structures
are thin, and replacing
$\kappa$
by 2$\Delta$v/R, twice the velocity gradient
southwest to northeast
along the long axis of the feature
(assuming the southeast to northwest shear in the tail is negligible).
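Explicitly, with this substitution the quantity we evaluate is
\begin{equation}
\Sigma_{crit} = \frac{\kappa\, \sigma_v}{3.36\, G} \longrightarrow \frac{2\, \Delta v}{R}\, \frac{\sigma_v}{3.36\, G}\;\;,
\end{equation}
where $\Delta v$ is the observed velocity spread along the long axis of the feature and $R$ is its length.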
In Table 4, we have included HI velocity gradients and dispersions for
the other features in our sample, as well as their
expected critical surface densities.
For NGC 4438, the HI data from Cayatte et al. (1990) have too low a
signal-to-noise ratio to estimate
the velocity gradient and dispersion accurately, so no critical
density is derived.
We note that the quoted velocity gradients are the observed gradients
along the
features, which do not take viewing perspective into account.
Therefore the derived critical densities are quite uncertain,
perhaps by a factor of a few.
Within these uncertainties, the critical densities of all the features
are similar, and are similar to the observed gas column densities.
For the eastern NGC 2782 tail, the observed N$_H$ is indeed lower
than the predicted critical density,
suggesting that the gas in this feature may be
relatively stable against gravitational collapse.
In fact, the observed N$_H$ for this tail is {\it greater}
than the expected self-shielding threshold for CO and H$_2$,
and yet {\it less} than the expected
critical density for gravitational collapse.
This is consistent with the observation of abundant CO with
relatively low H$\alpha$ luminosity.
In the NGC 7714/5 bridge,
the HI column density alone is so high that
it approaches the critical density for gravitational instability. The huge
mass of atomic gas in this bridge may by itself be enough to
trigger star formation and enhance its efficiency.
For NGC 4676,
the derived critical density is higher
than the
observed gas column density and for the Antennae the
critical and observed densities are similar, yet these
galaxies have high L$_{H\alpha}$/CO ratios.
This comparison suggests that
in the NGC 4676 tail
and perhaps in the Antennae feature
the beam
filling factor may be low and/or the
N$_{H_2}$/I$_{CO}$
ratio may be higher
compared to the
NGC 2782 tail.
\section{CONCLUSIONS}
Using the NRAO 12m telescope, we have found evidence for 6 $\times$ 10$^8$
M$_\odot$ of molecular gas in the eastern tail of NGC 2782.
Compared to both spiral and irregular galaxies,
the molecular gas content in the eastern tail of NGC 2782
is very high relative to the current rate of star formation,
implying a very long timescale for gas depletion.
Both the molecular gas and H~II regions in this feature are very extended,
spread out over a total area of 60 kpc$^2$.
Comparison with tidal or `splash' regions in other galaxies
shows a wide range in CO/HI
and H$\alpha$/CO values.
\vskip 0.2in
We thank the telescope operators and the staff of the NRAO 12m telescope
for their help in making these observations.
We are pleased to acknowledge funding for this project from
a NASA grant
administered by the American Astronomical Society.
This research has made use of the NASA/IPAC Extragalactic
Database (NED) which is operated by the Jet Propulsion Laboratory
under contract with NASA.
\section{Introduction}
Theoretical studies of photonic parton distributions of {\em{real}}, i.e.,
on-shell, photons have a long history initiated by Witten's work
\cite{ref:witten}.
On the experimental side the past few years have seen much
progress since the advent of HERA.
The observation of `resolved' photon induced $ep$ processes, like
(di-)jet photoproduction, allows for tests of the hadronic nature of
(real) photons which are complementary to structure function
measurements in $e^+e^-$ collisions, where new results from LEP/LEP2
have improved our knowledge as well \cite{ref:newexp}.
Studies of the transition of the (di-)jet cross section from the
photoproduction to the deep-inelastic scattering (DIS) region at HERA
point to the existence of a parton content also for {\em virtual} photons
\cite{ref:expvirt,ref:maxfield}.
These measurements have revived the theoretical interest in this subject
and have triggered a series of analyses of the dependence of
the $ep$ jet production cross section on the virtuality of the
exchanged photon \cite{ref:grs2,ref:jets}.
Recently, a next-to-leading order (NLO) QCD calculation of the
(di-)jet rate in $ep$ (and $e\gamma$) scattering, which
properly includes the contributions due to resolved virtual photons,
has become available \cite{ref:kp,ref:jetvip}, and
resolved virtual photons have been included
for the first time also in the Monte Carlo event generator {\tt RAPGAP}
\cite{ref:rapgap}.
Pioneering work on the parton structure of virtual photons was
already performed a long time ago [10-12].
However, phenomenological models for these distributions
have been proposed only in recent years [13-16] in view of the expected
experimental progress.
Ongoing measurements at HERA and future structure function measurements at
LEP2 should seriously challenge these models and
hopefully lead to a better understanding of the transition between
the photoproduction and the DIS regime.
To finish this introductory prelude, let us stress that photons provide
us with a unique opportunity to investigate their parton content
in a {\em continuous} range of masses (virtualities) in contrast to
the situation with nucleons or pions.
The framework for parton distributions of virtual photons,
theoretical expectations, and open questions
are briefly recalled in Sec.~2. The various different models for the
parton content of virtual photons are compared in Sec.~3,
supplemented by a short discussion of the treatment of heavy flavors
in Sec.~4. In Sec.~5 we sum up the different ways to measure the
parton densities of virtual photons in $ep$ and $e^+e^-$
experiments.
\section{Theoretical framework: definitions, expectations, open questions}
For clarity we henceforth denote the probed target photon with virtuality
$P^2=-p_{\gamma}^2$ by $\gamma (P^2)$, where $p_{\gamma}$ is the four
momentum of the photon emitted from, say, an electron in an $e^+ e^-$ or
$ep$ collider\footnote{In the latter case it is common to use $Q^2=-q^2$
instead of $P^2$, but we prefer $P^2$ according to the
original notation used in $e^+e^-$ annihilations
\cite{ref:uematsu,ref:rossi}, where it refers to
the virtuality of the probed
(virtual) target photon, and $Q^2$ is reserved for the highly virtual
probe photon $\gamma^*(Q^2)$, $Q^2=-q^2\gg P^2$.}. For real
$(P^2=0)$ photons we further simplify the notations by setting, as usual,
$\gamma \equiv \gamma (P^2=0)$.
The concept of photon structure functions for real and virtual
({\em transverse})
photons can be defined and understood, in close analogy
to deep-inelastic lepton-nucleon scattering, via the subprocess
$\gamma^*(Q^2) \gamma(P^2) \rightarrow X$, as in $e^+e^-\rightarrow e^{\pm}X$
(`single tag') or $e^+e^-\rightarrow e^+e^- X$ (`double tag').
The relevant `single tag'
differential cross section can be expressed as in the hadronic case
in terms of the common scaling variables $x$ and $y$
\begin{equation}
\label{eq:eq1}
\frac{d^2\sigma(e\gamma(P^2)\rightarrow eX)}{dx dy}=
\frac{2\pi\alpha^2 S_{e\gamma}}
{Q^4} \left[ \left(1+(1-y)^2\right)F_2^{\gamma (P^2)}(x,Q^2) - y^2
F_L^{\gamma (P^2)}(x,Q^2)\right]
\end{equation}
with $F_{2,L}^{\gamma (P^2)}$ denoting the photonic structure functions.
The measured $e^+e^-$ cross section is obtained by
convoluting (\ref{eq:eq1}) with the photon
flux for the target photon $\gamma (P^2)$ \cite{ref:flux}.
The range of photon `masses' (virtualities) produced is
\begin{equation}
\label{eq:eq2}
m_e^2\; y^2/(1-y) \le P_{min}^2 \le P^2 \le P_{max}^2 \le \frac{S}{2}\;
(1-y) (1- \cos \Theta_{max})\;\;,
\end{equation}
where $y$ is the energy fraction taken by the photon $(y=E_{\gamma}/E_e)$,
$S$ is the available squared c.m.s.\ energy, and $\Theta_{max}$ is the
maximal scattering angle of the electron in this
frame. $P^2_{min,max}$ in (\ref{eq:eq2}) are further determined by
detector specifications and/or an eventual tagging of the outgoing
electron at the photon producing vertex.
$P^2_{min}$ effectively measures, to a good
approximation, the dominant photon virtuality involved; in particular,
$P^2_{min}=m_e^2y^2/(1-y)\simeq 0$ corresponds to quasi-real photons.
Even in the latter case there is, however, still a
small contribution from the high-$P^2$, virtual photon tail of the
spectrum, which has to be estimated \cite{ref:dg,ref:aurenche}
when one tries to extract the parton densities of real photons.
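For orientation, the kinematic lower bound in (\ref{eq:eq2}) is indeed tiny for untagged electrons; taking, say, $y=0.5$,
\begin{equation}
P^2_{min} = \frac{m_e^2\, y^2}{1-y} \simeq \frac{(2.6\times 10^{-7}\,{\rm GeV}^2)\,(0.25)}{0.5} \simeq 1.3\times 10^{-7}\; {\rm GeV}^2\;\;,
\end{equation}
i.e., many orders of magnitude below typical hadronic scales.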
For {\em transverse} virtual target photons $\gamma (P^2)$, whose virtuality
$P^2$ is essentially given by $P^2 \simeq P^2_{min}$, one expects
\cite{ref:uematsu,ref:rossi} a
parton content $f^{\gamma (P^2)} (x,Q^2)$ along similar lines as for
real photons. The range of applicability of this `picture', however, deserves
further scrutiny.
For real photons $\gamma$ it is well-known that
in the framework of the quark parton
model (QPM) the $x$- and $Q^2$-dependence of
$F_{2,L}^{\gamma}\equiv F_{2,L}^{\gamma (P^2)}$
is fully calculable from the `pointlike' QED process
$\gamma^*(Q^2)\gamma \rightarrow q\bar{q}$ if one introduces
quark masses $m_q$ to regulate the mass singularities due to
$P^2=0$ \cite{ref:zerwas}.
However, this description is subject to perturbative QCD corrections
due to gluon radiation not present in the QPM \cite{ref:witten,ref:buras}.
The logarithmically enhanced contributions $\alpha_s \ln Q^2/Q_0^2$
can be resummed to all orders, removing the dependence on effective
quark masses, where $Q_0$ denotes some a priori {\em{not}} fixed
renormalization point somewhere in the perturbative
region $Q_0\gg\Lambda_{QCD}$.
Of course, this is not the whole story, since the
photon can undergo a transition
into a vector meson of the same quantum numbers, which is afterwards probed
by the $\gamma^*(Q^2)$ (Vector
Meson Dominance (VMD) assumption). This {\em{non}}-perturbative part obeys
the same evolution equations as known from the hadronic case.
Turning to {\em{virtual}} photons, i.e., $P^2\neq 0$, it is {\em{expected}}
\cite{ref:uematsu,ref:rossi} that for large enough virtualities $P^2$
one ends up with a {\em{fully perturbative}} prediction
irrespective of $Q^2$.
To facilitate the discussions, it is useful to define the relevant different
ranges of $P^2$:
\begin{displaymath}
\renewcommand{\arraystretch}{1.5}
\begin{array}{cccccc}
(\mbox{I}) & P^2\ll\Lambda_{QCD}^2\ll Q^2&,&
(\mbox{II}) & P^2\simeq \Lambda_{QCD}^2 &,\\
(\mbox{III}) & \Lambda_{QCD}^2 \ll P^2 \ll Q^2 &,&
(\mbox{IV}) & P^2 \simeq Q^2 &.
\end{array}
\end{displaymath}
Case (I) we have already discussed above, since it refers to a (quasi-)real
photon with $P^2\simeq 0$. In range (III) one can apply similar
considerations as long as one restricts oneself to {\em{transverse}}
virtual photons \cite{ref:uematsu,ref:rossi}, with the important
distinction that $P^2$ is now within the perturbative domain and hence can
serve to fix $Q_0$, i.e., $Q_0={\cal{O}}(P)$.
This is the basis for the above mentioned conjecture of
absolute predictability in this case, since any
non-perturbative VMD-inspired contributions are expected to vanish like
$(1/P^2)^2$ due to such a `dipole' suppression factor in the vector meson
`propagator'.
Several questions have to be addressed: up to which values of
$P^2$ (and $x$, $Q^2$) is the non-perturbative part relevant?
What are the lower and upper bounds on $P^2$ in (III),
i.e., where and how does the
transition to regions (II) and (IV), respectively, take place, and
down to which value of $P^2$ in (III) should one trust
perturbation theory?
For smaller $P^2$, i.e., for a transition to the parton content
of real photons (I), one has to find some appropriate, physically
motivated prescription which {\em{smoothly}} extrapolates
through region (II), where perturbation theory cannot be applied,
down to $P^2=0$.
On the other hand, $P^2$ is bounded from above by $P^2\ll Q^2$ in order
to avoid power-like (possibly higher twist) terms $(P^2/Q^2)^n$
which would spoil the dominance of the resummed logarithmic contributions
$\sim \alpha_s \ln Q^2/P^2$ and, furthermore, to guarantee the dominance of
the transverse photon contributions in physical cross sections.
For $P^2$ approaching $Q^2$ (region (IV)) the $e^+e^-$ result should
reduce to the one given by the {\em{full}} fixed order
box $\gamma^*(Q^2)\gamma(P^2)\rightarrow q\bar{q}$ including all
$\left(P^2/Q^2\right)^n$ terms and possibly
${\cal{O}}(\alpha_s)$ QCD corrections, which are unfortunately
unknown so far.
The question of when fixed order perturbation theory becomes
the more reliable prescription
and the concept of virtual transverse photonic parton distributions
(i.e., resummations) becomes irrelevant and perhaps misleading is
in some sense similar
to the question of whether heavy quarks should be
treated as massless partons or not, which was extensively discussed
in the literature recently \cite{ref:grshq,ref:acot}.
Both issues are characterized by the appearance of at least two
different, large scales, $P^2$ and $Q^2$ (or $m_q^2$ and $Q^2$),
which might be indicative for resummations or not.
In our case here, however, one is also interested in the
transition to a region where resummations are indispensable
(i.e., for real photons),
but the range of applicability of this approach
with respect to $P^2$ (and possibly $x$ and $Q^2$) cannot be
determined reliably until the full NLO corrections to the
$\gamma^*(Q^2)\gamma(P^2)$ box become available
to analyze its perturbative stability.
As already mentioned,
for a given $Q^2\gg P^2$ and increasing $P^2$ one expects
that the resummed results approach the QPM result
determined for $m_q^2\ll P^2\ll Q^2$, due to the shrinkage of the evolution
length, i.e., less gluon radiation.
The QPM result can be obtained from the process
$\gamma^*(Q^2)\gamma(P^2)\rightarrow q\bar{q}$, but now
$P^2\neq 0$ can act as the regulator and no quark masses have to be
introduced. Taking the
limit $P^2/Q^2\rightarrow 0$ whenever possible, one obtains for
$F_2^{\gamma(P^2)}$ \cite{ref:uematsu,ref:rossi}
\begin{equation}
\label{eq:eq3}
\frac{1}{x} F_{2,QPM}^{\gamma (P^2)}(x,Q^2) = 3\sum_q e_q^4
\frac{\alpha}{\pi} \Bigg\{\left[x^2+(1-x)^2\right]
\left( \ln \frac{Q^2}{P^2} + \ln \frac{1}{x^2}
\right) -2 +6x -6x^2\Bigg\} \;\;.
\end{equation}
It is important to notice that (\ref{eq:eq3})
is {\em different} from the result for on-shell $(P^2=0)$
photons \cite{ref:zerwas}, due to the different
regularization adopted here\footnote{Note
that $F_L^{\gamma (P^2)}$ is independent of the regularization
adopted for calculating $\gamma^* (Q^2) \gamma (P^2)\rightarrow q\bar{q}$.}.
This difference will be relevant also for the
formulation of a model for the parton content of virtual photons, since
it is part of the perturbatively calculable boundary
condition in NLO \cite{ref:uematsu,ref:rossi}.
The $Q^2$-evolutions of the photonic parton distributions
are essentially the same for real and virtual transverse photons.
The inhomogeneous evolution equations are
most conveniently treated in the Mellin $n$ moment space,
where all convolutions simply factorize, and
the solutions can be given analytically (see, e.g.,
\cite{ref:disgamma,ref:grs}).
Let us only recall here that the distributions $f^{\gamma(P^2)}(x,Q^2)$,
obtained from solving the inhomogeneous evolution equations, can be
separated into a `pointlike' (inhomogeneous) and a `hadronic'
(homogeneous) part
\begin{equation}
\label{eq:eq4}
f^{\gamma (P^2),n}(Q^2)= f_{PL}^{\gamma (P^2),n}(Q^2)+
f_{HAD}^{\gamma (P^2),n} (Q^2)\;\;.
\end{equation}
In NLO the pointlike singlet solution is schematically given by
\cite{ref:disgamma,ref:grs}
\begin{equation}
\label{eq:eq5}
\vec{f}_{PL}^{\,\gamma (P^2),n} = \left( \frac{2\pi}{\alpha_s}+\hat{U}\right)
\left(1-L^{1+\hat{d}\,}\right) \frac{1}{1+\hat{d}}\, \vec{a} +
\left(1-L^{\hat{d}\,}\right) \frac{1}{\hat{d}}\, \vec{b}\;\;,
\end{equation}
and the usual NLO hadronic solution can be found, e.g., in
\cite{ref:disgamma,ref:grs}.
$\vec{a}$, $\vec{b}$, $\hat{d}$, and $\hat{U}$ in (\ref{eq:eq5}) stand
for certain combinations of the photon-parton splitting functions and
the QCD $\beta$-function \cite{ref:disgamma,ref:grs}, and
$L\equiv \alpha_S(Q^2)/\alpha_s(Q_0^2)$.
Let us finish this technical part by quoting the relevant NLO expression
for the structure function $F_{2}^{\gamma (P^2)}(x,Q^2)$.
It should be pointed out that the treatment of and
expressions for $f^{\gamma (P^2)}(x,Q^2)$
(as {\em{on-shell}} transverse partons obeying the usual
$Q^2$-evolution equations) presented above
{\em{dictate}} an identification of the relevant resolved
$f^{\gamma (P^2)} X \rightarrow X'$ sub-cross sections
with that of the real photon according to
$\hat{\sigma}^{f^{\gamma (P^2)} X \rightarrow X'} =
\hat{\sigma}^{f^{\gamma} X \rightarrow X'}$.
In particular, the calculation of $F_2^{\gamma (P^2)}(x,Q^2)$ requires
the same hadronic Wilson coefficients $C_{2,q}$ and $C_{2,g}$ as for $P^2=0$,
\begin{equation}
\label{eq:eq6}
F_2^{\gamma (P^2)}\! =\!\! \sum_{q=u,d,s} \!\! 2x e_q^2 \Bigg\{
q^{\gamma (P^2)}\! +\frac{\alpha_s}{2 \pi}
\left(C_{2,q} \ast q^{\gamma (P^2)} +
C_{2,g} \ast g^{\gamma (P^2)} \right)\! +
\frac{\alpha}{2\pi} e_q^2 C_{2,\gamma} \Bigg\}+ F_{2,c}^{\gamma (P^2)} \;,
\end{equation}
where $F_{2,c}^{\gamma (P^2)}$ represents the charm contribution (see
Sec.~4) and $\ast$ denotes the usual Mellin convolution.
Note that in the ${\rm{DIS}}_{\gamma}$ scheme, the NLO
direct photon contribution $C_{2,\gamma}$ in (\ref{eq:eq6})
is absorbed into the evolution
of the photonic quark densities, i.e., $C_{2,\gamma}=0$ \cite{ref:disgamma}.
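Schematically, and using the conventions implicit in Eq.~(\ref{eq:eq6}), this absorption amounts to the scheme transformation
\begin{equation}
q^{\gamma (P^2)}\Big|_{{\rm{DIS}}_{\gamma}} = q^{\gamma (P^2)}\Big|_{\overline{\rm{MS}}} + \frac{\alpha}{2\pi}\, e_q^2\, C_{2,\gamma}\;\;, \qquad g^{\gamma (P^2)}\Big|_{{\rm{DIS}}_{\gamma}} = g^{\gamma (P^2)}\Big|_{\overline{\rm{MS}}}\;\;,
\end{equation}
which leaves the physical structure function $F_2^{\gamma (P^2)}$ itself unchanged.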
The difference in the QPM expressions for $F_2^{\gamma (P^2)}$ between
real and virtual photons (as pointed out below Eq.~(\ref{eq:eq3})),
i.e., in the expressions for $C_{2,\gamma}$, is then accounted for by
a perturbatively calculable boundary condition
for $q^{\gamma (P^2)}$ in NLO \cite{ref:grs}.
The LO expression for $F_2^{\gamma (P^2)}$ follows from
(\ref{eq:eq6}) by dropping all ${\cal{O}}(\alpha_s)$ and $C_{2,\gamma}$ terms.
Finally, it should be noted that $F_2^{\gamma (P^2)}$
is kinematically constrained within \cite{ref:rossi}
$0\le x\le (1+P^2/Q^2)^{-1}$.
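This bound simply reflects the requirement of a non-negative invariant mass for the hadronic final state: with $x=Q^2/2p_{\gamma}\cdot q$ one has
\begin{equation}
W^2 = (q+p_{\gamma})^2 = \frac{1-x}{x}\, Q^2 - P^2 \;\ge\; 0 \quad \Longrightarrow \quad x \le \left(1+P^2/Q^2\right)^{-1}\;\;.
\end{equation}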
\section{Comparison of different theoretical models}
Let us now briefly highlight the main features of the available
theoretical models for the parton densities of virtual photons:
\newline
\noindent
{\bf{GRS (Gl\"{u}ck, Reya, Stratmann) \cite{ref:grs}:}}
The GRS distributions provide a straightforward and simple extension
of the phenomenologically successful GRV photon densities \cite{ref:grvphoton}
to non-zero $P^2$ in LO {\em and} NLO. As for the GRV densities, the
NLO boundary conditions are formulated in the $\mathrm{DIS}_{\gamma}$
factorization scheme, originally introduced for $P^2=0$
to overcome perturbative instability problems arising in the conventional
$\overline{\mathrm{MS}}$ scheme for large values of
$x$ (see \cite{ref:disgamma} for details).
At the low input scale $Q_0=\mu \simeq 0.5\,\mathrm{GeV}$,
universal for all `radiatively generated' GRV distributions
(proton, pion, and photon), the parton densities of
real photons are solely given by a simple VMD-inspired input
in LO and NLO($\mathrm{DIS}_{\gamma}$).
All one needs to fully specify the distributions for $P^2\ne 0$ is a
simple, physically reasonable prescription which smoothly
interpolates between $P^2=0$ (region (I)) and $P^2\gg \Lambda_{QCD}^2$
(region (III)). This may be fixed by \cite{ref:grs}
\begin{equation}
\label{eq:eq7}
f^{\gamma (P^2)}(x,Q^2=\tilde{P}^2) = \eta(P^2)
f_{non-pert}^{\gamma (P^2)}
(x,\tilde{P}^2) + \left[ 1- \eta(P^2)\right]
f_{pert}^{\gamma (P^2)}(x,\tilde{P}^2)
\end{equation}
with $\tilde{P}^2=\max(P^2,\mu^2)$ and
$\eta(P^2) = (1 +P^2/m_{\rho}^2)^{-2}$
where $m_{\rho}$ refers to some effective mass in the
vector-meson propagator.
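For orientation, this dipole factor suppresses the non-perturbative component rapidly with increasing virtuality:
\begin{equation}
\eta(0) = 1\;\;, \qquad \eta(m_{\rho}^2) = \frac{1}{4}\;\;, \qquad \eta(3\, m_{\rho}^2) = \frac{1}{16}\;\;,
\end{equation}
so that for $P^2$ of a few ${\rm GeV}^2$ the input is already essentially purely perturbative.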
Note that the ansatz (\ref{eq:eq7}) implies that the input parton
distributions are frozen at the input scale $\mu$ for
real photons for $0\le P^2\le \mu^2$ such that the only
$P^2$ dependence in region (II) stems from the dipole dampening
factor $\eta(P^2)$.
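The strength of this dipole damping is easy to quantify. The short Python sketch below evaluates $\eta(P^2)$, assuming for illustration the usual effective mass $m_\rho\simeq 0.77\,$GeV (the model leaves this mass as an effective parameter):

```python
# Dipole damping factor eta(P^2) = (1 + P^2/m_rho^2)^(-2) of Eq. (7),
# assuming an effective vector-meson mass m_rho ~ 0.77 GeV.
M_RHO2 = 0.77**2  # GeV^2

def eta(P2):
    """Weight of the non-perturbative (VMD) input at photon virtuality P2 (GeV^2)."""
    return (1.0 + P2 / M_RHO2) ** -2

for P2 in (0.0, 0.2, 1.0, 10.0):
    print(f"P^2 = {P2:5.1f} GeV^2  ->  eta = {eta(P2):.3f}")
# eta falls from 1 at P^2 = 0 to ~0.14 at 1 GeV^2 and ~0.003 at 10 GeV^2,
# i.e., the VMD component is fully switched off only for P^2 >> m_rho^2.
```

This makes concrete why the perturbative component takes over only at surprisingly large $P^2$ of order $10\,{\rm GeV}^2$.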
In NLO($\mathrm{DIS}_{\gamma}$) the perturbatively calculable
input $f^{\gamma (P^2)}_{pert}(x,\tilde{P}^2)$
in Eq.~(\ref{eq:eq7}) is determined by the QPM box result (\ref{eq:eq3});
in LO it vanishes (see \cite{ref:grs} for details).
Since almost nothing is known experimentally about the parton structure
of vector mesons, the VMD-like non-perturbative input
is simply taken to be proportional to the
GRV pion densities $f^{\pi}$ \cite{ref:grvpion}
\begin{equation}
\label{eq:eq8}
f_{non-pert}^{\gamma (P^2)}(x,\tilde{P}^2) =
\kappa\; (4\pi \alpha/f_{\rho}^2)
\times \left\{ \begin{array}{ccc}
f^{\pi}(x,P^2)& , &P^2>\mu^2 \\
& & \\
f^{\pi}(x,\mu^2) &,& 0\le P^2 \le \mu^2
\end{array} \right.
\end{equation}
where $\mu$, $\kappa$, $f_{\rho}$ are specified in \cite{ref:grvphoton}.
\begin{figure}[th]
\begin{center}
\vspace*{-0.6cm}
\epsfig{file=proc2fig1.eps,width=9cm,angle=90}
\caption{\label{viphfig7} \sf GRS \cite{ref:grs}
LO and NLO$(\mathrm{DIS}_{\gamma})$
predictions for the $u$-quark and gluon content
of a virtual photon for $Q^2=10\,\mathrm{GeV}^2$
and various fixed values of $P^2 ({\rm{GeV}}^2)$.
For comparison, the LO and NLO GRV parton distributions of the
real photon $(P^2=0)$ \cite{ref:grvphoton} are shown as well.}
\end{center}
\vspace*{-0.7cm}
\begin{center}
\epsfig{file=proc2fig2.eps,width=8.5cm,angle=270}
\caption{\label{viphfig5} \sf NLO GRS predictions for $F_{eff}^{\gamma (P^2)}
\equiv F_2+\frac{3}{2} F_L$ \cite{ref:grs}.
The data points are taken from PLUTO \cite{ref:pluto}. The purely
perturbative results correspond to $\eta\equiv 0$ in Eq.~(\ref{eq:eq7}).}
\end{center}
\vspace*{-0.7cm}
\end{figure}
The resulting $u$-quark and gluon distributions
$u^{\gamma(P^2)}(x,Q^2)$ and $g^{\gamma(P^2)}(x,Q^2)$, respectively,
are shown in Fig.~1 for $Q^2=10\,\mathrm{GeV}^2$ and
some representative values of $P^2$.
With $g^{\gamma (P^2)}$ being the same
in the ${\rm{DIS}}_{\gamma}$ and $\overline{{\rm{MS}}}$
scheme, the shown ${\rm{DIS}}_{\gamma}$ results for $u^{\gamma (P^2)}$
can be easily transformed to the conventional $\overline{{\rm{MS}}}$
scheme \cite{ref:disgamma,ref:grvphoton,ref:grs}.
As can be inferred from the purely perturbative ($\eta \equiv 0$)
contributions, the non-perturbative components, entering for
$\eta \ne 0$ in Eq.~(\ref{eq:eq7}), are non-negligible and partly even
dominant (especially for $x\lesssim 0.01$).
It turns out \cite{ref:grs} that only for
unexpectedly large $P^2\gg m_{\rho}^2$, say, $P^2 (\ll Q^2)$ larger
than about $10\,{\rm{GeV}}^2$,
does the perturbative component start to dominate over the
entire $x$ range shown (cf., e.g., Fig.~13 in \cite{ref:grs}
for $P^2=20,\,100\,{\rm{GeV}}^2$ and $Q^2=1000\,{\rm{GeV}}^2$).
The precise form of $\eta(P^2)$ in Eq.~(\ref{eq:eq7})
clearly represents, apart from $f_{non-pert}^{\gamma (P^2)}$ itself, the
largest uncertainty in this model and has to be tested by future experiments.
The only measurement of the virtual photon structure in
$\gamma^*(Q^2) \gamma(P^2)$ DIS available thus far \cite{ref:pluto},
is compared in Fig.~2 with the NLO GRS
prediction for $F_{eff}^{\gamma (P^2)}\equiv F_2+\frac{3}{2} F_L$,
the combination measured effectively by PLUTO \cite{ref:pluto}
(see also Sec.~5). Due to the poor statistics of the data and the
rather limited $x$ range, the resummed NLO result cannot be
distinguished from the naive, not resummed QPM result (dotted curve).
Finally, it should be noted that in the GRS approach heavy quarks
do not take part in the $Q^2$-evolution, i.e., there is {\em no}
`massless' photonic charm distribution \cite{ref:grs}.
Heavy flavors can only be produced {\em extrinsically},
and their contributions have to be calculated
according to the appropriate massive sub-cross sections (see also Sec.~4).
The LO GRS distributions are available in parametrized form for
$P^2<10\,\mathrm{GeV}^2$ and $Q^2\gtrsim 5 P^2$ \cite{ref:grs2}.
\newline
\noindent
{\bf{SaS (Schuler and Sj\"{o}strand) \cite{ref:sas1,ref:sas2}:}}
The starting point of their analysis is also a set of well-established
LO parton densities for real photons \cite{ref:sas1},
SaS 1D and SaS 2D, corresponding to two rather different
assumptions about the non-perturbative hadronic input\footnote{The
additional SaS 1M and SaS 2M sets \cite{ref:sas1} are theoretically
inconsistent, as the LO evolved densities are combined
with the NLO scheme-dependent photon-coefficient function $C_{2,\gamma}$
in the calculation of $F_2^{\gamma}$ in LO. These sets should not be used in
phenomenological analyses.}.
The SaS 1D set has a similarly low input scale $Q_0\simeq0.6\,\mathrm{GeV}$
as in the GRV \cite{ref:grvphoton} and GRS \cite{ref:grs} analyses, but
instead of simply relating the VMD input distributions to that of a
pion, a fit is performed to the coherent sum of the lowest-lying
vector meson states $\rho, \omega, \phi$.
For the SaS 2D set a `conventional' high input scale $Q_0=2\,\mathrm{GeV}$
is used at the expense of two additional fit parameters: one characterizing
the necessary additional `hard' component for the quark input at this larger
value of $Q_0$, the other modeling the effect of additional vector meson states
besides the ones already taken into account in the SaS 1D set.
The shapes of the SaS gluon densities are entirely fixed by
theoretical estimates, no direct or indirect constraints
from direct-photon production data in $\pi p$ collisions as in
the GRV analysis \cite{ref:grvphoton} have been imposed.
Contrary to GRS, heavy flavors are included as massless partons in
the photon above the threshold $Q^2>m^2_q$, but when calculating/fitting
the available $F_2^{\gamma}(x,Q^2)$ data, the massive
`Bethe-Heitler' cross section for $\gamma^*\gamma\rightarrow c\bar{c}$
is used instead, which is, however,
not entirely consistent due to double counting.
The extension to non-zero $P^2$ is based on the fact that the moments
of the photonic parton densities can be expressed as a
dispersion-integral in the mass $k^2$ of the $\gamma \rightarrow q\bar{q}$
fluctuations, which links perturbative and non-perturbative contributions
\cite{ref:sas1,ref:sas2}.
Having assumed some model-dependent weight function for the
dispersion-integral, and after associating the low-$k^2$ part
with some discrete set of vector mesons (as for $P^2=0$),
one arrives at their final expression
for the parton densities of virtual photons \cite{ref:sas1,ref:sas2}
\begin{equation}
\label{eq:sas1}
f^{\gamma (P^2)}(x,Q^2) \!=\!\! \sum_V \frac{4\pi \alpha}{f_V^2}
\left[ \frac{m_V^2}{m_V^2+P^2}\right]^2 \! f^{\gamma,V}(x,Q^2,\tilde{P}^2) +
\sum_q \frac{\alpha}{\pi} e_q^2 \int_{\tilde{P}^2}^{Q^2} \!\!
\frac{dk^2}{k^2} f^{\gamma\rightarrow q\bar{q}}(x,Q^2,k^2)\;,
\end{equation}
where in the perturbative contribution the suppression factor
$[k^2/(k^2+P^2)]^2$ has been substituted by an effective lower
cut-off $\tilde{P}^2$ for the integration.
Both components $f^{\gamma,V}$ and $f^{\gamma\rightarrow q\bar{q}}$
in (\ref{eq:sas1}) integrate to unit momentum.
\begin{figure}[th]
\begin{center}
\vspace*{-1.5cm}
\epsfig{file=proc2fig3.eps,width=15cm}
\vspace*{-1cm}
\caption{\sf Comparison of the LO GRS predictions \cite{ref:grs} for the
$u$-quark and gluon content of virtual photons with the
SaS 1D and SaS 2D results for $\tilde{P}^2=\max (Q_0^2, P^2)$
\cite{ref:sas1} at $Q^2=10\,{\rm{GeV}}^2$ and for two values of $P^2$.}
\end{center}
\vspace*{-0.7cm}
\end{figure}
As in the GRS model above, the VMD part contains a dipole suppression
factor, which dampens all non-perturbative contributions with
increasing virtuality $P^2$. Contrary to the GRS approach, several
different choices for the input scale $\tilde{P}^2$ have been studied
apart from $\tilde{P}^2=\max (Q_0^2, P^2)$ \cite{ref:sas2}.
The differences between all these procedures can be viewed as a
measure for the theoretical uncertainty within this approach
(see, e.g., Figs.~1 and 2 in \cite{ref:sas2}).
It should be also noted that for some more complicated choices for
$\tilde{P}^2$, the photonic parton densities obey evolution equations different
from those of the real photon, e.g., the inhomogeneous
term can be modified by a factor $Q^2/(Q^2+P^2)$ \cite{ref:sas2}.
All different sets of distributions are available in parametrized form
\cite{ref:sas1,ref:sas2} but, for the time being,
the SaS analysis is restricted to LO only.
Fig.~3 compares the LO GRS with the LO SaS 1D and SaS 2D distributions
(choosing $\tilde{P}^2=\max (Q_0^2, P^2)$)
for $Q^2=10\,{\rm{GeV}}^2$ and two $P^2$ values relevant for future LEP2
measurements \cite{ref:gamgam}.
As can be seen, the SaS 1D results, which refer to an equally
low input scale as the one used in GRS, and the GRS densities
are rather similar, at least for the $u$-quark,
whereas the SaS 2D (quark) distributions are sizeably smaller
in this kinematical range, mainly due to the higher input scale.
For smaller values of $x$ (not shown in Fig.~3) the GRS
densities rise more strongly than the SaS densities, due to the different
non-perturbative input.
However, with increasing $P^2$ all differences get, of course, more or less
washed out, since one approaches the purely perturbative domain and the
differences in the treatment of the non-perturbative component
become negligible.
\newline
\noindent
{\bf{DG (Drees and Godbole) \cite{ref:dg}:}}
The aim of this analysis is to estimate the influence of the
high-$P^2$ tail in the photon flux in untagged or anti-tagged
events, i.e., to study the impact of virtual photons in a sample of almost
real photons. This effect should be taken into account if one tries to
extract the parton densities of real photons from such measurements.
To perform a quantitative analysis, DG provide a simple interpolating
multiplicative factor $r$ which can be applied to
{\em any} set of distributions of real photons.
Several different forms for $r$ are studied in \cite{ref:dg},
and one of the main alternatives is
\begin{equation}
\label{eq:dg1}
r=1-\frac{\ln (1+P^2/P_c^2)}{\ln (1+Q^2/P_c^2)}\;\;,
\end{equation}
where $P_c$ denotes some typical hadronic scale like, for
instance, the $\rho$ mass.
The factor $r$ is applied to all quark flavors, but the gluon is
expected to be further suppressed, since it is radiated off the quarks
\cite{ref:borz}:
\begin{equation}
\label{eq:dg2}
q^{\gamma (P^2)}(x,Q^2) = r q^{\gamma}(x,Q^2)\,\,,\,\,
g^{\gamma (P^2)}(x,Q^2) = r^2 g^{\gamma}(x,Q^2)\;\;.
\end{equation}
For typical $\gamma\gamma$ experiments with $Q^2\simeq 10\,\mathrm{GeV}^2$
in an untagged situation, virtual photon effects then suppress the
effective photonic quark and gluon content by
about 10 and $15\%$, respectively \cite{ref:dg}.
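The size of this suppression can be checked with a few lines of code. The sketch below evaluates the factor $r$ of Eq.~(\ref{eq:dg1}), taking $P_c=m_\rho$ as an illustrative choice; note that the quoted $10\%$ and $15\%$ figures additionally involve an average over the virtual-photon flux of an untagged sample, which is not performed here:

```python
import math

# DG suppression factor r = 1 - ln(1+P^2/Pc^2)/ln(1+Q^2/Pc^2), applied as
# q -> r*q and g -> r^2*g.  Pc = m_rho ~ 0.77 GeV is an assumed choice; the
# 10%/15% numbers quoted in the text also involve a flux average over P^2.
PC2 = 0.77**2  # GeV^2

def r(P2, Q2=10.0):
    """DG suppression factor for photon virtuality P2 at hard scale Q2 (GeV^2)."""
    return 1.0 - math.log(1.0 + P2 / PC2) / math.log(1.0 + Q2 / PC2)

for P2 in (0.1, 0.2, 0.5):
    rv = r(P2)
    print(f"P^2 = {P2} GeV^2: quark suppression {1-rv:.0%}, gluon {1-rv**2:.0%}")
# For P^2 ~ 0.2 GeV^2 and Q^2 = 10 GeV^2 one finds ~10% (quark) and ~19%
# (gluon) suppression, of the order of the flux-averaged values quoted above.
```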
Obviously, the above ansatz (\ref{eq:dg2}) does not change the $x$ shape of the
distributions, which would require more complicated forms for
$r$ \cite{ref:dg}. The recipe (\ref{eq:dg2}) does not appear
to be well suited for QCD tests of the virtual photon content:
on the one hand the densities
in (\ref{eq:dg2}) are not a solution of the inhomogeneous evolution
equations, and on the other hand there is no dipole power suppression factor
for the non-perturbative VMD part of the virtual photon densities as
in GRS \cite{ref:grs} or SaS \cite{ref:sas1,ref:sas2}, i.e., the
approximation (\ref{eq:dg2}) can only be applied in the large $x$ region
where the perturbative pointlike part dominates, whereas at smaller
$x$ it may grossly overestimate the densities for increasing virtuality
$P^2$.
A similar strategy as in Eqs.~(\ref{eq:dg1}) and (\ref{eq:dg2}) has
been used in \cite{ref:aurenche} including, however, a power
suppressed VMD part.
\section{Treatment of heavy flavors}
The question of how to treat heavy flavor $(m_q\gg\Lambda_{QCD})$
contributions to structure functions and cross sections in the most
appropriate and reliable way
has attracted a considerable amount of interest
in the past few years \cite{ref:grshq,ref:acot,ref:dghq}.
This was mainly triggered by the observation that the charm contribution
to the DIS proton structure function $F_2^p$ amounts to about $20-25\%$
in the small $x$ region covered by HERA.
In case of the photon structure function $F_2^{\gamma (P^2)}$
in (\ref{eq:eq6}), effects due to
charm are sizeable also in the large $x$ region due to the existence of
the direct/pointlike component, such that a proper treatment is even
more important here.
There are two extreme ways to handle heavy flavors: one can simply
include heavy flavors as massless partons in the evolution above
some threshold $Q^2\gtrsim m_q^2$, or one can stick to a picture with
only light partons in the proton/photon. In the latter case, heavy
flavors do not participate in the evolution equations at all and can be
produced only extrinsically.
In case of $F_{2,c}^{\gamma (P^2)}$ in (\ref{eq:eq6}),
two different contributions have
to be taken into account. Firstly, the direct `Bethe-Heitler' process
$\gamma^*(Q^2)\gamma(P^2)\rightarrow c \bar{c}$, and secondly the
resolved contribution $\gamma^*(Q^2)g^{\gamma(P^2)}\rightarrow c \bar{c}$.
For real photons these cross sections are known up to NLO and can be found
in \cite{ref:hq1}. In \cite{ref:hq2} it was shown
that the two contributions are
separated in the variable $x$: for large $x$, say, $x\gtrsim 0.05$,
the direct process dominates, whereas for $x\lesssim 0.01$ the dominant
contribution stems from the resolved part.
The fully massless treatment has been abandoned recently in all
existing modern sets of proton densities, simply because it does not exhibit
the correct $x$ {\em and} $Q^2$ dependent threshold behavior.
A potential problem with the massive treatment is the possibly large
logarithms in the relevant sub-cross sections far above the threshold,
which {\em might} call for resummation, i.e., for introducing a
`massless' heavy quark distribution.
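The size of these logarithms can be estimated with illustrative inputs (the values of $m_c$ and a fixed $\alpha_s$ below are assumptions for orientation, not fitted quantities):

```python
import math

# Size of the potentially large logarithm alpha_s * ln(Q^2/m_q^2) appearing
# in fixed-order massive cross sections far above threshold.  Illustrative
# inputs: m_c = 1.5 GeV and a crude, scale-independent alpha_s = 0.2.
M_C2 = 1.5**2   # GeV^2
ALPHA_S = 0.2

for Q2 in (10.0, 100.0, 1000.0):
    L = math.log(Q2 / M_C2)
    print(f"Q^2 = {Q2:6.0f} GeV^2: ln(Q^2/m_c^2) = {L:.2f}, alpha_s*L = {ALPHA_S*L:.2f}")
# Even at Q^2 = 1000 GeV^2 the product alpha_s * ln(Q^2/m_c^2) is only of
# order one, which is consistent with the observed perturbative stability of
# the fully massive production mechanisms discussed below.
```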
Therefore various `unified' prescriptions were proposed recently
\cite{ref:acot} which
reduce to the massive results close to, and to the massless
picture far above threshold. For the time being these studies have been
performed only for the proton densities and not in the context of photons.
However, as mentioned in Sec.~3, GRS \cite{ref:grs}
prefer to stick to the fully massive framework,
similarly to the case of the GRV proton densities \cite{ref:grvproton}.
In each case this is motivated by the observation that all relevant
fully massive production mechanisms appear to be perturbatively
stable, at least for all experimentally relevant values of $x$ and $Q^2$
(see, e.g., Refs.~\cite{ref:grshq,ref:hq2}).
Moreover, all theoretical uncertainties, in particular the
dependence on the factorization scale appear to be
well under control, leading to
the conclusion that there is no real need for any resummation procedure.
It should be mentioned that the relevant expressions for
$\gamma^*(Q^2)\gamma(P^2)\rightarrow c \bar{c}$ for non-zero $P^2$
are available only in LO so far \cite{ref:budnev}, hence a study of the
perturbative stability cannot be performed here yet.
Clearly, the fully massive treatment is much more cumbersome and
inconvenient than the massless or `unified' framework when
calculating, for instance, jet production cross sections in $ep$
or $\gamma\gamma$ collisions. One cannot simply increase the number of
active flavors by one unit and use a $c^{\gamma (P^2)}(x,Q^2)$ distribution.
Instead one has to calculate the relevant sub-cross sections with massive
quarks for the final state configuration under consideration, which
is much more involved and time-consuming in numerical analyses.
However, for large-$p_T$
jet production in LO ($m_q/p_T\ll 1$), for instance, one can simply
approximate the relevant massive cross sections by their massless
counterparts, neglecting of course all massless contributions with a
`heavy' quark in the initial state.
\section{Measuring $\gamma^*$-PDF's in $e^+e^-$ and $ep$ reactions}
Since there are several dedicated contributions which discuss
recent experimental progress or future prospects
\cite{ref:maxfield,ref:kp,ref:jung}, we can be fairly
brief here and concentrate only on topics not covered elsewhere.
Let us first of all delineate the $x,\, Q^2,$ and $P^2$
ranges covered by LEP2 and the expected statistical accuracy for the virtual
photon structure functions measurements \cite{ref:gamgam}.
Because of its higher energy and
integrated luminosity, LEP2 can provide improved information from double
tagged events as compared to the not very precise
results from PLUTO \cite{ref:pluto} shown in Fig.~2.
The most important double tag sample at LEP2 is expected to
come from events with $Q^2\gtrsim 3\,{\rm{GeV}}^2$ and
$0.1\lesssim P^2\lesssim 1\,{\rm{GeV}}^2$. For typically expected
$500\,\mbox{pb}^{-1}$ of data collected, about 800 such events will be seen,
covering $3\cdot 10^{-4}\lesssim x<1$ and $3\lesssim Q^2 \lesssim
1000\,{\rm{GeV}}^2$ \cite{ref:gamgam}. The yield of events with both $Q^2$ and
$P^2 \gtrsim 3\,{\rm{GeV}}^2$ seems to be too small for a meaningful analysis.
Fig.~4 shows the virtual photon structure function
$F^{\gamma (P^2)}_2(x,Q^2)$ in LO as predicted by the
SaS 1D, SaS 2D (using $\tilde{P}^2=\max (Q_0^2,P^2)$), and
GRS models in two bins for $P^2$ and $Q^2$.
The error bars indicate the statistical precision
expected for each $x$ bin using the SaS 1D densities
(similar for the SaS 2D and GRS distributions). A measurement of
$F_2^{\gamma (P^2)}(x,Q^2)$ within these bins, as distinct from the real
$(P^2=0)$ photon structure function (illustrated by the solid curves for
SaS 1D) should be possible at LEP2 and could be compared to
the different model predictions, which turn out to be
rather similar in the accessible $x,\,Q^2,$ and $P^2$ bins, except for
the smallest $x$ bins and $P^2\rightarrow 0$
($\langle P^2\rangle =0.2\,{\rm{GeV}}^2$).
The latter differences are of course related to the present ignorance
of the $P^2=0$ distributions for $x\rightarrow 0$, i.e.,
whether they either steeply rise as in case of GRV \cite{ref:grvphoton}
or show a rather flat $x\rightarrow 0$ behaviour as, e.g.,
in case of SaS 2D \cite{ref:sas1,ref:sas2}.
\begin{figure}[th]
\vspace*{-1.5cm}
\begin{center}
\epsfig{file=proc2fig5.eps,width=13.5cm}
\vspace*{-0.4cm}
\caption{\label{vphappfig1}\sf Expectations for the statistical accuracy
of the virtual photon structure measurement at LEP2 in two different
$P^2$ and $Q^2$ bins using the SaS 1D distributions \cite{ref:sas1}.
The SaS 1D predictions for $P^2=0$ and the results for the GRS
\cite{ref:grs} and SaS 2D models are shown as lines for comparison.
The upper (lower) curves for GRS and SaS 2D refer to
$P^2=0.2\, (0.5)\,{\rm{GeV}}^2$, respectively.
The figure is taken from \cite{ref:gamgam}.}
\end{center}
\vspace*{-0.7cm}
\end{figure}
However, apart from the experimental challenge there is an
additional complication already noticed by PLUTO \cite{ref:pluto}:
what is directly measured is, of course,
{\em not} $F_2^{\gamma (P^2)}$ but the
$\gamma^*(Q^2)\gamma(P^2)$ DIS cross section, which can be schematically
expanded as
$\sigma_{\gamma^*(Q^2)\gamma(P^2)}=\sigma_{TT}+\varepsilon_1
\sigma_{LT} + \varepsilon_2 \sigma_{TL} + \varepsilon_1 \varepsilon_2
\sigma_{LL}$, where $L$ and $T$ denote longitudinal and transverse
polarization, respectively, of the probe and the target photons, and
$\varepsilon_{1,2}$ are the $L/T$ $\gamma$-flux ratios.
For PLUTO \cite{ref:pluto} $\varepsilon_1\simeq \varepsilon_2\simeq 1$
$(\Leftrightarrow y\ll1)$ and {\em assuming} that
$\sigma_{LL}\simeq 0$ and $\sigma_{LT}=\sigma_{TL}$, as for the
QPM expressions for $\gamma^*_{T,L}(Q^2)\gamma_{T,L}(P^2)\rightarrow q\bar{q}$
for vanishing constituent quark masses \cite{ref:budnev},
one arrives at the combination \cite{ref:pluto}
$\sigma_{\gamma^*\gamma}\sim F_2 +3/2 F_L
\equiv F_{eff}^{\gamma (P^2)}$ effectively measured by PLUTO (cf.\ Fig.~2).
Hence, strictly speaking such measurements cannot be directly related
to the densities $f^{\gamma(P^2)}(x,Q^2)$, since only
{\em transverse} $(T)$ virtual photons are described by the GRS, SaS,
and DG models. Furthermore, in the QPM model \cite{ref:budnev}
it turns out that the
contribution due to longitudinal target photons is rather sizeable
at large $x$ even for $P^2/Q^2\ll1$, contrary to the expectation that
transverse photons should dominate in this region. Clearly, more work is
required here for a meaningful interpretation of any future results
from LEP2, possibly including also studies of the parton content of
longitudinal photons which have not been carried out so far.
In $ep$ collisions (di-)jet production is certainly the best tool
to decipher the parton structure of virtual photons, and a lot of
experimental and theoretical progress was reported at the workshop
\cite{ref:maxfield,ref:kp}.
There should be a hierarchy between the hard scale $\mu_f^2$
($=Q^2$ in $e^+e^-$) at which the virtual photon is probed
(typically $\mu_f^2={\cal{O}}(P^2+p_{T,jet}^2)$ in case of jet
production) and the photon virtuality $P^2$ (see footnote 1).
Exactly in this kinematical domain an excess in the dijet rate was
observed by H1 \cite{ref:maxfield}, which can be
nicely attributed to a resolved virtual photon contribution,
in accordance with all existing models for the $f^{\gamma (P^2)}$
described in Sec.~3.
Recently, similar studies were extended to
the production rate of forward jets \cite{ref:jung},
which is regarded as a test in favor of the BFKL dynamics
\cite{ref:mueller}. Indeed the
usual DGLAP (direct photon induced) cross section falls short
of the data by roughly a factor of two \cite{ref:h1forward}.
In \cite{ref:jung} it was demonstrated,
however, that the inclusion of the resolved virtual photon component
removes this discrepancy, and the full DGLAP results are then
in even better agreement with data than the BFKL results, in particular for
the two to one forward jet ratio \cite{ref:h1forward}.
However, one should be cautious not to jump to conclusions.
In order to suppress the phase space for the DGLAP evolution, the
$p_T^2$ of the forward jet is required to be of the same size as
the virtuality $P^2$ of the photon, hence there is no real
hierarchy between the hard scale $\mu_f^2={\cal{O}}(P^2+p_{T,jet}^2)$
and $P^2$ and thus no large logarithm $\ln \mu_f^2/P^2$ which
is resummed in $f^{\gamma (P^2)}$. Naively one would therefore expect only a
small resolved contribution, as was also
observed in the H1 jet analysis \cite{ref:maxfield} or in the
theoretical studies \cite{ref:kp} for $P^2\rightarrow \mu_f^2$,
rather than a gain by about a factor of two.
Hence the kinematics of forward jets seems to be very subtle
(for instance, the virtual photon content is only probed at large
momentum fractions $x_{\gamma}$), and
presumably the theoretical uncertainties due to scale variations and
changes in the model for the $f^{\gamma (P^2)}$ (in particular, of the input
scale $\tilde{P}^2$) are of the same size as the resolved photon
contribution itself. More detailed studies are clearly required here.
Furthermore, it should be kept in mind that
all BFKL results so far are based only on LO parton-level calculations.
It seems, however, that the forward jet kinematics is not suited to
distinguish between BFKL and DGLAP at HERA \cite{ref:jung}.
\section{Introduction}
Since the work of Loeb \& Mao (1994), the possibility of
explaining the discrepancies on mass determinations, found by
Miralda-Escud\'{e} \& Babul (1994), via non-thermal pressure support has
been widely discussed. The discrepancy arises
between the two most promising techniques for obtaining the masses of
clusters of galaxies. On one hand, the determination of masses in clusters of galaxies,
via X-ray data, is based on the hypothesis that the ICM is in hydrostatic
equilibrium with the gravitational potential, using the radial profiles of
density and temperature (Nulsen \& B\"{o}hringer
1995). On the other hand, gravitational lensing measures the projected
surface density of matter, a method which makes no assumptions on the
dynamical state of the gravitating matter
(Miralda-Escud\'{e} \& Babul 1994; Smail et al. 1997).
In clusters with diffuse radio emission X-ray observations can give a lower
limit to the strength of the magnetic field (the $3{\rm K}$ background photons
scattering off the relativistic electrons produces the diffuse X-ray
emission). Typically, this limit is $B\geq
0.1\;{\rm \mu G}$ (Rephaeli et al. 1987) on scales of $\sim 1\;{\rm Mpc}$.
Such a detection of cluster magnetic fields leads,
using ROSAT\ PSPC data together with a $327\,{\rm MHz}$ radio map of Abell 85, to
an estimate of $(0.95\pm 0.10)\;{\rm \mu G}$ (Bagchi et al. 1998).
In the case of Faraday rotation the information obtained is the upper limit
on the intensity of the field, and the measured values are $(RM\leq 100\;%
{\rm rad/m^2})$, which is more or less consistent with an intracluster field
of $B\sim 1\;{\rm \mu G}$, with a coherence length of $l_B\leq 10\;{\rm kpc}$.
This strength of the magnetic field corresponds to a ratio of magnetic to
gas pressure $p_B/p_{gas}$ $\leq 10^{-3}$, implying that $B$ does not
influence the cluster dynamics (at least on large scales). In inner regions
the magnetic fields are expected to be amplified due to the gas compression
(Soker \& Sarazin 1990). For frozen-in fields and homogeneous and spherically
symmetric inflow, $B\propto r^{-1}$ and $RM\propto
r^{-1}$, ($p_B\propto r^{-2}$ whereas gas pressure increases).
Very strong Faraday rotations were observed ($RM\sim 4000\;{\rm rad/m^2}$)
implying $B\geq 10\;{\rm \mu G}$ at $l_B\sim 1\;{\rm kpc}$ (Taylor \& Perley
1993; Ge \& Owen 1993, 1994).
\section{Evolution of the ICM with Magnetic Pressure}
Using a spherically symmetric finite-difference scheme Eulerian code,
the evolution of the intracluster
gas is obtained by solving the
hydrodynamic equations of mass, momentum and energy conservation (see Fria\c ca
1993), coupled to the state equation for a fully ionized gas with $10\%$
helium by number. The mass distribution, $M(r)$, is due to the contribution
of the X-rays emitting gas plus the cluster collisionless matter (which is
the sum of the contributions of galaxies and dark matter -- the latter being
dominant) following $\rho_{cl}(r)=\rho _c(1+r^2/a^2)^{-3/2}$,
{$\rho _c$ and $a$ (the cluster
core radius) are related to the line-of-sight velocity dispersion, $\sigma $,
by $9\sigma
^2=4\pi Ga^2\rho _c$. }The total pressure $p_t$ is the sum of thermal and
magnetic pressure, e.g. $p_t=p+p_B$. The constraints to the magnetic
pressure come from observations, from which $p_B=B^2/8\pi \simeq 4\times
10^{-14}\,{\rm erg\;cm}^{-3}$ (cf. Bagchi et al. 1998) for a diffuse
field located at $\sim 700h_{50}^{-1}{\rm kpc}$ from the cluster center.
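As a quick consistency check, inverting $p_B=B^2/8\pi$ for the quoted pressure recovers a field strength of the order of the Bagchi et al. (1998) estimate. A minimal sketch in cgs units:

```python
import math

# Invert p_B = B^2 / (8 pi) for the field strength, using the observational
# constraint p_B ~ 4e-14 erg cm^-3 quoted in the text (cgs: B in gauss).
p_B = 4.0e-14  # erg cm^-3

B = math.sqrt(8.0 * math.pi * p_B)  # gauss
print(f"B = {B*1e6:.2f} microgauss")
# ~1 microgauss, consistent with the (0.95 +/- 0.10) microgauss estimate
# for the diffuse field in Abell 85.
```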
The initial conditions for the gas are an isothermal atmosphere ({$%
T_0=10^7$}${\rm K}$) with $30\%$ solar abundance and density distribution
following that of the cluster dark matter. The evolution is followed until
the age of $14{\rm \;Gyr}$.
We assume: frozen-in field; spherical symmetry for the flow and the cluster
itself; and that at $r>r_c$ (the cooling radius), the magnetic
field is isotropic, i.e., $B_r^2=B_t^2/2=B^2/3$ and $l_r=l_t\equiv l$
(where $B_r$ and $B_t$ are the radial and transversal components of the
magnetic field $B$ and $l_r$ and $l_t$ are the coherence length of the
large-scale field in the radial and transverse directions). In order to
calculate $B_r$ and $B_t$ for $r<r_c$ we modified the calculation of the
magnetic field of Soker \& Sarazin (1990) by considering an inhomogeneous
cooling flow (i.e. $\dot{M}_i\neq \dot{M}$ varies with $r$). Therefore, the
two components of the field are then given by $D/Dt(B_{r^{}}^2r^4\dot{M}%
^{-2})=0$ and $D/Dt(B_{t^{}}^2r^2u^2\dot{M}^{-1})=0$. In our models it is
admitted that the reference radius is the cooling radius $r_c$. In fact, we
modify the geometry of the field when and where the cooling time comes to be
less than $10^{10}{\rm yr}$. Therefore, our condition to assume a
non-isotropic field is $t_{coo}\equiv 3k_BT/2\mu m_H\Lambda (T)\rho \la %
10^{10}{\rm yr}$.
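For orientation, the order of magnitude of this cooling time is easy to estimate. The sketch below writes the standard estimate $t_{cool}\sim 3k_BT/(2n\Lambda)$ in terms of the particle number density $n=\rho/\mu m_H$, with purely illustrative values ($T=10^7\,$K, $n=10^{-3}\,{\rm cm}^{-3}$, $\Lambda\sim 10^{-23}\,{\rm erg\,cm^3\,s^{-1}}$ for a cooling function normalized to $n^2$ — assumptions, not the fitted model quantities):

```python
# Order-of-magnitude cooling time t_cool ~ 3 k_B T / (2 n Lambda), with
# n = rho / (mu m_H) the particle number density.  Illustrative values only:
# T = 1e7 K, n = 1e-3 cm^-3, Lambda ~ 1e-23 erg cm^3 s^-1 (normalized to n^2).
K_B = 1.380649e-16   # erg/K
GYR = 3.156e16       # s

def t_cool(T, n, Lam):
    """Isochoric cooling-time estimate in seconds."""
    return 3.0 * K_B * T / (2.0 * n * Lam)

t = t_cool(T=1.0e7, n=1.0e-3, Lam=1.0e-23)
print(f"t_cool ~ {t / GYR:.1f} Gyr")
# A few Gyr at typical central ICM conditions -- below the 1e10 yr
# threshold, so the central regions do develop a cooling flow.
```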
\section{Models and Results}
There are four
parameters to consider in each one of the models: $\sigma =1000{\rm \;km/s}$,
the cluster velocity dispersion; $\rho _0${{$=1.5\times 10^{-28}$}}${\rm g\;cm}%
^{-3}$, the initial average mass density of the gas; $a=250\;{\rm kpc}$, the
cluster core radius; and $\beta _0=10^{-2}$ (model A)$,10^{-3}$ (model B),
the initial magnetic to thermal pressure ratio.
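The initial field strengths implied by these parameters can be sketched as follows; the mean molecular weight $\mu\simeq 0.6$ adopted below is an assumed value for a fully ionized gas with $10\%$ helium, not a quantity quoted in the text:

```python
import math

# Initial magnetic field implied by beta_0 = p_B / p_thermal, with
# p_thermal = n k_B T and n = rho_0 / (mu m_H).  mu ~ 0.6 is an assumed
# mean molecular weight for a fully ionized gas with 10% helium by number.
K_B = 1.380649e-16   # erg/K
M_H = 1.6726e-24     # g
MU = 0.6

def B_initial(beta0, rho0=1.5e-28, T0=1.0e7):
    """Seed field (gauss) for pressure ratio beta0 at the initial gas state."""
    n = rho0 / (MU * M_H)                 # cm^-3
    p_th = n * K_B * T0                   # erg cm^-3
    return math.sqrt(8.0 * math.pi * beta0 * p_th)

for beta0, label in ((1e-2, "A"), (1e-3, "B")):
    print(f"model {label}: B_0 ~ {B_initial(beta0)*1e6:.2f} microgauss")
# Sub-microgauss seed fields, subsequently amplified by compression in the flow.
```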
First of all, the evolution we follow here is characteristic of cooling
flow clusters and in this scenario we discuss the evolution of the basic
thermodynamics parameters. Considering the overall characteristics of our
models, we compare the present models with Peres et al. (1998) deprojection
results (based on ROSAT observations), pointing out that the central cooling
time here adopted as our cooling
flow criterion, i.e. $t_{coo}\la 10^{10}{\rm yr}$, is typical for a fraction
between $70\%$ and $90\%$ of their sample. This allows us to conclude that our
models, which develop cooling flows once the cluster reaches an age of $\sim
7-9{\rm \;Gyr}$, are typical of their sample.
\begin{figure}
\plottwo{sestofig1a.ps}{sestofig1b.ps}
\caption{Evolution of the density and temperature (left) and magnetic field
strength (right) profiles. Curves represent
early and late stages of the ICM evolution, as labeled, for the model A.}
\end{figure}
\begin{figure}
\plottwo{sestofig2a.ps}{sestofig2b.ps}
\caption{Evolution of the magnetic (dashed line) and thermal (full line)
pressures profiles on late stages of the ICM evolution for the model A
($\beta_0 = 10^{-2}$, left)
and model B ($\beta_0 = 10^{-3}$, right).}
\end{figure}
Figure 1 shows the evolution of the density, temperature and magnetic
field strength for model A; the steep central gradients of these quantities
at late stages of the ICM evolution clearly reveal the presence of the
cooling flow in the inner regions.
We chose two values of
magnetic field strength for the comparison of the results, on small and
large scales (see figure). Our results
for the magnetic field strength, and also for the pressure, on large
and small scales are in agreement with the observed ones.
Obviously the magnetic pressure (Figure 2) is compatible with the
magnetic field intensities and may be compared to the values determined by,
for instance, Bagchi et al. (1998), $p_B=B^2/8\pi \simeq 4\times 10^{-14}%
{\rm erg\;cm}^{-3}$, at scales of $700\;{\rm kpc}$, at the present
time. From the analysis of the magnetic pressures expected from our models
it is clear that they agree, as well as the magnetic field strength, with
the observations.
The present models are in many aspects similar to the one of Soker \&
Sarazin (1990). However there are two important differences between our
model and theirs: i) they take into account only small-scale magnetic field
effects; and ii) they consider the magnetic field isotropic even in the
inner regions of the cooling flow. As a matter of fact the magnetic pressure
reaches equipartition only at radii as small as $\la 1\;{\rm kpc}$ (model A)
or $\la 0.5\;{\rm kpc}$ (model B), because the central increase of the $\beta
$ ratio is moderate in our model. Our more realistic description of the
field geometry is crucial. This implies that the effect of the magnetic
pressure on the total pressure of the intracluster medium, even on regions
as inner as few kpc, is small. Evolutive models for the intracluster medium,
with a realistic calculation of the geometry and amplification of the
magnetic fields,
like the one presented here, indicate that magnetic pressure
does not affect the hydrostatic equilibrium, except in the innermost regions,
i.e. $r \la 1{\rm \;kpc}$ (see Gon\c calves \& Fria\c ca 1998 for a more
detailed discussion).
\acknowledgements{We would like to thank the Brazilian agencies FAPESP
(97/05246-3 and 98/03639-0), CNPq and Pronex/FINEP (41.96.0908.00)
for support.}
\section{The puzzles of the Big Bang model}
Cosmologists have long been dissatisfied with the ``Standard Big
Bang'' (SBB) model of the Universe. This is not due to any conflict
between the big bang theory and observations, but because of the limited
scope offered by the SBB to explain certain striking features
of the Universe. From the SBB perspective the homogeneity, isotropy,
and ``flatness'' of the Universe,
and the primordial seeds of galaxies and other structure
are all features which are ``built in'' from the beginning as
initial conditions. Cosmologists would like to explain these features
as being the result of calculable physical processes. A great
attraction of the Inflationary Cosmologies \cite{infl}
is that they address these
issues by showing on the basis of concrete calculations that a wide
variety of initial conditions evolve, during a period of cosmic
inflation, to reflect the homogeneity, isotropy, flatness and
perturbation spectrum that we observe today.
So far, {\em all} attempts to achieve this kind of
improvement over the SBB have
wound up taking the basic inflationary form, where the observable
Universe experiences a period of
``superluminal'' expansion. This is accomplished by modifying the
matter content of the Universe in such a way that
ordinary Einstein gravity becomes repulsive and drives inflationary
expansion.
This process is
in many ways remarkably straightforward and has found numerous
realizations over the years (\cite{infl1,infl2b,infl2,infl3}, etc),
although it might still be argued that a
truly compelling microscopic foundation for inflation has yet to
emerge.
One interesting question is whether inflation is the {\em right}
solution to the cosmological puzzles. Is inflation really what nature
has chosen to do? When this matter is discussed there is a
notable absence of any real competition to inflation, and this must be
counted in inflation's favour.
However, we believe the picture would become
much clearer if some kind of debate along these lines were possible.
To this end, we discuss here a possible alternative to inflationary
cosmology which, while not as well
developed as today's inflationary models,
might lead to some illuminating discussion.
In this alternative
picture, rather than changing the matter content of the Universe, we
change the speed of light in the early Universe.
We assume that the matter content of the Universe is the same as in the
SBB, that is, the Universe is radiation dominated at early times.
We also assume that Einstein's gravity is left unchanged,
in a sense made precise in Section~\ref{post}. The geometry and
expansion factor of the Universe are therefore the same as in
the SBB. However the {\it local} speed of light, as measured
by free falling observers associated with the cosmic expansion,
varies in time, decelerating from a very large value to its
current value.
We discuss below how Varying Speed of Light (VSL) models might
resolve the same cosmological puzzles as inflation, and
offer a resolution to the cosmological constant problem as well.
We shall not dwell on the possible mechanisms by means of which
the speed of light could have changed. Rather we wish to concentrate
on the conditions one should impose on VSL models for their cosmological
implications to be interesting. This phenomenological approach
should be regarded as a curiosity, which, we hope, will prompt
further work towards an actual theory in which the physical basis of
VSL models is realized.
One may doubt that such a self-consistent theory could ever be
constructed. We therefore feel forced to transcend the scope
of this paper, and discuss essential aspects of such a theory.
We find it befitting to start our discussion with an assessment
of the experimental meaning of a varying $c$ (Section~\ref{mean}).
We also need to be more specific about VSL theories in order
to tackle the flatness, cosmological constant, homogeneity,
and entropy problems. In Section~\ref{post} we state
what is actually required from any VSL theory to solve these problems.
In Appendix I, however, we lay out the foundations for such a theory.
\section{The meaning of a variable speed of light}\label{mean}
We first address the question of the meaning of a varying
speed of light. Could such a phenomenon be proved or disproved
by experiment? {\it Physically}
it does not make sense to talk about constancy
{\it or} variability of any dimensional ``constant''. A measurement of
a dimensional quantity must always represent its ratio to some
standard unit. For example, the length of my arm in meters is really
the dimensionless quantity given by the ratio of the arm length to
the length of a meter stick. If the ratio varied, one {\em could} interpret
this as a variation in either (or both) of the two lengths.
In familiar situations, there is usually a preferred interpretation
which distinguishes itself by giving a simpler view of the
world. Choosing a given person's arm as a standard of length would
require a whole range of simple objects to undergo peculiar dynamics,
whereas assuming the meter stick to be constant would usually give a
much simpler picture.
None the less, a given theory of the world requires dimensional
parameters. If these parameters varied, how would that process show up in
experiments? Suppose we set out to measure the speed of light.
For this one needs a length measure (rod) and a clock. In a world
described by a theory with time varying dimensional parameters, it is
quite possible that the rods and clocks, as well as the photon speeds,
could all vary. Because measurements are fundamentally dimensionless,
the experimental result will only measure some dimensionless
combination of the fundamental constants. Let us sketch a simple
illustration: Suppose we measure time with an atomic clock. Taking
the Rydberg energy ($E_R = m_e e^4/ 2(4\pi \epsilon_o
)^2\hbar^2$) to represent the dependence of all atomic energy levels on the
fundamental constants, the oscillation period of the atomic clock will be
$\propto \hbar /E_R$. Likewise, taking the Bohr radius ($a_0 =
4\pi\epsilon_0\hbar^2 / m_e e^2$) to reflect the
relationship between the lengths of ordinary objects (made of atoms)
and the fundamental constants, the length of our rod is $\propto a_0$.
Thus a measurement of $c$ with our equipment is really a measurement
of the dimensionless quantity
\begin{equation}
{c \over {a_0 / (\hbar /E_R)}} = {\hbar c\over a_0 E_R}
= {8\pi\epsilon_0\hbar c\over e^2} = {2\over \alpha}
\end{equation}
essentially the inverse of the fine structure constant.
We could of course use other equipment which depends in
different ways on the fundamental dimensionless constants. For
example, pendulum clocks will necessarily involve Newton's constant
$G$. Different experiments will result, which measure different
dimensionless combinations of the fundamental dimensional constants.
Our conclusion that physical experiments are only sensitive to
dimensionless combinations of dimensional constants is hardly a new one.
This idea has been often stressed by Dicke (eg. \cite{dicke}), and we
believe this is not controversial.
Thus, speaking in theoretical terms of time varying dimensional
constants can lead to problems.
To give a historical example,
papers \cite{baum,sol} were written claiming stringent
experimental upper bounds
on the time variability of the dimensional quantity $\hbar c$.
In these, the product $E\lambda$ was found to be the same for light
emitted at very different redshifts. From the de Broglie relation
$\hbar c=E\lambda/2\pi$ one infers the constancy of $\hbar c$.
Bekenstein gives an illuminating discussion
of the fallacy built into this argument
\cite{beck}. Built into $E\propto 1/a$
and $\lambda\propto a$ is the assumption that $\hbar c$ is constant,
for otherwise the wavevector $k^\mu$ and the momentum vector $p^\mu$
could not both be parallel transported. Hence the experimental
statement that $\hbar c$ is constant is circular.
What would we do therefore if we were to observe changing dimensionless
quantities? Any theory explaining the phenomenon would necessarily have
to make use of dimensional quantities. It would a priori be a matter of
choice, prejudice, or convenience to decide which dimensional quantities
are variable and which are constant (as we mentioned in the
illustration above). There would be a kind of equivalence, or duality between
theories based on any two choices as far as dimensionless observations
are concerned. However, the equations for two theories which are
observationally equivalent, but which have different dimensional
parameters varying, will in general not look the same, and again
simplicity will end up being an important factor in making a choice
between theories. In what follows, we will prefer to work with models
which have the simplicity of ``minimal coupling''.
Let us illustrate this point with a topical example.
There has been a recent claim \cite{webb} of experimental evidence
for a time changing fine structure constant $\alpha=e^2/(4\pi \hbar c)$.
Although the ongoing chase for systematics precludes
any definitive conclusions,
let us assume for the purpose of the argument that the effect is real.
In building a theory which explains a variable $\alpha$
we must make a decision. We could {\it postulate} that electric charge
changes in time, or, say, that $\hbar c$ must change in time.
Bekenstein \cite{bek2} constructs a theory based on the first alternative.
He postulates a Lorentz invariant action, which does not conserve
electric charge.
Our theory is based on the second choice. We postulate breaking
Lorentz invariance, a changing $\hbar c$, and consequently
non-conservation of energy. Any arguments against
the experimental meaning of a changing $c$ can also be directed
at Bekenstein's changing $e$
theory, and such arguments are in both cases meaningless. In both cases
the choice of a changing dimensional ``constant'' reverts to the postulates
of the theory and is not, a priori, an experimental issue. The
observables are always dimensionless.
However, the {\em minimally coupled} theories based on either choice are
{\em not} dual (as we shall point out in Appendix I).
For this reason one might prefer
one formulation over the other.
Finally, and on a different tone,
suppose that future experiments were to confirm that not only $\alpha$
changes in time, but also that there are
time variations in dimensionless coupling constants based on other
interactions,
$\alpha_i=g_i^2/(\hbar c)$\footnote{In writing
these constants
we have assumed that the couplings of these interactions are defined
in terms of ``charges'' (with dimensions of $[E]^{1/2}[L]^{1/2}$). }.
Suppose further that
the ratios between the various
constants, $r_{ij}=\alpha_i/\alpha_j$, were observed to be constant.
Choosing what dimensional constants were indeed constants would still
be a matter of taste.
One could still define a theory in which the various charges
$g_i$ change in time, with fixed ratios, and $\hbar c$ remains constant.
However it would perhaps start to make more sense, merely
for reasons of simplicity, to postulate instead a changing $\hbar c$.
Therefore, even though a variable $c$ cannot be made a dimensionless
statement, evidence in favour of theoretical models with varying $c$ could be
accrued if the other $\alpha_i$ changed, with fixed ratios.
\section{Cosmological horizons}\label{coshor}
Perhaps the most puzzling feature of the SBB is the presence
of cosmological horizons. At any given time any observer
can only see a finite region of the Universe, with comoving radius
$r_h=c\eta$, where $\eta$ denotes conformal time, and
$c$ the speed of light. Since the horizon
size increases with time we can now observe many regions in our past
light cone which are causally disconnected, that is, outside each others'
horizon (see Fig.~\ref{fig1}).
The fact that these regions have the same properties (eg.
Cosmic Microwave Background temperatures equal to within
a few parts in $10^5$) is puzzling
as they have not been in physical contact. This is a mystery one may
simply relegate to the setting up of initial conditions in our Universe.
\begin{figure}
\centerline{\psfig{file=fig1.eps,width=6 cm,angle=-90}}
\caption{Conformal diagram (light at $45^\circ$) showing the
horizon structure in the SBB model. Our past light cone contains
regions outside each others' horizon.}
\label{fig1}
\end{figure}
\begin{figure}
\centerline{\psfig{file=fig2.eps,width=6 cm,angle=-90}}
\caption{Diagram showing the horizon structure in a SBB model
in which at time $t_c$ the speed of light changed from $c^-$
to $c^+\ll c^-$. Light travels at $45^\circ$ after $t_c$
but it travels at a much smaller angle with the space axis before
$t_c$. Hence it is possible for the horizon at $t_c$ to be much
larger than the portion of the Universe at $t_c$ intersecting our
past light cone. All regions in our past have then always been
in causal contact.}
\label{fig2}
\end{figure}
One may however try to explain these very peculiar initial conditions.
The horizon problem is solved by inflationary scenarios by postulating
a period of accelerated or superluminal
expansion, that is, if $a$ is the expansion
factor of the Universe, a period with $\ddot a>0$.
The Friedmann equations
then require that the strong energy condition $\rho + 3p/c^2 \ge 0$
be violated, where $\rho c^2$ and $p$ are the energy density and pressure
of the cosmic matter. This violation is achieved by the inflaton field.
If $\ddot a>0$ for a sufficiently long period one can show
that cosmological horizons are a post-inflation
illusion, and that the whole
observed Universe has in fact been in causal contact
since an early time.
A more minimalistic
way of solving this problem is to postulate that light
travelled faster in the Early Universe. Suppose there was a ``phase
transition'' at time $t_c$ when the speed of light changed from $c^-$ to
$c^+$. Our past light cone intersects $t=t_c$ at a sphere
with comoving radius
$r=c^+ (\eta_0-\eta_c)$, where $\eta_0$ and $\eta_c$ are the conformal
times now and at $t_c$. This is as much of the Universe after the
phase transition
as we can see today~\cite{note2}. On the other hand the horizon size at $t_c$
has comoving radius $r_h =c^-\eta_c$. If $c^-/c^+\gg\eta_0/\eta_c$,
then $r\ll r_h$, meaning that the whole observable Universe today has
in fact always been in causal contact (see Fig.~\ref{fig2}).
Some simple manipulations show
that this requires
\begin{equation}\label{cond1}
\log_{10}{c^-\over c^+}\gg 32 -{1\over 2}\log_{10}z_{eq}+{1\over 2}
\log_{10}{T^+_c\over T^+_P}
\end{equation}
where $z_{eq}$ is the redshift at matter radiation equality, and $T^+_c$
and $T^+_P$ are the Universe and the Planck temperatures after the phase
transition. If $T^+_c\approx T^+_P$ this implies light travelling more
than 30 orders of magnitude faster before the phase transition.
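As a rough numerical illustration, condition (\ref{cond1}) can be evaluated
directly; the fiducial value $z_{eq}\approx 10^4$ and the function name below
are our choices, not part of the model:

```python
import math

# Right-hand side of the horizon condition:
# log10(c-/c+) >> 32 - (1/2) log10(z_eq) + (1/2) log10(Tc+/TP+)
def required_log_c_ratio(z_eq=1e4, Tc_over_TP=1.0):
    return 32 - 0.5 * math.log10(z_eq) + 0.5 * math.log10(Tc_over_TP)

# With z_eq ~ 1e4 and Tc+ ~ TP+, c must drop by >~ 30 orders of magnitude
print(required_log_c_ratio())  # 30.0
```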
It is tempting, for symmetry reasons, simply to postulate that
$c^-=\infty$ but this is not strictly necessary.
\section{A prescription for modifying physical laws while the
speed of light is varying}\label{post}
Hidden in the above argument is the assumption that the
geometry of the Universe is not affected by a changing $c$.
We have allowed a changing $c$ to do the job normally
done by ``superluminal expansion''. To enhance this effect
we have forced the geometry to still be the SBB geometry.
We now elaborate on this assumption.
We will propose a prescription for how, in general, to modify
gravitational laws while $c$ is changing. This
prescription is merely the one we found the most fertile.
In Appendix I we describe in detail a theory which
realizes this prescription.
The basic assumption is that a variable $c$ does not induce
corrections to curvature in the cosmological frame, and that
Einstein's equations, relating curvature to stress energy,
are still valid. The rationale behind this postulate is that
$c$ changes in the local Lorentzian frames associated
with cosmological expansion. The effect
is a special relativistic effect, not a gravitational effect.
Therefore curvature should not feel a changing $c$.
The previous statement is not covariant. However introducing
a function $c(t)$ is not even Lorentz invariant. So it is not
surprising that a favoured gauge, or coordinate choice, must be
made, where the function $c(t)$ is specified, and in which the
above postulate holds true. The cosmological frame
(with the cosmological time $t$) provides such a preferred frame.
In a cosmological setting the proposed postulate implies
that the Friedmann equations remain valid even when $\dot c\neq 0$:
\begin{eqnarray}
{\left({\dot a\over a}\right)}^2&=&{8\pi G\over 3}\rho -{Kc^2\over a^2}
\label{fried1}\\
{\ddot a\over a}&=&-{4\pi G\over 3}{\left(\rho+3{p\over c^2}\right)}
\label{fried2}
\end{eqnarray}
where, we recall, $\rho c^2$ and $p$ are the energy and
pressure densities,
$K=0,\pm 1$ and $G$ the curvature and the gravitational
constants, and the dot denotes a derivative with respect to proper time.
If the Universe is radiation dominated, $p=\rho c^2/3$, and we
have as usual $a\propto t^{1/2}$.
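As a quick sanity check, one can verify numerically that $a\propto t^{1/2}$
satisfies the flat ($K=0$) radiation-dominated Friedmann equation; the units
below (chosen so that $8\pi G/3=1$) are purely illustrative:

```python
# Check that a(t) = t^(1/2) solves the flat, radiation-dominated
# Friedmann equation: (adot/a)^2 proportional to rho, with rho = a^-4.
for t in (1.0, 2.0, 5.0, 10.0):
    a = t ** 0.5
    adot = 0.5 * t ** -0.5
    H2 = (adot / a) ** 2      # = 1/(4 t^2)
    rho = a ** -4             # = 1/t^2
    print(t, H2 / rho)        # the ratio is 0.25 for every t
```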
We have assumed that a frame exists where $c=c(t)$, and identified
this frame with the cosmological frame.
The assumption that Einstein's equations remain unaffected by
decelerating light carries with it an important consequence.
Bianchi identities apply to curvature, as a geometrical identity.
These then imply stress energy conservation as an integrability
condition for Einstein's equations.
If $\dot c\neq 0$, however,
this integrability condition is not stress energy
conservation. Source terms, proportional to $\dot c/c$,
come about in the conservation equations.
Seen in another way, the conservation equations imply an
equation of motion for free falling point particles.
This is normally the geodesic equation,
but now source terms will appear in the geodesic equation.
Clearly a violation of the weak equivalence principle is implied
while $c$ is changing \cite{will}. This, of course, does not
conflict with experiment, as we take $\dot c\neq 0$ only in the Early
Universe, possibly for only a very short time (such as a
phase transition).
Although this is a general remark we shall be concerned mostly
with violations of energy conservation in a cosmological
setting. The Friedmann equations can be combined into a
``conservation equation'' with source terms in
$\dot c/c$ and $\dot G/G$:
\begin{equation}\label{cons1}
\dot\rho+3{\dot a\over a}{\left(\rho+{p\over c^2}\right)}=
-\rho{\dot G\over G}+{3Kc^2\over 4\pi G a^2}{\dot c\over c}
\end{equation}
In a flat Universe ($K=0$) a changing $c$ does not violate
mass conservation. Energy, on the other hand,
is proportional to $c^2$. If,
however, $K\neq 0$ not even mass is conserved.
In Eqn.~\ref{cons1} we have included the effects
of $\dot G$ under the same postulate merely for completeness.
In such a formulation VSL does not reduce to Brans Dicke theory
when $\dot c=0$ and $\dot G\neq 0$.
This is because
we postulate that the Friedmann equations remain unchanged,
which implies that the conservation equations acquire terms in
$\dot c$ and $\dot G$.
In Brans Dicke theory one postulates exactly the opposite:
the conservation equations must still be valid, so that the
weak equivalence principle is satisfied.
While we could have taken this stance
for $c$ as well we feel that violation of energy conservation is the hallmark
of changing $c$. Variable $c$ must break Poincar\'e invariance,
for which energy is the Noether current. Barrow \cite{bdvsl}
has proposed a formulation of VSL which has the correct Brans
Dicke limit.
\section{The flatness puzzle}
We now turn to the flatness puzzle, which can be illustrated as follows.
Let $\rho_c$ be the critical density of the Universe:
\begin{equation}
\rho_c={3\over8\pi G}{\left(\dot a\over a\right)}^2
\end{equation}
that is, the mass density corresponding to $K=0$
for a given value of $\dot a/a$. Let us define
$\epsilon=\Omega-1$ with $\Omega=\rho/\rho_c$. Then
\begin{equation}
\dot\epsilon=(1+\epsilon){\left({\dot\rho\over\rho}-
{\dot\rho_c\over\rho_c}
\right)}
\end{equation}
If $p=w\rho c^2$ (with $\dot w=0$), using
Eqns.(\ref{fried1}), (\ref{fried2}), and
(\ref{cons1}) we have:
\begin{eqnarray}
{\dot\rho\over \rho}&=&-3{\dot a\over a}(1+w)-{\dot G\over G}+
2{\dot c\over c}{\epsilon\over 1+\epsilon}\\
{\dot\rho_c\over \rho_c}&=&-{\dot a\over a}(2+(1+\epsilon)(1+3w))
-{\dot G\over G}
\end{eqnarray}
and so
\begin{equation}\label{epsiloneq}
\dot\epsilon=(1+\epsilon)\epsilon {\dot a\over a}
{\left(1+3w\right)}+2{\dot c\over c}\epsilon
\end{equation}
In the SBB $\epsilon$ grows like $a^2$ in the radiation era,
and like $a$ in the matter era, leading to a total growth by
32 orders of magnitude since the Planck epoch. The observational
fact that $\epsilon$ is at most of order 1
nowadays requires either that $\epsilon=0$
strictly, or that an amazing fine tuning existed in the initial
conditions ($\epsilon<10^{-32}$ at $t=t_P$). This is the flatness puzzle.
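These growth rates follow directly from Eqn.~\ref{epsiloneq} with
$\dot c=0$ and $\epsilon\ll 1$:
\begin{equation}
\dot\epsilon\simeq (1+3w)\,{\dot a\over a}\,\epsilon
\quad\Rightarrow\quad
\epsilon\propto a^{1+3w}
\end{equation}
that is, $\epsilon\propto a^2$ for radiation ($w=1/3$) and
$\epsilon\propto a$ for dust ($w=0$).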
The $\epsilon=0$ solution is in fact unstable for any matter
field satisfying the strong energy condition $1+3w>0$. Inflation
solves the flatness problem with an inflaton field which satisfies
$1+3w<0$. For such a field $\epsilon$ is driven towards
zero instead of away from it. Thus inflation can solve the
flatness puzzle.
As Eqn.~\ref{epsiloneq} shows a decreasing speed of light
($\dot c/c<0$) would also drive $\epsilon$ to 0. If the speed
of light changes in a sharp phase transition, with $|\dot c/c|\gg
\dot a/a$, we can neglect the expansion terms in
Eqn.~\ref{epsiloneq}. Then $\dot\epsilon/\epsilon=2\dot c/c$ so
that $\epsilon\propto c^2$. A short calculation shows that the
condition (\ref{cond1}) also ensures
that $\epsilon\ll 1$ nowadays, if $\epsilon\approx 1$ before the
transition.
The instability of the $K\neq 0$ Universes while $\dot c/c<0$ can be
expected simply from inspection of the non conservation equation
Eq.~(\ref{cons1}). Indeed if $\rho$ is above its critical value,
then $K=1$, and Eq.~(\ref{cons1}) tells us that mass is taken out
of the Universe. If $\rho<\rho_c$, then $K=-1$, and then mass is produced.
Either way the mass density is pushed towards its critical value
$\rho_c$. In contrast with the Big Bang model, during a period
with $\dot c/c<0$ only the $K=0$ Universe is stable.
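A minimal numerical sketch of this behaviour (the 5-decade drop in $c$, the
transition profile, and the step count are arbitrary illustrative choices,
not derived from the model):

```python
# Euler-integrate eps' = 2 (c'/c) eps across a sharp transition,
# neglecting the expansion term (|c'/c| >> a'/a, as in the text).
n = 200000
dt = 1.0 / n
eps = 1.0                               # eps of order 1 before the transition
for i in range(n):
    c0 = 10.0 ** (-5.0 * i * dt)        # c falls by 5 orders of magnitude
    c1 = 10.0 ** (-5.0 * (i + 1) * dt)
    cdot = (c1 - c0) / dt
    eps += 2.0 * (cdot / c0) * eps * dt

# The closed form eps \propto c^2 predicts eps -> (c+/c-)^2 = 1e-10
print(eps)  # ~1e-10
```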
Note that, with the set of assumptions we have used, a changing
$G$ cannot solve the flatness problem
(cf.\cite{robert,jana,turner}).
We have assumed in the previous discussion that we are close,
but not fine-tuned, to flatness before the transition.
It is curious to note that this need not be the case.
Suppose instead that the Universe acquires ``natural initial
conditions'' (eg. $\epsilon\approx 1$) well
before the phase transition occurs. If such Universes
are closed they recollapse before the transition. If they are
open, then they approach $\epsilon=-1$. This is the Milne Universe,
which in our case (constant $G$) may be seen as Minkowski space-time.
Such a curvature dominated Universe is essentially empty, and a coordinate
transformation can transform it into Minkowski space-time. Inflation
cannot save these empty Universes, as can be seen from Eqn.~\ref{epsiloneq}.
Indeed even if $1+3w<0$ the first term will be
negligible if $\epsilon\approx-1$. This is not true for VSL: the
second term will still push an $\epsilon=-1$ Universe towards
$\epsilon=0$.
Heuristically this results from the fact that the violations of
energy conservation responsible for pushing the Universe towards
flatness do not depend on there being any matter in the Universe.
This can be seen from inspection of Eqn.~(\ref{cons1}).
In this type of scenario it does not matter how far before
the transition the ``initial conditions'' are imposed. We
end up with a chaotic scenario in which Darwinian selection gets rid
of all the closed Universes. The open Universes become empty and cold.
In the winter of these Universes a phase transition
in $c$ occurs, producing matter, and leaving the Universe
very fine tuned, indeed as an Einstein-de Sitter Universe (EDSU).
\section{The cosmological constant problem}
There are two types of cosmological constant problems, and
we wish to start our discussion by differentiating them.
Let us write the action as:
\begin{equation}
S=\int dx^4 \sqrt{-g}{\left( {c^4 (R+2\Lambda_1)\over 16\pi G}
+{\cal L}_M + {\cal L}_{\Lambda_2}\right)}
\end{equation}
where ${\cal L}_M$ is the Lagrangian of the matter fields.
The term in $\Lambda_1$ is a geometrical cosmological constant,
as first introduced by Einstein. The term in $\Lambda_2$ represents
the vacuum energy density of the quantum fields \cite{steve}.
Both tend to dominate the energy density of the Universe,
leading to the so-called cosmological constant problem.
However they represent two rather different problems.
We shall attempt to solve the problem associated with
the first, not the second, term.
Usually one hopes that the second term will be cancelled by an
additional counter-term in the Lagrangian. In the rest of
this paper it is the geometrical cosmological constant
that is under scrutiny.
If the cosmological constant $\Lambda\neq 0$ then the
argument in the previous section
still applies, with $\rho=\rho_m+\rho_\Lambda$,
where $\rho_m$ is the mass density in normal matter, and
\begin{equation}\label{enerlamb}
\rho_\Lambda={\Lambda c^2\over 8\pi G}
\end{equation}
is the mass density in the cosmological constant.
One still predicts $\Omega_m+\Omega_\Lambda=1$, with
$\Omega_m=\rho_m/\rho_c$ and $\Omega_\Lambda=\rho_\Lambda/\rho_c$.
However now we also have
\begin{equation}\label{dotLm}
\dot\rho_m+3{\dot a\over a}{\left(\rho_m+{p_m\over c^2}
\right)}=-\dot\rho_\Lambda-\rho{\dot G\over G}+
{3K c^2\over 4\pi G a^2}{\dot c\over c}
\end{equation}
If $\Lambda$ is indeed a constant then from Eq.~(\ref{enerlamb})
\begin{equation}\label{dotL}
{\dot \rho_\Lambda\over \rho_\Lambda}=2{\dot c\over c} -{\dot G
\over G}
\end{equation}
If we define $\epsilon_\Lambda=\rho_\Lambda/\rho_m$
we then find, after some straightforward algebra, that
\begin{equation}\label{epslab}
\dot \epsilon_\Lambda =\epsilon_\Lambda{\left(
3{\dot a\over a}(1+w)+2{\dot c\over c}{1+\epsilon_\Lambda
\over 1+\epsilon}\right)}
\end{equation}
Thus, in the SBB model,
$\epsilon_\Lambda$ increases like $a^4$ in the radiation era,
and like $a^3$ in the matter era,
leading to a total growth by 64 orders of magnitude since the Planck
epoch.
Again it is puzzling that $\epsilon_\Lambda$ is observationally
known to be at most of order 1
nowadays. We have to face another fine tuning problem in the SBB
model: the cosmological constant problem.
If $\dot c=0$ the solution $\epsilon_\Lambda=0$
is in fact unstable for any $w>-1$. Hence violating the strong
energy condition $1+3w>0$ would not solve this problem.
Even in the limiting case $w=-1$ the solution
$\epsilon_\Lambda=0$ is not an attractor: $\epsilon_\Lambda$
would merely remain constant during inflation, then starting to
grow like $a^4$ after inflation.
Therefore inflation cannot ``explain'' the small value
of $\epsilon_{\Lambda}$, as it can with $\epsilon$,
unless one violates the dominant energy condition
$w\ge -1$.
However, as Eqn.~(\ref{epslab}) shows,
a period with $\dot c/ c \ll 0$ would drive $ \epsilon_\Lambda$
to zero. If the speed of light changes suddenly ($|\dot c/c|
\gg \dot a/a$) then we can neglect terms in $\dot a/a$, and so
\begin{equation}
{\dot \epsilon_\Lambda\over \epsilon_\Lambda(1+\epsilon_\Lambda)}
=2{\dot c\over c}{1\over 1+\epsilon}
\end{equation}
which when combined with $\dot\epsilon/\epsilon=2\dot c/c$
leads to
\begin{equation}
{\epsilon_\Lambda\over 1+\epsilon_\Lambda}
\propto {\epsilon \over 1+\epsilon}
\end{equation}
The exact constraint on the required
change in $c$ depends on the initial conditions
in $\epsilon$ and $\epsilon_\Lambda$. In any case once both
$\epsilon\approx 1$ and $\epsilon_\Lambda\approx 1$ we have
$\epsilon_\Lambda\propto c^2$. Then we can solve the
cosmological constant problem in a sudden phase transition
in which
\begin{equation}\label{cond2}
\log_{10}{c^-\over c^+}\gg 64 -{1\over 2}\log_{10}z_{eq}+2\log_{10}
{T^+_c\over T^+_P}
\end{equation}
This condition is considerably more restrictive than (\ref{cond1}),
and means a change in $c$ by more than 60 orders of magnitude,
if $T^+_c\approx T^+_P$.
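As before, condition (\ref{cond2}) can be evaluated numerically (same
fiducial $z_{eq}\approx 10^4$ as above, our choice; the function name is
ours):

```python
import math

# Right-hand side of the cosmological-constant condition:
# log10(c-/c+) >> 64 - (1/2) log10(z_eq) + 2 log10(Tc+/TP+)
def required_log_c_ratio_lambda(z_eq=1e4, Tc_over_TP=1.0):
    return 64 - 0.5 * math.log10(z_eq) + 2.0 * math.log10(Tc_over_TP)

# With z_eq ~ 1e4 and Tc+ ~ TP+: a drop in c by more than ~60 orders
print(required_log_c_ratio_lambda())  # 62.0
```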
Note that once again a period with $\dot G\neq 0$ would not solve
the cosmological constant problem.
Equations (\ref{epsiloneq}) and (\ref{epslab}) are the equations
one should integrate to find conditions for
solving the flatness and
cosmological constant problems for arbitrary initial conditions and
with arbitrary curves $c(t)$. They generalize the conditions
(\ref{cond1}) and (\ref{cond2}), which are valid only
for a starting point with $\epsilon\approx 1$ and
$\epsilon_\Lambda \approx 1$, and for a step function $c(t)$.
As in the case of the flatness problem we do not need to impose
``natural initial conditions'' ($\epsilon_\Lambda\approx 1$)
just before the transition. These could have existed any time
before the transition, and the argument would still go through,
albeit with a rather different overall picture for the history of the
Universe.
If $\epsilon_\Lambda\approx 1$ well before the transition, then
the Universe soon becomes dominated by the cosmological constant.
We have inflation! The curvature and matter will be inflated away.
We end up in a de Sitter Universe. When the transition is about to occur
it finds a flat Universe ($\epsilon=0$), with no matter ($\rho_m=0$),
and with a cosmological constant. If we rewrite Eqn.(\ref{epslab})
in terms of $\epsilon_m=\rho_m/\rho_\Lambda$, for $\epsilon=0$
and $|\dot c/c|\gg \dot a /a$, we have
$\dot \epsilon_m=-2(\dot c/ c)(1+\epsilon_m)$. Integrating
leads to $1+\epsilon_m\propto c^{-2}$.
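Explicitly, separating variables gives
\begin{equation}
{d\epsilon_m\over 1+\epsilon_m}=-2{dc\over c}
\quad\Rightarrow\quad
\ln(1+\epsilon_m)=-2\ln c+{\rm const}
\end{equation}
so that $1+\epsilon_m\propto c^{-2}$ follows at once.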
We conclude that we do not need the presence of any matter in the Universe
for a VSL transition to convert a cosmological
constant dominated Universe into an EDSU full of ordinary
matter. This can be seen
from Eqns.~(\ref{dotLm})-(\ref{dotL}). A sharp decline in $c$ will
always discharge any vacuum energy density into ordinary matter.
We stress the curious point that
in this type of scenario the flatness problem is not
solved by VSL, but rather by the period of inflation
preceding VSL.
\section{The homogeneity of the Universe}\label{homo}
Solving the horizon problem by no means guarantees solving
the homogeneity problem, that is, the uncanny homogeneity of
the currently observed Universe
across many regions which have apparently been causally disconnected.
Although solving the horizon problem is a necessary condition for solving
the homogeneity problem, in a generic inflationary model solving the
first causes serious
problems in solving the latter. Early causal contact between
the entire observed Universe allows
equilibration processes to homogenize the whole observed
Universe. It is crucial to the inflation picture that before
inflation the observable Universe is well inside the Jeans length,
and thus equilibrates toward a homogeneous state.
However no such process is perfect, and small density
fluctuations tend to be left outside the Hubble radius,
once the Universe resumes its
standard Big Bang course. These fluctuations then grow like $a^2$
during the radiation era, like $a$ during the matter era, usually entailing
a very inhomogeneous Universe nowadays. This is a common flaw in
early inflationary models \cite{gupi} which requires additional
fine-tuning to resolve.
In order to approach this problem we study in Appendix II the
effects of a changing $c$ on the theory of scalar cosmological
perturbations \cite{KS}. The basic result is that the comoving
density contrast $\Delta$ and gauge-invariant velocity $v$
are subject to the equations:
\begin{eqnarray}
\Delta'-{\left(3w{a'\over a}+{c'\over c}\right)}\Delta&=&
-(1+w)kv-2{a'\over a}w\Pi_T\label{delcdotm}\\
v'+{\left({a'\over a}-2{c'\over c}\right)}v&=&{\left(
{c_s^2 k\over 1+w} -{3\over 2k}
{a'\over a}{\left({a'\over a}+{c'\over c}\right)}
\right)}\Delta \nonumber \\
+{kc^2w\over 1+w}\Gamma-
&kc&{\left({2/3\over 1+w}+{3\over k^2c^2}
{\left(a'\over a\right)}^2\right)}w\Pi_T\label{vcdotm}
\end{eqnarray}
where $k$ is the wave vector of the fluctuations,
and $\Gamma$ is the entropy production rate, $\Pi_T$
the anisotropic stress, and $c_s$ the speed of sound,
according to definitions spelled out in Appendix II.
In the case of a sudden phase transition Eqn.~(\ref{delcdotm})
shows us that $\Delta\propto c$, regardless of the chosen
equations of state for $\Gamma$ and $\Pi_T$. Hence
\begin{equation}
{\Delta^+\over\Delta^-}={c^+\over c^-}
\end{equation}
meaning that any fluctuations present before the phase
transition are suppressed by a factor of $10^{-60}$ or more if condition
(\ref{cond2}) is satisfied.
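The $\Delta\propto c$ scaling is easy to verify numerically. The sketch below (ours, not the paper's; the profile, width, and rates are illustrative assumptions) integrates $d\ln\Delta/d\eta=3w\,a'/a+c'/c$, i.e. Eqn.~(\ref{delcdotm}) with the $v$ and $\Pi_T$ couplings switched off, across a smoothed step in $c$:

```python
import math

# Illustrative check that a sudden drop in c suppresses Delta by c^+/c^-.
# We integrate d(ln Delta)/d(eta) = 3*w*(a'/a) + c'/c  (the v and Pi_T
# couplings are switched off), with an assumed tanh profile for c(eta);
# all parameter values are made up for this sketch.
w_eos = 1.0 / 3.0              # radiation equation of state
H = 1.0                        # comoving expansion rate a'/a (frozen)
c_minus, c_plus = 1.0, 0.1     # c before/after the transition
width = 2e-3                   # transition width in conformal time

def c_of(eta):
    return c_minus + (c_plus - c_minus) * 0.5 * (1.0 + math.tanh(eta / width))

def dlnc(eta):
    sech2 = 1.0 / math.cosh(eta / width) ** 2
    return (c_plus - c_minus) * 0.5 * sech2 / (width * c_of(eta))

# trapezoidal rule for the integral of d(ln Delta)/d(eta)
eta0, eta1, n = -0.02, 0.02, 100_000
h = (eta1 - eta0) / n
integral = 0.0
for i in range(n):
    fa = 3 * w_eos * H + dlnc(eta0 + i * h)
    fb = 3 * w_eos * H + dlnc(eta0 + (i + 1) * h)
    integral += 0.5 * h * (fa + fb)

ratio = math.exp(integral)     # Delta^+ / Delta^-
print(ratio)                   # close to c^+/c^- = 0.1 for a fast transition
```

The residual few-percent offset from $c^+/c^-$ is the ordinary $3w\,a'/a$ growth accumulated over the finite window; it vanishes as the transition is made faster.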
The suppression of fluctuations induced by a sudden phase transition
in $c$ can be intuitively understood in the same fashion as the solution to
the flatness problem. Mass conservation violation
ensures that only a Universe at critical mass density is stable,
if $\dot c/c\ll 0$. But this process occurs locally, so
after the phase transition the Universe should be left
at critical density {\it locally}. Hence the suppression of
density fluctuations.
We next need to know the initial conditions for $\Delta$ and
$v$. Suppose that at some very early time $t_i$
one has $\dot c/c= 0$ and the whole observable Universe
nowadays is inside the Jeans length: $\eta_0\ll c_i\eta_i/{\sqrt 3}$.
The latter condition is enforced as a byproduct of solving the horizon
problem. The whole observable Universe nowadays is then
initially in a thermal state. What is more, each portion of the Universe
can be described by the canonical ensemble and so the Universe
is homogeneous apart from thermal fluctuations \cite{Peebles}.
These are characterized by the mass fluctuation
\begin{equation}
\sigma^{2}_M={{\langle\delta M ^2
\rangle}\over{\langle M\rangle}^2}={4k_b T_i\over M c_i^2}
\end{equation}
Converted into a power spectrum for $\Delta$ this is a
white noise spectrum with amplitude
\begin{equation}\label{pdelta}
P_\Delta(k)={\langle |\Delta(k)^2|\rangle}\propto
{4k_bT_i\over \rho_ic_i^2}
\end{equation}
What happens to a thermal distribution, its temperature, and its
fluctuations, while $c$ is changing?
In thermal equilibrium the distribution function of particle energies
is the Planck distribution $P(E)=1/(e^{E/k_bT}-1)$, where $T$
is the temperature.
When one integrates over the whole phase space, one obtains
the bulk energy density $\rho c^2\propto (k_b T)^4/(\hbar c)^3$.
Let us now consider the time when the Universe has already
flattened out sufficiently for mass to be approximately
conserved. To define the situation more completely, we
make two additional microphysical assumptions.
Firstly, let mass be conserved also for individual quantum particles,
so that their energies scale like $E\propto c^2$.
Secondly, we assume particles' wavelengths do not change with $c$.
If homogeneity is preserved, indeed the wavelength is an
adiabatic invariant, fixed by a set of quantum numbers,
e.g.\ $\lambda =L/n$ for a particle in a box of size $L$.
Under the first of these assumptions a Planckian distribution with
temperature $T$ remains Planckian, but $T\propto c^2$.
Under the second assumption, we have $\lambda=2\pi \hbar c/E$,
and so $\hbar/ c$ should remain constant. Therefore the phase space
structure is changed so that, without particle production, one still
has $\rho c^2\propto (k_b T)^4/(\hbar c)^3$, with $T\propto c^2$.
A black body therefore remains a black body,
with a temperature $T\propto c^2$. If we combine this effect
with expansion, with the aid of Eqn.~(\ref{cons1}) we have
\begin{equation}\label{temp}
\dot T + T{\left({\dot a\over a}-2{\dot c\over c}\right)}=0
\end{equation}
We can then integrate this equation through the epoch when
$c$ is changing to find the temperature $T_i$ of the initial
state. This fully fixes the initial conditions for scalar
fluctuations, by means of (\ref{pdelta}).
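Eqn.~(\ref{temp}) integrates in closed form to $T\propto c^2/a$. As a quick consistency check, the sketch below (with arbitrary smooth test profiles for $a(t)$ and $c(t)$, not a cosmological model) integrates the equation numerically and compares with the closed form:

```python
import math

# Numerically integrate  dT/dt = T (2 c'/c - a'/a)  (Eqn. (temp) rearranged)
# and compare with the closed form T ∝ c^2/a.  The profiles a(t) and c(t)
# below are arbitrary smooth test functions, not a cosmological model.

def a(t):  return math.sqrt(t)          # radiation-like scale factor
def c(t):  return 2.0 + math.cos(t)     # arbitrary smooth varying c

def rhs(t, T):
    eps = 1e-6                          # central differences for ln-derivatives
    dlnc = (math.log(c(t + eps)) - math.log(c(t - eps))) / (2 * eps)
    dlna = (math.log(a(t + eps)) - math.log(a(t - eps))) / (2 * eps)
    return T * (2 * dlnc - dlna)

# classic fourth-order Runge-Kutta from t = 1 to t = 5
t, T, h = 1.0, 1.0, 1e-3
while t < 5.0 - 1e-12:
    k1 = rhs(t, T)
    k2 = rhs(t + h / 2, T + h * k1 / 2)
    k3 = rhs(t + h / 2, T + h * k2 / 2)
    k4 = rhs(t + h, T + h * k3)
    T += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h

T_exact = 1.0 * (c(5.0) / c(1.0)) ** 2 * (a(1.0) / a(5.0))
print(T, T_exact)   # the two agree to high accuracy
```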
In the case of a sudden phase transition we have $T^+=
T^- c^{2+}/c^{2-}$, and so
\begin{equation}
\sigma_M^{2-}={4k_b T^-\over M c^{2-}}={4k_b T^+\over M c^{2+}}
\end{equation}
or
\begin{equation}
\Delta^-(k)^{2}\approx {4k_bT^+\over \rho^+ c^{2+}}
\end{equation}
but since $\Delta\propto c$ we have
\begin{equation}
\Delta^+(k)\approx {\sqrt {4k_bT^+\over \rho^+ c^{2+}}}{c^+\over c^-}
\end{equation}
Even if $T^+=T_P^+=10^{19}\,{\rm GeV}$ these fluctuations would still be
negligible nowadays. Therefore although the Universe ends up in a
thermal state after the phase transition, its thermal fluctuations,
associated with the canonical ensemble, are strongly suppressed.
For a more general $c(t)$ function the procedure is as follows.
Integrate Eqn.~(\ref{temp}) backwards up to a time $t_i$ when
$\dot c=0$, to find $T(t_i)$. Give $\Delta(t_i)$ a thermal spectrum of
fluctuations, according to (\ref{pdelta}), with $T(t_i)$.
With this initial condition integrate Eqns.(\ref{delcdotm})
and (\ref{vcdotm}) (or even better the second order equation
(\ref{deltacddot}) given in Appendix II), to find
$\Delta$ nowadays.
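The last step of this recipe can be sketched numerically. The fragment below (our illustration, with $\Gamma=\Pi_T=0$, $w=1/3$, $a\propto\eta$, and an assumed narrow $\tanh$ step in $c$; every number is an illustrative assumption) integrates the coupled system and reproduces the sudden-transition result $\Delta^+/\Delta^-\approx c^+/c^-$:

```python
import math

# Integrate the coupled (Delta, v) system of Eqns. (delcdotm)-(vcdotm)
# with Gamma = Pi_T = 0, w = 1/3, a ∝ eta, through a narrow step in c.
# All numbers below are illustrative assumptions for this sketch.
w = 1.0 / 3.0
k = 1.0
c_m, c_p = 1.0, 0.5            # modest drop in c, centred at eta_c
eta_c, s = 1.0, 5e-4

def c(eta):
    return c_m + (c_p - c_m) * 0.5 * (1.0 + math.tanh((eta - eta_c) / s))

def dlnc(eta):
    sech2 = 1.0 / math.cosh((eta - eta_c) / s) ** 2
    return (c_p - c_m) * 0.5 * sech2 / (s * c(eta))

def rhs(eta, D, v):
    aH = 1.0 / eta             # a'/a for a ∝ eta (radiation era)
    dc = dlnc(eta)
    cs2 = w * c(eta) ** 2 * (1.0 - (2.0 / 3.0) / (1.0 + w) * dc / aH)  # Eqn. (cs)
    dD = (3 * w * aH + dc) * D - (1 + w) * k * v
    dv = -(aH - 2 * dc) * v + (cs2 * k / (1 + w)
          - (3.0 / (2 * k)) * aH * (aH + dc)) * D
    return dD, dv

# classic RK4 across a window straddling the transition
D, v, eta, h = 1.0, 0.3, 0.99, 1e-5
while eta < 1.01 - 1e-12:
    d1, v1 = rhs(eta, D, v)
    d2, v2 = rhs(eta + h/2, D + h*d1/2, v + h*v1/2)
    d3, v3 = rhs(eta + h/2, D + h*d2/2, v + h*v2/2)
    d4, v4 = rhs(eta + h, D + h*d3, v + h*v3)
    D += h * (d1 + 2*d2 + 2*d3 + d4) / 6
    v += h * (v1 + 2*v2 + 2*v3 + v4) / 6
    eta += h

print(D)   # close to (c^+/c^-) * Delta_initial = 0.5
```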
It is conceivable that a careful design of $c(t)$
would leave fluctuations, once $\dot c=0$ again,
with the right amplitude and spectrum to explain structure formation.
In particular $c(t)$ may be designed so as to convert a white noise
spectrum into a scale-invariant spectrum. However we feel that
until a mechanism for inducing $c(t)$ is found such efforts
are bound to look ludicrously contrived.
We feel that the power of VSL scenarios is precisely in leaving the
Universe very homogeneous, after $c$ has stopped changing. This would
then set the stage for causal mechanisms of structure formation
to do their job \cite{vs,aa}.
\section{The isotropy of the Universe}
There is a sense in which there is an isotropy problem
in the SBB model, similar to the homogeneity problem.
We follow closely the remark made in \cite{KS}, p.~26.
In Appendix~III we write down the vector Einstein's equations
in the vector gauge,
and from them we derive the vorticity ``conservation'' equation when
$\dot c/c\neq 0$. If $v$ is the vorticity (defined in Appendix III) and
$\Pi^T$ the vector stress, we have:
\begin{equation}
v'+(1-3w){a'\over a}v-2{c'\over c}v=-{kc\over 2}{w\over 1+w}\Pi^T
\end{equation}
In the absence of driving stress, $v$
remains constant during the radiation dominated epoch, and
decays like
$1/a$ in the matter epoch. In \cite{KS} it is further argued
that the relevant dimensionless quantity is
\begin{equation}
\omega={(k/ a)v \over(a'/ ca)}\propto {1\over a^{(1-9w)/ 2}}
\end{equation}
Hence for $w>1/9$
vorticity grows, leading to a further fine tuning problem.
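The quoted scaling can be checked by simple exponent bookkeeping (a sketch for constant $c$, using $v\propto a^{3w-1}$ from the vorticity equation and $\dot a\propto a^{-(1+3w)/2}$ from the Friedmann equation):

```python
from fractions import Fraction as F

# Exponent bookkeeping for omega = k c v / adot (constant c):
#   v ∝ a^{3w-1}          solves v' + (1-3w)(a'/a) v = 0,
#   adot ∝ a^{-(1+3w)/2}  from the Friedmann equation, a ∝ t^{2/(3(1+w))},
# so omega ∝ a^{(9w-1)/2}: vorticity grows for w > 1/9.
def omega_exponent(w):
    v_exp = 3 * w - 1
    adot_exp = -(1 + 3 * w) / 2
    return v_exp - adot_exp

# check the claimed exponent for several equations of state
for w in (F(0), F(1, 9), F(1, 3), F(1)):
    assert omega_exponent(w) == (9 * w - 1) / 2

print(omega_exponent(F(1, 3)))   # 1: omega grows like a in the radiation era
```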
This is most notably a problem if we accept the Planck
equipartition proposal, introduced in \cite{barrow}.
At Planck epoch there would then be a significant
vorticity. Depending on how one looks at it, this vorticity
would then get frozen in or grow, leading to a very anisotropic
Universe nowadays.
Whether or not this is a problem is clearly debatable.
In any case either inflation or VSL models could
solve this prospective problem. For $w<-1/3$ we have that
$v$ decays faster than $1/a^2$. Whatever dimensionless
quantity one chooses to look at, vorticity is therefore
safely inflated away. If $\dot c/c\neq 0$ we have that
$v\propto c^2$. Again any primordial vorticity is safely
suppressed after a phase transition in $c$ satisfying
any of the conditions (\ref{cond1}) or (\ref{cond2}).
\section{The entropy problem and setting the initial conditions}
Let us first consider the SBB model.
Let $S_h$ be the entropy inside the horizon, and $\sigma_h=S_h/k_B$
be its dimensionless counterpart. $\sigma_h$ is of order
$10^{96}$ nowadays. If we assume that the only scales in the cosmological
model are the ones provided by the fundamental constants, then at $t_P$ the
temperature is $T_P$. At Planck time, $\sigma_h$ (being dimensionless)
is naturally of order 1. In the SBB model the horizon distance is $d_h=2t$
in the radiation dominated epoch,
and ignoring mass thresholds $t\propto 1/T^2$. If evolution is adiabatic
one then has (in a flat Universe)
\begin{equation}\label{hbbs}
\sigma_h(t)\approx\sigma_h(t_P){\left(T_P\over T\right)}^3.
\end{equation}
Since $\sigma_h(t_P)\sim 1$, one has $\sigma_h(t_0)\sim 10^{96}$.
Thus the large entropy inside the horizon nowadays is a reflection of
the lack of scales beyond the ones provided by
the fundamental constants, the fact that the horizon size
is much larger nowadays than at Planck time, and the flatness of the Universe.
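Plugging rough standard values into Eqn.~(\ref{hbbs}) recovers the quoted figure to within an order of magnitude:

```python
import math

# Order-of-magnitude check of sigma_h(t_0) ≈ sigma_h(t_P) (T_P/T_0)^3,
# using rough illustrative values for the Planck and CMB temperatures.
T_P = 1.4e32        # Planck temperature in kelvin (approximate)
T_0 = 2.7           # CMB temperature today in kelvin
sigma_0 = (T_P / T_0) ** 3
print(round(math.log10(sigma_0)))   # ~95, i.e. sigma_h(t_0) ~ 10^96 up to O(1) factors
```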
One may rephrase the horizon and flatness problems in terms of entropy
\cite{infl1}. However if one is willing to accept the horizon
structure and flatness of the Universe simply as features of the initial
conditions (rather than problems), there is no additional entropy problem.
There {\em is} a problem that arises if one tries to solve the horizon problem,
keeping the adiabatic assumption, by means of superluminal expansion.
This blows what at Planck time
is a region much smaller than the Planck size into a comoving
region containing the whole observable Universe nowadays.
This solves the horizon problem. However
if evolution is adiabatic such a process implies that
$\sigma_h(t_0)\ll 1$. Stated in another way,
since the number of particles inside the horizon $n_h$ is of the same
order as $\sigma_h$, this implies an empty Universe nowadays.
More mathematically, if $d_h$ is the horizon proper distance,
one has
\begin{equation}\label{entropy}
{\dot\sigma_h\over \sigma_h}={3\over d_h}
\end{equation}
where we have used $d_h=a\int^tdt'/a$. With any standard matter
($p>-\rho c^2/3$) the horizon grows like $t$. Accordingly $\sigma_h$
grows like a power of $t$.
On the other hand
the horizon grows faster than $t$ if $p<-\rho c^2/3$:
it grows exponentially if $p=-\rho c^2$, and like $t^n$ (with $n>1$)
for $-\rho c^2<p<-\rho c^2/3$. This provides the inflationary solution to
the horizon problem. However in the latter case Eqn. (\ref{entropy})
implies that $\sigma_h$ decreases exponentially, leading to $\sigma_h(t_0)
\ll 1$.
The way inflation bypasses this problem is by dropping the adiabatic
assumption. Indeed during inflation the Universe supercools, and a period
of reheating follows the end of inflation\footnote{
This issue has been carefully analyzed in the context of
inflationary models and models with time varying $G$ in \cite{turner}
}.
In VSL scenarios the detailed solution to the entropy problem
depends on when and what type of ``natural conditions''
are given to the pre transition Universe.
We first derive equations for the entropy under varying $c$.
From $s=(4/3)\rho c^2/T$, $\rho\propto T^4/(\hbar c)^3$, and
from Eqns.~(\ref{cons1}) and (\ref{dotLm})
we obtain that the entropy of radiation satisfies
\begin{equation}\label{dots}
{\dot s\over s}={3\over 4}{\dot\rho\over\rho}
= -3{\dot a\over a} +{3\over 2}{\dot c\over c}{\epsilon
(1+\epsilon_\Lambda)
\over 1+\epsilon}-{3\over 2}{\dot c\over c}\epsilon_\Lambda
\end{equation}
If the Universe is EDSU, there are no violations of mass conservation,
and entropy is conserved. However if the Universe is open or has a positive
cosmological constant, then we have seen that there is creation of mass.
Accordingly there must be creation of particles, and entropy is produced.
If the Universe is closed, particles are taken away, and the entropy decreases.
The most delicate case is therefore the one in which the Universe was
Einstein-de Sitter before the phase transition. Let us then assume that
at $t=t_P^-$ (the Planck time with the constants before the transition)
the entropy inside the horizon (which has proper size $c^-t_P^-$)
was of order 1. Then the entropy inside the Hubble volume at $t=t_P^+$,
before and after the transition, is
\begin{equation}
\sigma_h(t_P^+)=\sigma_h(t_P^-)
{\left( c^+t_P^+\over c^- t_P^-\right)}^3{\left(
a(t_P^-)\over a(t_P^+)\right)}^3\approx 1
\end{equation}
where we have used $t_P^+/t_P^-=(c^-/c^+)^2$.
One takes a fraction $(c^+/c^-)^3$
of the horizon volume before the transition to make the Hubble
volume after the transition. However the entropy inside the horizon
has increased since $t_P^+$ by the same factor. Therefore entropy
conservation in this case does not conflict with $\sigma_h(t_P^+)
\approx 1$ after the transition. One way of understanding this is that by
imposing flatness from the outset (before the transition) one has already
``solved'' the entropy problem. Notice that the above argument
works for any value of $t_c/t_P^+$.
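The cancellation asserted here is easy to verify explicitly; in the sketch below (illustrative numbers) the horizon and dilution factors cancel for any ratio $c^-/c^+$:

```python
# Bookkeeping sketch for the EDSU pre-transition case: with
# t_P^+/t_P^- = (c^-/c^+)^2 and a ∝ t^{1/2} (radiation), the horizon and
# dilution factors cancel for ANY c ratio.  Values below are illustrative.
c_minus, c_plus = 1.0, 1e-2
t_P_minus = 1.0
t_P_plus = t_P_minus * (c_minus / c_plus) ** 2

horizon_factor = (c_plus * t_P_plus / (c_minus * t_P_minus)) ** 3
dilution = (t_P_minus ** 0.5 / t_P_plus ** 0.5) ** 3   # (a^-/a^+)^3, a ∝ t^{1/2}
sigma_plus = 1.0 * horizon_factor * dilution           # sigma_h(t_P^-) = 1
print(sigma_plus)   # 1.0 regardless of the chosen c ratio
```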
Now consider the case where ``natural'' initial conditions were
also imposed at $t_P^-$, with $\Lambda=0$. One should
have $\epsilon(t_P^-)$ of order 1. We have already discussed how the flatness
problem is solved in this case, when large empty curvature dominated
universes are filled with a (nearly perfectly) critical energy density
during the transition.
Open Universes become very empty, but they are still pushed
to EDSU at the transition. One may integrate (\ref{dots}) to find
that $s^+/s^-=(1+\epsilon^-)^{-3/4}$. One may also use Eqn.~(\ref{epsiloneq})
to find that $\epsilon$ has evolved since $t_P^-$ to
$\epsilon^-(t_P^+)+1\approx (a(t_P^-)/a(t_P^+))^2\approx (t_P^-
/t_P^+)^2$, where we have used $a\propto t$ for the Milne Universe.
Hence we have that during the transition entropy is produced
like $s^+/s^-=(t_P^+/t_P^-)^{3/2}=(c^-/c^+)^3$. Given that $a\propto t$
for such Universes, the entropy before the transition in the
proper volume of size $c^+t_P^+$ is
\begin{equation}
S^-(c^+t_P^+)=
{\left( c^+t_P^+\over c^- t_P^-\right)}^3{\left(
a(t_P^-)\over a(t_P^+)\right)}^3\approx {\left( c^+\over c^-\right)}^3
\end{equation}
that is, there is practically no entropy in the relevant volume before
the transition. However we have that after the transition
\begin{equation}
\sigma_h(t_P^+)=S^+(c^+t_P^+)=
S^-(c^+t_P^+){\left( c^-\over c^+\right)}^3\approx 1
\end{equation}
In such scenarios the Universe is rather cold and empty before
the transition. However the transition itself reheats the Universe.
Notice that, like in the first case discussed, the above argument
works for any value of $t_c/t_P^+$.
If at $t=t_P^-$ one also has $\epsilon_\Lambda\approx 1$ then we have a
scenario in which the cosmological constant dominates, solves
the flatness problem, and is discharged into normal matter. However
if $\rho_\Lambda\approx\rho_P^-$ at $t\approx t_P^-$,
then whatever the transition
time, after the transition the Universe will have a density in normal
matter equal to $\rho_m=\rho_P^-$. Hence the Hubble time after the
transition will be $t_P^-$, whatever the actual age of the Universe.
One may integrate (\ref{dots}) to find that in this case (setting
$\epsilon=0$) the entropy production during the transition is
$s^+/s^-=(1+\epsilon_\Lambda^-)^{3/4}$. In the period between
$t=t_P^-$ and the transition, $\epsilon_\Lambda$
increases like $a^4$, and the entropy density is diluted
like $1/a^3$. Hence after the transition the entropy density is
what it was at $t=t_P^-$, that is $s^+\approx 1/L_P^{-3}$.
If we now follow the Universe
until its Hubble time is $t_P^+$ (when its density is $\rho_P^+$)
we must wait until the expansion factor has increased by a factor
of $(\rho_P^+/\rho_P^-)^{1/4}$. Given that $s\propto 1/a^3$
the entropy density is diluted by a factor of $(\rho_P^+/\rho_P^-)^{3/4}$.
Therefore the entropy density when the Hubble time is $t=t_P^+$
is $s\approx 1/L_P^{+3}$. Again the dimensionless entropy inside the
Hubble volume, when this has size $L_P^+$, is of order 1.
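This bookkeeping can be checked by tracking how the Planck quantities scale with $c$ when $\hbar\propto c$ (as in Postulate 1 of Appendix I) and $G$ is held fixed; the sketch below uses illustrative units:

```python
# Scaling sketch: with hbar ∝ c and G fixed,
#   L_P = sqrt(hbar G/c^3) ∝ 1/c,   t_P = sqrt(hbar G/c^5) ∝ 1/c^2,
#   rho_P = c^5/(hbar G^2) ∝ c^4,
# so diluting s^+ ≈ 1/L_P^{-3} by (rho_P^+/rho_P^-)^{3/4} lands exactly
# on 1/L_P^{+3}.  Units and values are illustrative (G = 1, hbar = c).
G = 1.0
def hbar(c):  return c
def L_P(c):   return (hbar(c) * G / c**5 * c**2) ** 0.5   # sqrt(hbar G / c^3)
def t_P(c):   return (hbar(c) * G / c**5) ** 0.5
def rho_P(c): return c**5 / (hbar(c) * G**2)

c_minus, c_plus = 1.0, 1e-2

# consistency with the text: t_P^+/t_P^- = (c^-/c^+)^2
assert abs(t_P(c_plus) / t_P(c_minus) - (c_minus / c_plus) ** 2) < 1e-6

s_start = 1.0 / L_P(c_minus) ** 3                         # s^+ ≈ 1/L_P^{-3}
s_final = s_start * (rho_P(c_plus) / rho_P(c_minus)) ** 0.75
print(s_final / (1.0 / L_P(c_plus) ** 3))   # 1.0: ends at 1/L_P^{+3}
```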
Finally it is worth noting that treating the pre-transition universe
simply as a Robertson-Walker model is no doubt overly simplistic, and we use
it merely as a device for introducing our ideas. We expect that further
development of these ideas could result in a radically different view of the
pre-transition phase (much as has happened with the inflationary scenario).
One interesting observation is that one could avoid having multiple Planck
times by considering that $G\propto c^4$. Such an assumption would
not conflict with the dynamics of flatness and $\Lambda$, as shown before,
but now $t_P^-=t_P^+$.
\section{Conclusions}
We have shown how a time varying speed of light could
provide a resolution to the well known cosmological puzzles. These
``VSL'' models could provide an alternative to the standard
Inflationary picture, and furthermore
resolve the classical cosmological constant puzzle.
At a technical level, the proposed VSL picture is not nearly as
well developed as the inflationary one, and one purpose of this
article is to stimulate further work on the unresolved technical
issues.
We are not trying to take an ``anti-inflation'' stand, but we do
strongly feel that broadening the range of possible models of the
very early Universe would be very healthy for the field of cosmology,
and would ultimately allow us to state in more concrete terms the
extent to which one model is preferred.
On a more fundamental level we hope to expand the phenomenological
approach presented in this paper into a theory where the concept
of (Poincare) symmetry breaking provides the physical basis for
VSL. Symmetry breaking is also the central ingredient in
causal theories of structure formation. We therefore hope to
arrive at a scenario where symmetry breaking provides a complete
and consistent complement to the SBB model which can resolve the
standard puzzles as well as explain the origin of cosmic structure.
{\bf Note added in proof:}
After this paper and its sequel \cite{sq1} were completed
J. Moffat brought to our attention
two papers in which he proposes a similar idea \cite{mof}. While
Moffat's work does not go as far as ours in addressing the
flatness, cosmological constant, and entropy problems, he does
go considerably further than we have in terms of specific
model building. Moffat's model does not satisfy our
prescription for solving the cosmological problems, but it may do so
in a modified form. We are currently investigating this possibility.
We regret that because we were unaware of this work we did not cite it
in the first publicly distributed version of this paper.
\section*{Acknowledgements} We would like to thank John Barrow,
Ruth Durrer, Robert
Brandenberger, Gary Gibbons, and Erick Weinberg for discussion.
We acknowledge support from
PPARC (A.A.) and the Royal Society (J.M.).
\section*{Appendix I: A specific realization of VSL}
In this Appendix we set up a specific VSL theory.
We first discuss the simple case of the electrodynamics of the
point particle in Minkowski space time. We start from Bekenstein's
theory of variable $\alpha$, and show how a VSL alternative
could be set up. We highlight the subtleties encountered in the VSL
formulation. We then
perform the same exercise with the Einstein-Hilbert action.
We briefly consider the
dynamics of the field $\psi=c^4$. Finally we cast the
key elements of our construction into a body of axioms.
\subsection{Electrodynamics in flat space time}
A changing $\alpha$ theory was proposed by Bekenstein
\cite{bek2} based on the
postulate of Lorentz invariance. The electrodynamics of a point
particle was first analyzed. If Lorentz invariance
is to be preserved then the particle mass $m$ and its charge
$e$ must be variable. In order to preserve ``minimal coupling'' (reduction
to standard electromagnetism when $\alpha={\rm const}$) one chooses
the world line action
\begin{equation}\label{emp}
L=-mc{\sqrt{-u^\mu u_\mu}} +{e\over c}u^\mu A_\mu
\end{equation}
with $u^\mu=\dot x^\mu$, $g_{\mu\nu}=\eta_{\mu\nu}$,
$e=e(x^\mu)$, and $m=m(x^\mu)$. Minimal coupling means simply
to take the standard action and replace $e$ and $m$ by variables
without breaking Lorentz invariance. $e$ and $m$ must then be scalar
functions.
This action leads to the equation of motion:
\begin{equation}
{d\over d\tau}{\left(m\dot x_\mu\right)}=-m_{,\mu}c^2 +{e\over c} u^\nu F_{\mu\nu}
\end{equation}
with the electromagnetic field tensor defined as
\begin{equation}
F_{\mu\nu}={1\over e}(\partial_\mu(eA_\nu )-\partial_\nu(eA_\mu ))
\end{equation}
The electromagnetic action can therefore be defined as:
\begin{equation}
S_{EM}={-1\over 16\pi}\int d^4 x F_{\mu\nu}F^{\mu\nu}
\end{equation}
Also, the particle action ($\ref{emp}$) may be written
as a Lagrangian density:
\begin{equation}
S_M=\int d^4x {\delta^{(3)}({\bf x}-{\bf x}(\tau))\over \gamma}
(-mc^2+(e/c)u^\mu A_\mu)
\end{equation}
in which $\gamma$ is the Lorentz factor.
Maxwell's equations are then:
\begin{equation}
e\partial_\mu(F^{\mu\nu}/e)=4\pi j^\nu
\end{equation}
with the current
\begin{equation}
j^\mu={\delta^{(3)}({\bf x}-{\bf x}(\tau))\over \gamma}{eu^\mu\over c}
\end{equation}
This current\footnote{
There is an alternative view in which rather than a changing $e$
one considers that the vacuum is a dielectric medium with variable
$\epsilon$. One may then identify a conserved charge, but this
is not the charge which couples to the gauge field.}
is the current which couples to the gauge field,
and in the rest frame it equals $e$. Therefore it cannot be conserved,
and indeed we have that
\begin{equation}
\partial_\mu j^\mu={j^\mu\over e^2}\partial_\mu e
\end{equation}
Let us now postulate instead that a changing $\alpha$ is to be interpreted
as $c\propto\hbar\propto\alpha^{-1/2}$, and that $e$ and $m$ are to be seen
as constants. Minimal coupling, in the above sense, would then prompt
us to consider the action ($\ref{emp}$), but with $c=c(x^\mu)$ everywhere,
and $e$ and $m$ constants. This action leads
to equations:
\begin{equation}\label{eq1}
m\ddot x_\mu={1\over 2}(mc^2)_{,\mu}+ {e\over c} u^\nu F_{\mu\nu}
\end{equation}
with the electromagnetic tensor defined as
\begin{equation}\label{eq2}
F_{\mu\nu}=c(\partial_\mu(A_\nu/c )-\partial_\nu(A_\mu/c ))
\end{equation}
However the above construction is not complete.
In spite of the appearance of Eqns.~($\ref{emp}$), ($\ref{eq1}$),
and ($\ref{eq2}$), Lorentz invariance is broken. This boils down
to the fact that, say, $\partial_\mu$ is no longer a 4-vector. Even
if $c$ were to be regarded as a scalar, $\partial_\mu$ would contain
$c$ in its zero component, but not in its spatial components.
The usual contractions leading to $S$ could still be taken
but $S$ would no longer be a scalar. This manifests itself in
the equations ($\ref{eq1}$) in the fact that in $\ddot x^\mu$
there are terms in $\partial c$ which
break Lorentz invariance.
Since the action is not Lorentz invariant, a minimal coupling prescription
cannot possibly be true in every coordinate system.
Minimal coupling is now the statement that there
is a preferred reference frame in which the action is to be obtained from
the standard action simply by replacing $c$ with a field. Let us call this
frame the ``light frame''.
In regions in which $c$ changes very little, changes in the action upon
Lorentz transformations are negligible. Hence all boosts performed upon
the light frame become nearly equivalent and Lorentz invariance is recovered.
The Maxwell equations in a VSL theory become
\begin{equation}\label{max2}
{1\over c}\partial_\mu(cF^{\mu\nu})=4\pi j^\nu
\end{equation}
in the light frame.
Given that Lorentz invariance is broken, one can no longer expect the
general expression for a conserved current to take the form
$\partial_\mu j^\mu=0$.
Indeed one could try to compute $\partial_\nu$ of equations (\ref{max2}),
but now $\partial_\mu$ and $\partial_\nu$ do not commute. Also their
commutator is not Lorentz invariant: for instance $[\partial_0,
\partial_i]=(-\partial_i c/c^2)\partial_0$.
Still, $\partial_\mu j^\mu=0$ holds in the light frame. It is just
that this expression transforms into something more complicated in
other frames. The more complicated expression would still place
constraints on the theory, which could still be called ``conservation
of charge''.
\subsection{Minimal coupling to gravity}
Let us now examine gravity in such a theory\footnote{
Gravitation is normally
regarded as the gauge theory of the Poincare group\cite{tomk}.
Here we simply abandon
this point of view. In some future work we will try to define
a gauge principle for broken symmetries, thereby recovering the
standard view}. As in the previous
case we will impose a minimal coupling principle. Working in
analogy with Brans-Dicke theory, let us define a field $\psi=c^4$,
and introduce the following action
\begin{equation}\label{s}
S=\int d^4x{\left( \sqrt{-g}{\left( {\psi (R+2\Lambda)\over 16\pi G}
+{\cal L}_M\right)} +{\cal L}_\psi \right)}
\end{equation}
The dynamical variables are a metric $g_{\mu\nu}$, any matter field
variables contained in ${\cal L}_M$, and $\psi$ itself.
The Riemann tensor (and the Ricci scalar) is to be computed from
$g_{\mu\nu}$ at constant $\psi$ in the usual way.
As in the previous section covariance is broken, in spite of all
appearances. $\psi$ does not appear in coordinate transformations
of the metric, and so the connection $\Gamma^\alpha_{\mu\nu}$ does not
contain terms in $\nabla \psi$ in any frame. However the connection will
contain different terms in $\psi$ in different frames. Hence the statement
that the Riemann tensor is to be computed from the metric at constant
$\psi$ can only be true in one preferred frame. Minimal coupling
requires the definition of a light frame. The action (\ref{s}) is only
Lorentz invariant in appearance.
Varying the action with respect to the metric leads to:
\begin{eqnarray}\label{var}
{\delta S\over \delta g^{\mu\nu}}&=&{\sqrt{-g}\psi\over 8\pi G}
[G_{\mu\nu}-g_{\mu\nu}\Lambda]\\
{\delta S_M\over \delta g^{\mu\nu}}&=&-{\sqrt{-g}\psi\over 8\pi G}
T_{\mu\nu}
\end{eqnarray}
leading to a set of Einstein's equations without any extra terms
\begin{equation}
G_{\mu\nu}-g_{\mu\nu}\Lambda = {8\pi G\over \psi} T_{\mu\nu}
\end{equation}
valid in the light frame. This is the way we chose to phrase our
postulates in Section~III of our paper. In other words all we need
is minimal coupling at the level of Einstein's equations.
The fact that a favoured set of coordinates is picked by our action
principle is not surprising as Lorentz invariance is broken. On
the other hand notice that the dielectric vacuum of Bekenstein theory
is an ether theory. His theory also breaks Lorentz invariance, not at the level
of the laws of physics, but in the form of the contents of space-time.
In changing $\alpha$ theories a favoured frame is always picked up.
In a cosmological setting it makes sense to identify this frame with
the cosmological frame. Free falling observers comoving with the
cosmological flow define a proper time and a set of spatial coordinates,
to be identified with the light frame. In this frame the Einstein-Hilbert
action is minimally coupled to a changing $\psi$, and the same happens to
Friedmann equations. The rest of our paper follows.
\subsection{The dynamics of $\psi$}
The definition of ${\cal L}_\psi$ controls the dynamics of $\psi$.
This is the most speculative aspect of our theory, but it also opens
the doors to empirical model building. In our paper we preferred
a scenario in which $c$ changes in an abrupt phase transition, but
one could also imagine $c\propto a^n$.
The latter scenario would result from a Brans-Dicke type of Lagrangian
\begin{equation}
{\cal L}_\psi={-\omega\over 16\pi G\psi}\dot\psi^2
\end{equation}
(where $\omega$ is a dimensionless coupling)
and is being investigated. Addition of a temperature-dependent
potential $V(\psi)$ would induce a phase transition, as in the scenario
developed in our paper.
However here we only make the following remarks, which are independent
of any concrete choice of ${\cal L}_\psi$. If $K=\Lambda=0$ one has
\begin{equation}
{\delta {\cal L}_\psi\over \delta \psi}={\sqrt{-g} T\over 4\psi}
\end{equation}
and so in the radiation dominated epoch ($T=0$),
once $K=\Lambda=0$, one should
not expect driving terms for the $\psi$ equation. Hence once the cosmological
problems are solved, in the radiation epoch, $c$ and $\hbar$
should be constants.
Incidentally, once
the matter dominated epoch ($T\neq0$) is reached, $\psi$ should
perhaps start changing again, with interesting observational
consequences \cite{webb}.
We are studying the phase space portraits of these cosmologies, when
say $\Lambda\neq0$, and with various ${\cal L}_\psi$.
During phase transitions the perfect fluid approximation must break down.
One should then use, say, scalar field theory (let's call it $\phi$).
Now notice that terms in $\dot\phi$ will act as a source to $\psi$
(as they contain the speed of light). Hence whenever there is a
phase transition and the VEV of a field changes by a large amount,
one may expect a large change in the speed of light, with most choices
of ${\cal L}_\psi$. A changing $\psi$ associated with SSB could then solve
the quantum version of the cosmological constant problem, but this might
require a rather contorted choice of ${\cal L}_\psi$.
\subsection{Axiomatic formulation of VSL theories}
{\bf Postulate 1.} {\it A changing $\alpha$ is to be interpreted as
a changing $c$ and $\hbar$ in the ratios $c\propto\hbar\propto
\alpha^{-1/2}$. The coupling $e$ is constant.}
This postulate merely sets up the theoretical interpretation of
the possible experimental fact that $\alpha$ changes, in terms of variable
dimensional quantities. This is a matter of convention and not
experiment, as much as
a constant $\hbar c$ is a matter of convention. With the above choice
a system of units for mass, length, time, and temperature is unambiguously
defined.
{\bf Postulate 2.} {\it There is a preferred frame for the laws of physics.
This preferred frame is normally suggested by the symmetries of the
problem, or by a criterion such as $c=c(t)$.}
If $c$ is variable, Lorentz invariance must be broken. Even if one
writes Lorentz invariant looking expressions these do not transform
covariantly. In general this boils down to the explicit presence of $c$
in the operator $\partial_\mu$. Once one admits that Lorentz invariance
must be explicitly broken then a preferred frame must exist to formulate
the laws of physics. These laws are not invariant under frame
transformation, and one may expect that a preferred frame exists
where these laws simplify.
{\bf Postulate 3.} {\it In the preferred frame one may obtain the laws of
physics simply by replacing $c$ in the standard (Lorentz invariant)
action, wherever it occurs,
by a field $c=c(x^\mu)$.}
This is the principle of minimal coupling. Because the laws of physics
cannot be Lorentz invariant, this principle
will not hold in every frame.
Hence the
application of this postulate depends crucially on the previous postulate
supplying us with a favoured frame. This principle may apply in Minkowski
space time electrodynamics, scalar field theory, etc, in which case
the frame in which $c=c(t)$ is probably the best choice. In a cosmological
setting, the natural choice is the cosmological frame, endowed with
the cosmic proper time.
{\bf Postulate 4.} {\it The dynamics of $c$ must be determined by
an action principle deriving from adding an extra term to the Lagrangian
which is a function of $c$ only.}
This is work in progress. We do not wish to specify this postulate further
because for all we know this extra term can be anything. We merely
specify that no fields (including the metric) may appear in this
extra term because we wish minimal coupling to propagate into
the Einstein's equations.
\section*{Appendix II: Scalar perturbation equations for VSL models}
In this Appendix we derive the scalar cosmological perturbation equations
in VSL scenarios.
We assume $K=\Lambda=0$, and use a gauge where the perturbed metric
is written as
\begin{equation}
ds^2=a^2[-(1+2AY)d\eta^2 -2BYk_idx^id\eta +\delta_{ij}dx^idx^j]
\end{equation}
for a Fourier component with wave vector $k^i$. Here $Y$ is
a scalar harmonic.
We shall use conformal time $\eta$ to study fluctuations,
and denote $'=d/d\eta$.
The stress energy tensor is also written as
\begin{eqnarray}
\delta T^0_0&=&-\rho Y\delta\nonumber\\
\delta T^i_0&=&-{\left(\rho+{p\over c^2}\right)}{v\over c}k^iY
\nonumber\\
\delta T^i_j&=&p\Pi_LY\delta^i_j+(k^ik_j-1/3\delta^i_j k^2)Yp\Pi_T
\end{eqnarray}
Einstein's constraint equations then read \cite{KS}
\begin{eqnarray}
{3\over c^2}{\left(a'\over a\right)}^2A
-{1\over c}{a'\over a}kB&=&-{4\pi Ga^2\over c^2}\rho\delta\\
{k\over c}{a'\over a}A -{\left( {\left(a'\over a\right)}'-
{\left(a'\over a\right)}^2\right)}{B\over c^2}&=&
{4\pi Ga^2\over c^2}{\left(\rho+{p\over c^2}\right)}{v\over c}
\end{eqnarray}
and the dynamical equations are
\begin{eqnarray}
A+{1\over kc}{\left(B'+2{a'\over a}B\right)}&=&
-{8\pi Ga^2\over c^2}{p\over c^2}{\Pi_T\over k^2}\\
{a'\over a}{A'\over c^2}+{\left( 2{a''\over a}-
{\left(a'\over a\right)}^2\right)}{A\over c^2}&=&
{4\pi Ga^2\over c^2}{p\over c^2}{\left(\Pi_L-2\Pi_T\right)}
\end{eqnarray}
We assume that these equations do not receive corrections
in $\dot c/c$. This statement is gauge-dependent, much like
its counterpart for the unperturbed Einstein's equations.
We can only hope that the physical result does not change
qualitatively from gauge to gauge. Complying
with tradition, we now define the comoving density contrast
\cite{KS}
\begin{equation}
\Delta=\delta+3(1+w){a'\over ca}{1\over k}{\left(
{v\over c}-B\right)}
\end{equation}
We also introduce the entropy production rate
\begin{equation}
\Gamma=\Pi_L-{c_s^2\over wc^2}\delta
\end{equation}
where the speed of sound $c_s$ is given by
\begin{equation}\label{cs}
c^2_s={p'\over \rho '}=wc^2{\left(1-{2\over 3}{1\over 1+w}
{c'\over c}{a\over a'}\right)}
\end{equation}
Note that the thermodynamical speed of sound is given by
$c^2_s=(\partial p/\partial \rho)|_S$. Since in SBB models
the evolution is isentropic, $c^2_s=(\partial p/\partial \rho)|_S
=\dot p/\dot\rho=p'/\rho '$. When $\dot c\neq 0$ the evolution need not be
isentropic. However, we keep the definition $c_s^2=p'/\rho '$,
since this is the definition used in perturbative calculations.
One must remember, however, that the speed of sound given in
(\ref{cs}) is not the usual thermodynamical quantity.
With this definition one has, for adiabatic perturbations,
$\delta p/\delta\rho=p'/\rho'$; that is, the ratio between
pressure and density fluctuations mimics the ratio of their
background rates of change.
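The form of (\ref{cs}) can be verified directly. Assuming a constant
equation of state $p=w\rho c^2$ and, since $K=0$, the standard background
conservation law $\rho'=-3{a'\over a}(1+w)\rho$ (taken here without
$\dot c/c$ corrections, consistent with the assumptions above), one finds
\begin{eqnarray}
p' &=& w\rho' c^2 + 2w\rho c c'\nonumber\\
c_s^2={p'\over \rho'} &=& wc^2{\left(1+{2\rho\over \rho'}{c'\over c}\right)}
= wc^2{\left(1-{2\over 3}{1\over 1+w}{c'\over c}{a\over a'}\right)}\nonumber
\end{eqnarray}
in agreement with (\ref{cs}).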
Combining all four of Einstein's equations we can then obtain
the (non-)conservation equations \cite{KS}
\begin{eqnarray}
\Delta'-{\left(3w{a'\over a}+{c'\over c}\right)}\Delta&=&
-(1+w)kv-2{a'\over a}w\Pi_T\label{delcdot2}\\
v'+{\left({a'\over a}-2{c'\over c}\right)}v&=&{\left(
{c_s^2 k\over 1+w} -{3\over 2k}
{a'\over a}{\left({a'\over a}+{c'\over c}\right)}
\right)}\Delta \nonumber \\
+{kc^2w\over 1+w}\Gamma-
&kc&{\left({2/3\over 1+w}+{3\over k^2c^2}
{\left(a'\over a\right)}^2\right)}w\Pi_T\label{vcdot2}
\end{eqnarray}
These can then be combined into a second order equation for
$\Delta$. If $\Gamma=\Pi_T=0$ this equation takes the form
\begin{equation}\label{deltacddot}
\Delta''+f\Delta'+(g+h+c_s^2k^2)\Delta=0
\end{equation}
with
\begin{eqnarray}
f&=&(1-3w){a'\over a}-3{c'\over c}\\
g&=&{\left(a'\over a\right)}^2{\left(\frac{9}{2}
w^2-3w-\frac{3}{2}\right)}\\
h&=&2{\left(c'\over c\right)}^2+{\left(\frac{9w}{2}-\frac{5}{2}
\right)}{c'\over c}{a'\over a}-{\left(c'\over c\right)}'
\end{eqnarray}
\section*{Appendix III: Vector perturbation equations for VSL models}
In a similar fashion we can study vector modes in a gauge
where the metric may be written as
\begin{equation}
ds^2=a^2[-d\eta^2 +2B Y_i dx^id\eta +\delta_{ij}dx^idx^j]
\end{equation}
where $Y_i$ is a vector harmonic. The stress-energy tensor
is written as
\begin{eqnarray}
\delta T^0_i&=&{\left(\rho+{p\over c^2}\right)}{\left({v\over
c}-B\right)}Y_i\\
\delta T_{ij}&=&p\Pi^TY_{(i,j)}
\end{eqnarray}
Einstein's equations then read \cite{KS}
\begin{eqnarray}
k^2B&=&{16\pi G\over c^2}a^2{\left(\rho+{p\over c^2}\right)}
{v\over c}\\
{k\over c}{\left( B'+2{a'\over a}B\right)}&=&-{8\pi G\over c^2}
{p\over c^2}a^2\Pi^T
\end{eqnarray}
We assume that these do not receive $\dot c/ c$ corrections.
The conservation equation is then:
\begin{equation}
v'+(1-3w){a'\over a}v-2{c'\over c}v=-{kc\over 2}{w\over 1+w}\Pi^T
\end{equation}
\section{Introduction}
In most technological applications of surface chemistry, e.g. in
catalysis, the surfaces used to promote a reaction are highly non-ideal.
They contain steps and other imperfections in large concentrations,
which are thought to provide reactive sites.
Also in thin-film growth, steps are crucial for producing smooth layers
via the so-called step-flow mode of growth.
Despite their importance, detailed information about the role of steps is
scarce.
On metals, it is generally argued that the reactivity at steps is increased
due to a lower coordination number of atoms \cite{Somorjai}.
On semiconductors, the situation is less clear since step and terrace
atoms often attain similar coordination due to special reconstructions.
The H$_2$/Si system provides a good model to study the role of steps,
since it is the most thoroughly studied adsorption system on a
semiconductor surface and it is of considerable technological relevance
\cite{Chabal}.
In addition, several recent studies concluded that
the interaction of molecular hydrogen is largely determined by
defect sites \cite{NaJo94,RaCa96b}, and in particular by steps
\cite{JiLu93,HaHa96}.
In this Letter, we demonstrate that the contributions from terraces and
steps to H$_2$ adsorption on vicinal Si(001) surfaces can be
discriminated using the second harmonic generation (SHG) probe
technique to monitor hydrogen
coverages during gas exposure.
The measured sticking coefficients differ by up to six orders of
magnitude and indicate the presence of an efficient adsorption
pathway at the steps, while adsorption on the terraces involves
a large barrier.
We performed density functional theory calculations to identify the relevant
reaction mechanisms and to compare their energetics.
Surprisingly, H$_2$ dissociation at the threefold-coordinated
step atoms is found to proceed via two neighboring sites and
directly leads to monohydride formation similar as at the dimerized
terrace atoms.
The huge differences in barrier heights arise from the interplay of
lattice deformation and electronic structure effects.
Thus, adsorption on a semiconductor surface may be highly site-specific,
even if the reactive surface atoms have similar coordination.
\begin{figure}[b!]
\leavevmode
\begin{center}
\epsfysize=2.5cm
\epsffile{fig1.ps}
\end{center}
\caption[]{
Relaxed geometry for the rebonded D$_{\rm B}$ step of a Si(117) surface.
The rebonded Si atoms are shown in white.
}
\label{geo1}
\end{figure}
The experiments were performed with Si samples that were
inclined from the [001] surface normal towards the [110] direction by
$2.5^\circ$, $5.5^\circ$ and $10^\circ \pm0.5^\circ$.
They were prepared by removing the oxide layer of the 10-$\Omega$cm
$n$-type wafers in ultra-high vacuum
at 1250 K followed by slow cooling to below 600 K.
Under these conditions, double-atomic-height steps
with additional threefold-coordinated Si atoms attached
to the step edges prevail on the surface, the so-called rebonded
D$_{\rm B}$ steps \cite{dbsteps}
(a stick-and-ball model is displayed in Fig.~\ref{geo1}).
Low energy electron diffraction (LEED) confirmed that the
surfaces predominantly consisted of a regular array of double-height
steps separated by terraces with a unique orientation of Si dimers.
The sticking coefficients for the dissociative adsorption of H$_2$ on
these surfaces were determined from the temporal evolution of the
hydrogen coverage determined during gas exposure by SHG as
described previously \cite{BrKo96}.
Accurate measurements of the H$_2$ desorption process \cite{Hofer92}
ensured that the recorded signal changes were not affected by small
amounts of contaminants in the dosing gas which was purified in liquid
nitrogen traps \cite{BrKo96}.
For the sensitive detection of step adsorption we exploited the fact that
the presence of regular steps is associated with a broken symmetry
in the surface plane.
For electric field components perpendicular to the step edges this enhances
the SHG contribution of the step with respect to the terrace sites \cite{RaHo98}.
A representative measurement taken at the 5.5$^\circ$ sample, kept at
a temperature of 560 K and exposed to H$_2$ at a pressure of
10$^{-3}$ mbar is displayed in the inset of Fig.~\ref{Arrh}.
Immediately after admitting H$_2$ gas to the chamber there is a rapid
drop of the surface nonlinear susceptibility $\chi^{(2)}_s$ responsible
for the SHG signal, followed by a more gradual decay.
The two slopes of $\chi^{(2)}_s$ correspond to sticking probabilities of
$1\times 10^{-4}$ and $1.4\times10^{-8}$, respectively.
The initial sticking coefficients $s_0$ measured for the different
samples at various temperatures are collected in the form of two
Arrhenius plots in the main part of Fig.~\ref{Arrh}.
They span a very wide range from $10^{-10}$ up to $10^{-4}$ \cite{compare}.
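To put these numbers in perspective, a simple kinetic sketch (our own
estimate: the Hertz--Knudsen impingement flux, a Si(001) site density of
$6.8\times 10^{18}\,{\rm m}^{-2}$, and first-order adsorption kinetics are
assumptions, not part of the measurement) shows how such sticking
coefficients translate into uptake rates at the experimental pressure of
$10^{-3}$~mbar:

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
M_H2 = 2 * 1.6735e-27     # mass of an H2 molecule, kg

def hertz_knudsen_flux(p_pa, t_gas):
    """Impingement rate of gas molecules per m^2 and second."""
    return p_pa / math.sqrt(2 * math.pi * M_H2 * K_B * t_gas)

flux = hertz_knudsen_flux(0.1, 300.0)   # 1e-3 mbar = 0.1 Pa, room-temperature gas
n_sites = 6.8e18                        # assumed Si(001) site density, m^-2

# the two measured sticking probabilities from the slopes of chi^(2)_s
rate_step = 1e-4 * flux / n_sites       # initial step uptake, monolayers per second
rate_terrace = 1.4e-8 * flux / n_sites  # initial terrace uptake, monolayers per second
```

With these values the step sites ($\theta_{\rm step}^{\rm sat}\approx 0.1$)
fill within seconds, while terrace saturation takes hours, consistent with
the rapid initial drop and the slow subsequent decay of the SHG signal.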
\begin{figure}[t]
\leavevmode
\begin{center}
\epsfxsize=8.0cm
\epsffile{fig2.ps}
\end{center}
\caption[]{\label{Arrh}
Initial sticking coefficients $s_0$ for a gas of H$_2$ at room
temperature on the steps (filled symbols) and terraces (open symbols) of
vicinal Si(001) surfaces at various surface temperatures $T_s$.
They were derived from the decay of the nonlinear susceptibility $\chi^{(2)}_s$
during H$_2$ exposure as shown in the inset.
Numerical fits to an Arrhenius law $s_0(T_s) = A\exp(-E_a/kT_s)$
yield activation energies $E_a$ and prefactors $A$ for step (terrace)
adsorption of $0.09\pm0.01\,\rm eV$ ($0.76\pm0.05\,\rm eV$) and
$4\pm2\times10^{-4}$ ($\sim 10^{-1}$).
}
\end{figure}
We attribute the fast hydrogen uptake -- which is not present on
the Si(001) surfaces -- to adsorption at special dissociation
sites of the stepped surface and identify the slow signal decay with
adsorption on terrace sites.
This interpretation is corroborated by the good agreement between the
absolute values of the smaller sticking coefficients with those
obtained previously for nominally flat Si(001) \cite{BrKo96} and
by the correlation of the saturation coverage
$\theta_{\rm step}^{\rm sat}$ associated with the fast decay with the
miscut angle (Table~\ref{satcover}).
$\theta_{\rm step}^{\rm sat}$ was determined by means of temperature
programmed desorption (TPD) after saturation was detected by SHG.
With the exception of the $10^\circ$ sample
-- where the number of D$_{\rm B}$ steps might be reduced as a
result of facetting \cite{Ranke} --
$\theta_{\rm step}^{\rm sat}$ is found to be in good agreement with the
fraction of Si dangling bonds located at the steps relative to the total
amount of dangling bonds in a unit cell of the vicinal surface $R_{\rm db}$
(Table~\ref{satcover}).
Thus it is tempting to associate the hydrogen species adsorbed at the
steps with monohydrides formed by attaching hydrogen to the rebonded Si
atoms.
For a quantitative comparison of the measured sticking coefficients
for step and terrace adsorption it is important to know whether they
are due to independent processes.
Annealing and readsorption experiments show that surface temperatures in
excess of 600 K are required to cause appreciable depletion of the
step sites by hydrogen migration on a timescale of several hundred
seconds \cite{RaHo98}.
For this reason it can be excluded that hydrogen adsorption on terraces
is mediated by the step sites under the conditions of our experiments.
The two sticking coefficients given in Fig.~\ref{Arrh} are thus a
quantitative measure of the reactivity of different surface sites.
The strongly activated behavior observed for terrace adsorption,
characterized by an Arrhenius energy of $E_a = 0.76\,\rm eV$, is similar
to that reported previously for the well oriented Si(001) and Si(111)
surfaces.
It indicates that distortions of the lattice structure have a pronounced
effect in promoting dissociative adsorption of H$_2$ \cite{BrBr96}.
With $E_a = 0.09\,\rm eV$ the effect of temperature on step
adsorption is comparatively moderate.
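The fitted parameters quoted in the caption of Fig.~\ref{Arrh} can be
checked against the measured values at $T_s = 560$~K (a numerical sketch
using the central fit values; the physical constant is ours):

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

def s0(prefactor, e_a, t_s):
    """Arrhenius law s0(T_s) = A * exp(-E_a / (k T_s)) as fitted in Fig. 2."""
    return prefactor * math.exp(-e_a / (K_B_EV * t_s))

s_step = s0(4e-4, 0.09, 560.0)     # steps:    A = 4e-4,  E_a = 0.09 eV
s_terrace = s0(1e-1, 0.76, 560.0)  # terraces: A ~ 1e-1,  E_a = 0.76 eV
```

The terrace value comes out at $\sim 1.4\times 10^{-8}$, matching the slow
slope of the 560~K trace, while the step value reproduces the fast slope
($1\times 10^{-4}$) to within the quoted uncertainty of the prefactor.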
\begin{table}
\begin{tabular}{ccc}
$\alpha $ & $R_{\rm db}$ & $\theta_{\rm step}^{\rm sat}$ \\
\hline
2.5$^{\circ}$ & 0.064 & 0.07 \\
5.5$^{\circ}$ & 0.146 & 0.12 \\
10$^{\circ}$ & 0.285 & 0.15 \\
\end{tabular}
\medskip
\caption[]{\label{satcover}
Ratio $R_{\rm db}$ of dangling bonds at rebonded D$_{\rm B}$ steps to total
number of dangling bonds on vicinal Si(001) surfaces with miscut angle
$\alpha$ towards [110] and measured saturation coverage of hydrogen at
the steps $\theta_{\rm step}^{\rm sat}$.
}
\end{table}
The experimental data suggest that the peculiar geometric and electronic
structure of the stepped surface gives rise to reaction channels
that are much more effective than those on ideal surfaces.
To gain an atomistic understanding of the underlying mechanisms we
determined the total energy of various atomic configurations from
electronic structure calculations.
We use density functional theory
with a plane-wave basis set \cite{BoKl97}. The exchange-correlation functional
is treated by the generalized-gradient approximation (GGA) \cite{PeCh92}.
For silicon we generate an {\em ab initio}, norm-conserving pseudopotential
\cite{Hama89},
while the bare Coulomb potential is used for hydrogen.
We perform a transition
state search for H$_2$ dissociation in the configuration space of the H atoms and the Si atoms of the
four topmost layers using
the ridge method \cite{IoCa93}. All calculations, including geometry
optimizations, are performed with
plane waves up to a cut-off energy of 30 Ry in the basis set. For the
barrier heights reported below, the calculations are repeated with the same geometries, but with a cut-off of 50 Ry.
We model the D$_{\rm B}$ step by a vicinal surface with Miller indices $(1 \, 1 \, 7)$, using a
monoclinic supercell.
In this geometry,
periodically repeated rebonded D$_{\rm B}$ steps are separated by terraces
two Si dimers wide. Two special {\bf k}-points in the irreducible part of the
Brillouin zone are used for {\bf k}-space integration.
The uncertainty in chemisorption energies
due to the finite cell size is determined to be less than 30 meV.
The optimized geometry for the rebonded D$_{\rm B}$ step is shown in Fig.~\ref{geo1}.
The rebonded Si atoms form unusually long bonds to
the atoms at the step edge (mean bond strain 6\%).
Furthermore, the heights of the rebonded Si atoms at the step edge
differ by 0.67~{\AA} due to a
Jahn-Teller-like distortion similar in physical origin to the
buckling of the Si dimers on the Si(001) surface.
Consequently, the surface is semiconducting with a Kohn-Sham gap
of $\sim 0.5$~eV.
We investigated the following mechanisms of
H$_2$ dissociation close to a D$_{\rm B}$ step:
i) Si dimers at the end of a terrace could have different reactivity
compared to the Si dimers on flat regions of the surface.
ii) The H$_2$ breaks the stretched bond of a rebonded Si atom forming a
dihydride species.
iii) The H$_2$ molecule approaches with orientation parallel to the step
and dissociates, each of the H atoms attaching to one of the Si
rebonded atoms.
To determine the importance of mechanism i), we locate the
transition states for H$_2$ dissociation on the terrace Si dimers
directly above and below the step (T$_1$ and T$_2$ in Fig.~\ref{geo1}).
The geometries of these transition states are asymmetric, with the H$_2$
molecule dissociating above the lower atom of the Si dimer, similar to the
transition state found for the ideal Si(001) surface \cite{KrHa95}.
For the barrier heights we obtain 0.40 eV and 0.54 eV for the sites T$_1$ and T$_2$, respectively.
These results are close to the adsorption barrier of 0.4 eV determined
previously for the flat Si(001) surface using the same
exchange-correlation functional \cite{GrBo97}.
Hence, the Si dimers near steps are only slightly different in reactivity
from Si dimers on an ideal Si(001) surface.
Thus mechanism i) cannot explain the enhanced reactivity.
Mechanism ii), the formation of a dihydride at the step from a gas phase H$_2$
molecule and a rebonded Si atom, is considerably less exothermic
(0.9 eV) than monohydride formation ($\sim$2 eV), because the Si--Si
bond between the rebonded Si atom and the step must be broken.
Nevertheless, dihydrides would exist as metastable species at the
surface temperatures of the experiment provided the corresponding
adsorption barrier were sufficiently low.
However, the calculations yield a barrier of 0.50 eV, even slightly
higher than the one for monohydride formation on the terraces, and
we rule out mechanism ii) as well.
\begin{figure}
\leavevmode
\begin{center}
\epsfxsize=7.2cm
\epsffile{fig3.ps}
\end{center}
\medskip
\caption[]{\label{balls}
Monohydride at the step (upper part) and motion of the highlighted step atoms during H$_2$ dissociation (lower part).
The reaction path is projected
onto a (110) plane parallel to the D$_{\rm B}$ step with
different stages of
dissociation marked by different colors.
The insets show the motion of the rebonded Si atoms (coordinates in {\AA}).
}
\end{figure}
\begin{table}[b]
\begin{tabular}{lrr}
site & $E_{\rm ads}$[eV] & $E_{\rm chem}$[eV] \\
\hline
S (monohydride) & no barrier & $-$2.07 \\
S (dihydride) & 0.50 & $-$0.87 \\
T$_1$ & 0.40 & $-$1.75 \\
T$_2$ & 0.54 & $-$1.93 \\
\end{tabular}
\medskip
\caption[]{\label{Tab2} Adsorption barriers $E_{\rm ads}$ and chemisorption
energies $E_{\rm chem}$ for H$_2$
molecules reacting with the vicinal surface at the sites S, T$_1$ and T$_2$,
as indicated in Fig. \protect\ref{geo1}.
}
\end{table}
For mechanism iii), monohydride formation from a molecule
approaching parallel to the step,
we do not find any barrier.
Using damped {\em ab initio} molecular dynamics for a slowly approaching
molecule,
we can identify the reaction path
connecting the gas phase continuously with the monohydride at the step.
Hence we attribute the highly increased sticking coefficient of H$_2$
observed on the vicinal surfaces to direct monohydride formation.
This conclusion is confirmed by the observation that this mechanism is
compatible with the observed saturation coverages of
Tab.~\ref{satcover}, as opposed to a complete decoration of the step
with dihydride species, which would result in a saturation coverage a
factor of two higher than observed.
Tab.~\ref{Tab2} summarizes the energetics of the three reaction
mechanisms considered.
\begin{figure}
\leavevmode
\begin{center}
\epsfysize=5.0cm
\epsffile{fig4.ps}
\end{center}
\caption[]{\label{path}
Total energy of the H$_2$/surface system
along the adsorption path shown in Fig.~\protect{\ref{balls}} (full line).
The dashed line comes from a separate calculation for the bare Si surface
using the Si coordinates along the reaction path.
The thin dotted line denotes the total energy along a similar reaction
path, with the two H-adsorption sites translated by one
surface lattice constant along the step edge.
}
\end{figure}
Fig.~\ref{balls} illustrates the
concerted motion of the H atoms and the two rebonded Si atoms along the
reaction path.
The Jahn-Teller like splitting
showing up in the different height of the two rebonded Si atoms is undone
during the approach of the H$_2$ molecule.
Additionally, the two rebonded Si atoms move closer together by about
\hbox{0.4{\AA}} during adsorption to assist in breaking the H--H bond.
Both for this optimum pathway and for the similar,
but slightly less favorable, pathway with the adsorption site
shifted by one
surface lattice constant parallel to the step, the total energy
decreases monotonically
when the H$_2$ molecule approaches the surface (Fig.~\ref{path}, full and
dotted lines).
This is to be compared to the adsorption energy barrier of 0.3 eV
we have calculated for a rigid Si substrate.
Obviously, a particular distortion of the step Si atoms is crucial
for adsorption, which, on the bare surface, would be associated
with a sizeable elastic energy (dashed curve in Fig.~\ref{path}).
Together with the small density of step sites (cf. $R_{\rm db}$ in
Tab.~\ref{satcover}) and
the necessary alignment of the H$_2$ axis
parallel to the step,
this explains the small prefactor $A_{\rm step}$
in comparison to $A_{\rm terrace}$.
The reaction path shown in Fig.~\ref{balls} implies some transfer of energy and
momentum from the H$_2$ to the lattice
during the adsorption process, which is not easily achieved due to the large
mass mismatch.
However, thermal fluctuations will sometimes lead to lattice configurations
favorable for adsorption.
Therefore, the surface temperature dependence of sticking,
as suggested by the experiments, appears to be compatible with
the presence of a non-activated adiabatic pathway.
We find that atomic relaxation
close to a step, while inducing only moderate changes in the
chemisorption energy \cite{PeKr98}, has a
pronounced influence on the energetics of adsorption.
Since the early stages of H$_2$ dissociation are quite sensitive to
electronic states around the Fermi energy, we propose that
the increase in reactivity is due to shifts of electronic states in
the gap induced by lattice distortions.
In the surface ground state, the two surface bands
formed from the dangling orbitals located at the
rebonded Si atoms bracket the Fermi energy and are split by $\sim1$ eV due to
the Jahn-Teller mechanism.
However, when the two rebonded
Si atoms are forced to the same geometric height, the energy separation of the
centers of the surface bands is reduced to 0.4 eV.
Upon the approach of molecular hydrogen, these
surface states can rehybridize and thus interact efficiently with the H$_2$
molecular orbitals.
The electronic structure at the step is different from the surface band
structure of the ideal Si(001) dimer reconstruction:
At the Si dimers characteristic of the ideal
surface, the $\pi$-interaction of the
dangling bonds prevents the two band centers from coming closer than
0.7~eV \cite{DaSc92},
while the dangling bonds of equivalent step edge Si atoms are almost
degenerate.
Therefore the terrace sites are less capable of dissociating a H$_2$ molecule than the step sites.
Valuable commentaries and support by
W. Brenig, K.~L. Kompa and P. Vogl are gratefully acknowledged.
This work was supported in part by the SFB 338 of DFG.
\section*{Introduction}
In \cite{pedersen} G.K. Pedersen and M. Takesaki gave a construction
of a normal semifinite faithful (n.s.f.) weight $\varphi(\, \cdot \,
\delta)$ on a von Neumann algebra $\mathcal{M}$, starting from a
n.s.f. weight $\varphi$ on $\mathcal{M}$ and a strictly positive
operator $\delta$ affiliated with the von Neumann algebra of elements
invariant under the modular automorphisms of $\varphi$. These weights
$\varphi(\, \cdot \,
\delta)$ are precisely all the n.s.f. weights $\psi$ on $\mathcal{M}$
which are invariant under the modular automorphisms of $\varphi$. In this
paper we will give a construction for a n.s.f. weight
$\varphi(\delta^{\mbox{\tiny $\frac{1}{2}$}} \cdot \delta^{\mbox{\tiny $\frac{1}{2}$}})$ in the case where $\delta$
satisfies the weaker hypothesis $\sigma_s^\varphi(\delta^{it}) =
\lambda^{ist} \delta^{it}$ for all $s,t \in \mathbb{R}$ and for a
given strictly positive operator $\lambda$, affiliated with
$\mathcal{M}$ and strongly commuting with $\delta$. This way we obtain
precisely all n.s.f. weights $\psi$ on $\mathcal{M}$ for which $[D\psi
: D \varphi]_t = \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it}$.
The operators $\lambda$ and $\delta$ are uniquely determined by
$\psi$.
When $\psi$ is a n.s.f. weight on $\mathcal{M}$ we prove that
$\sigma^\psi$ and $\sigma^\varphi$ commute if and only if there exist
strictly positive operators $\lambda$ and $\delta$ affiliated with
the centre of $\mathcal{M}$ and $\mathcal{M}$ respectively, such that
$\sigma_s^\varphi(\delta^{it}) = \lambda^{ist} \delta^{it}$ for all $s,t
\in \mathbb{R}$ and such that $\psi = \varphi(\delta^{\mbox{\tiny $\frac{1}{2}$}} \cdot
\delta^{\mbox{\tiny $\frac{1}{2}$}})$. When $\psi$ is a n.s.f. weight on $\mathcal{M}$ and
$\lambda \in \mathbb{R}_0^+$, we prove that $\psi \circ \sigma_t^\varphi =
\lambda^{-t} \psi$ for all $t \in \mathbb{R}$ if and only if $\varphi \circ
\sigma_t^\psi = \lambda^t \varphi$ for all $t \in \mathbb{R}$ if and only if
there exists a strictly positive operator $\delta$ affiliated with $\mathcal{M}$
such that $\sigma_s^\varphi(\delta^{it}) = \lambda^{ist} \delta^{it}$
for all $s,t \in \mathbb{R}$ and such that $\psi =\varphi(\delta^{\mbox{\tiny $\frac{1}{2}$}} \cdot
\delta^{\mbox{\tiny $\frac{1}{2}$}})$.
One important application of the Radon-Nikodym theorem of Pedersen and
Takesaki arose in the theory of locally compact quantum groups. In
\cite{masuda} the theorem is used to obtain the modular element as the
Radon-Nikodym derivative of the left and the right Haar weight. Very
recently the new and very simple definition of locally compact
quantum groups we found in cooperation with J.~Kustermans (see
\cite{kustvaes}) implies that in general the right Haar weight is
only relatively invariant under the modular automorphisms of the left
Haar weight. In order
still to be able to obtain the modular element we need the more
general Radon-Nikodym theorem of this paper. This is a very important
application. Further, the possibility to obtain a Radon-Nikodym
derivative from the sole assumption that $\sigma^\varphi$ and
$\sigma^\psi$ commute is new and could give rise to several
applications in von Neumann algebra theory. It is also very important
to notice that the most powerful tool to prove the equality of two
n.s.f. weights, namely showing that the Radon-Nikodym derivative is
trivial, can now be applied in much more situations.
Let us first fix some notations. In paragraphs 1 to 4 we will always
assume that $\mathcal{M}$ is a von Neumann algebra, that $\varphi$ is
a n.s.f. weight on $\mathcal{M}$ and that $\lambda$ and $\delta$ are two
strictly positive, strongly commuting operators affiliated with
$\mathcal{M}$. We suppose $\mathcal{M}$ acts on the GNS-space
$\mathcal{H}$ of $\varphi$. We denote by $J$ and $\Delta$ the modular
operators of $\varphi$ and by $(\sigma_t)$ the modular automorphisms. As
usual we put $\mathfrak{N} = \{a \in \mathcal{M} \mid \varphi(a^*a) < \infty \}$ and
$\mathfrak{M}=\mathfrak{N}^* \mathfrak{N}$. We denote by $\Lambda : \mathfrak{N} \rightarrow \mathcal{H}$
the map appearing in the GNS-construction of $\varphi$ such that
$\langle \Lambda(a), \Lambda(b) \rangle = \varphi(b^*a)$. We remark that
the map $\Lambda$ is weak operator -- weak closed and refer to
\cite{stratila}, Chapter 10 and \cite{stratila2}, Chapter I,
for more details about n.s.f. weights.
We assume the following relative invariance~:
\begin{equation*}
\sigma_t(\delta^{is}) = \lambda^{ist}\delta^{is} \quad\text{for
all}\quad s,t \in \mathbb{R}.
\end{equation*}
Remark that in case $\lambda = 1$ we arrive at the premises for the
construction of Pedersen and Takesaki. Because we will regularly use
analytic continuations, we introduce the notation $S(z)$ for the
closed strip of complex numbers with real part between $0$ and
$\text{Re}(z)$.
Starting from all these assumptions we will construct a n.s.f. weight
$\varphi_\delta$ on $\mathcal{M}$ in the first paragraph. Then we will compute the
modular operators and automorphisms of $\varphi_\delta$ and prove an explicit
formula that justifies the notation $\varphi_\delta=\varphi(\delta^{\mbox{\tiny $\frac{1}{2}$}} \, \cdot
\, \delta^{\mbox{\tiny $\frac{1}{2}$}})$. In the fourth paragraph we compute the Connes
cocycle $[D \varphi_\delta : D \varphi ]$, which will enable us to prove in the last paragraph
the three Radon-Nikodym type theorems mentioned above.
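As a first consistency check, note that the candidate cocycle
$u_t = \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it}$ appearing above
formally satisfies the chain rule $u_{t+s} = u_t \, \sigma_t(u_s)$ required
of a Connes cocycle. Indeed, using the relative invariance of $\delta$, the
invariance $\sigma_t(\lambda^{is}) = \lambda^{is}$ (see the remark following
the definition of the elements $e_n$ below) and the strong commutation of
$\lambda$ and $\delta$, we get
\begin{equation*}
u_t \, \sigma_t(u_s)
= \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it} \,
\lambda^{\mbox{\tiny $\frac{1}{2}$} is^2} \lambda^{ist} \delta^{is}
= \lambda^{\mbox{\tiny $\frac{1}{2}$} i(t+s)^2} \delta^{i(t+s)} = u_{t+s}.
\end{equation*}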
\section{The construction of the weight $\varphi_\delta$}
\begin{defin}
For each $n \in \mathbb{N}_0$ we define an element $e_n \in \mathcal{M}$ by
\begin{align*}
\alpha_n &= \frac{2n^2}{\Gamma(\frac{1}{2})\Gamma(\frac{1}{4})},
\\
e_n &= \alpha_n \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}
\exp(-n^2 x^2-n^4 y^4) \lambda^{ix} \delta^{iy} \; dx \; dy \in \mathcal{M}.
\end{align*}
\end{defin}
The integral makes sense in the strong* topology. We remark that
$\lambda$ automatically satisfies $\sigma_t(\lambda^{is})=\lambda^{is}$ for all
$s,t \in \mathbb{R}$, which can be proven very easily. We also easily
obtain the following lemma, using the `analytic extension techniques'
of \cite{stratila}, Chapter 9.
\renewcommand{\theenumi}{\roman{enumi}}
\renewcommand{\labelenumi}{\theenumi)}
\begin{lemma}
\begin{enumerate}
\item The elements $e_n \in \mathcal{M}$ are analytic w.r.t. $\sigma$.
For all $x,y,z \in \mathbb{C}$ the operator $\delta^x \lambda^y
\sigma_z(e_n)$ is bounded, with domain $\mathcal{H}$, analytic w.r.t.
$\sigma$
and satisfies $\sigma_t(\delta^x \lambda^y \label{en1}
\sigma_z(e_n)) = \delta^x \lambda^{y+tx}
\sigma_{t+z}(e_n)$ for all $t \in \mathbb{C}$.
\item For all $z \in \mathbb{C}$ we have $\sigma_z(e_n) \rightarrow 1$
strong* and bounded. \label{en2}
\item The function $(x,y,z) \mapsto \delta^x \lambda^y
\sigma_z(e_n)$ is analytic from $\mathbb{C}^3$ to $\mathcal{M}$. \label{en3}
\item The elements $e_n$ are selfadjoint.
\end{enumerate}
\end{lemma}
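The cutting behaviour of the $e_n$ can be illustrated numerically in the
commutative toy case where $\lambda$ and $\delta$ are positive scalars, so
that $\lambda^{ix} = e^{ix\log\lambda}$, the double integral factorizes,
and the imaginary (sine) parts vanish by symmetry. The sketch below (our
own illustration, not part of any proof) confirms that $\alpha_n$
normalizes the kernel and that $e_n \rightarrow 1$, as in part ii) of the
lemma.

```python
import math

def e_n(n, lam, delta, half_width=3.0, step=1e-3):
    """Scalar toy model of the elements e_n: lambda^{ix} = exp(i x log(lam)).
    The sine parts integrate to zero by symmetry, leaving the cosines."""
    grid = [-half_width + step * (k + 0.5) for k in range(int(2 * half_width / step))]
    ix = step * sum(math.exp(-(n * x) ** 2) * math.cos(x * math.log(lam)) for x in grid)
    iy = step * sum(math.exp(-(n * y) ** 4) * math.cos(y * math.log(delta)) for y in grid)
    alpha = 2 * n ** 2 / (math.gamma(0.5) * math.gamma(0.25))
    return alpha * ix * iy
```

For $\lambda=\delta=1$ this returns $1$ up to discretization error (the
normalization of $\alpha_n$), and for, say, $\lambda=2$, $\delta=3$ the
values approach $1$ as $n$ grows.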
Inspired by the work of \cite{kustermans1} we give the following
definition :
\begin{defin}
Define a subset $\mathfrak{N}_0$ of $\mathcal{M}$ by
\begin{align*}
&\mathfrak{N}_0 = \{a \in \mathcal{M} \mid a \delta^{\mbox{\tiny $\frac{1}{2}$}} \quad\text{is bounded and}\quad
\overline{a\delta^{\mbox{\tiny $\frac{1}{2}$}}} \in \mathfrak{N}\} \\
\intertext{and a map}
&\Gamma : \mathfrak{N}_0 \rightarrow \mathcal{H} : a \mapsto
\Lambda(\overline{a\delta^{\mbox{\tiny $\frac{1}{2}$}}}),
\end{align*}
where $\overline{a \delta^{\mbox{\tiny $\frac{1}{2}$}}}$ denotes the closure of $a
\delta^{\mbox{\tiny $\frac{1}{2}$}}$.
\end{defin}
Remark that $\Gamma$ is injective and $\mathfrak{N}_0$ is a left ideal in
$\mathcal{M}$. So $\Gamma(\mathfrak{N}_0 \cap \mathfrak{N}_0^*)$ becomes an involutive algebra by
defining
\begin{align*}
\Gamma(a)\Gamma(b) &= \Gamma(ab) \\
\Gamma(a)^\# &= \Gamma(a^*).
\end{align*}
\begin{prop}
When we endow the involutive algebra $\Gamma(\mathfrak{N}_0 \cap \mathfrak{N}_0^*)$
with the scalar product of $\mathcal{H}$, it becomes
a left Hilbert algebra. The generated von Neumann algebra is $\mathcal{M}$.
\end{prop}
\begin{proof}
If $a,b \in \mathfrak{N}_0 \cap \mathfrak{N}_0^*$ we have
\begin{equation*}
\Gamma(ab) = \Lambda(a \overline{b \delta^{\mbox{\tiny $\frac{1}{2}$}}}) = a \Gamma(b)
\end{equation*}
so that $\Gamma(b) \mapsto \Gamma(ab)$ is bounded. For $a,b,c \in \mathfrak{N}_0
\cap \mathfrak{N}_0^*$ we have
\begin{equation*}
\langle \Gamma(a) \Gamma(b),\Gamma(c) \rangle = \varphi((\overline{c \delta^{\mbox{\tiny
$\frac{1}{2}$}}})^*
a (\overline{b \delta^{\mbox{\tiny $\frac{1}{2}$}}})) = \langle \Gamma(b),
\Lambda(a^* (\overline{c \delta^{\mbox{\tiny $\frac{1}{2}$}}})) \rangle = \langle \Gamma(b),
\Gamma(a)^\# \Gamma(c) \rangle.
\end{equation*}
If $a \in \mathfrak{N} \cap \mathfrak{N}^*$ one can easily verify that $e_n a
(\delta^{-\mbox{\tiny $\frac{1}{2}$}}e_n) \in \mathfrak{N}_0 \cap \mathfrak{N}_0^*$. Moreover
\begin{equation*}
\Gamma(e_n a (\delta^{-\mbox{\tiny $\frac{1}{2}$}}e_n)) = \Lambda(e_nae_n) =
J(\sigma_{\mbox{\tiny $\frac{i}{2}$}}(e_n))^*Je_n \Lambda(a) \rightarrow
\Lambda(a).
\end{equation*}
So $\Gamma(\mathfrak{N}_0 \cap \mathfrak{N}_0^*)$ is dense in $\mathcal{H}$. But also $e_n a e_n
\in \mathfrak{N}_0 \cap \mathfrak{N}_0^*$, and this converges strongly to $a$.
Therefore $\mathfrak{N}_0 \cap \mathfrak{N}_0^*$ is strongly dense in $\mathcal{M}$ and thus
$(\Gamma(\mathfrak{N}_0 \cap \mathfrak{N}_0^*))^2$ is dense in $\mathcal{H}$.
We claim that for all $n \in \mathbb{N}_0$ and all $b$ and $b'$ in the
Tomita algebra of $\varphi$, the element $\Lambda(e_n b b' e_n)$ belongs
to the domain of the adjoint of the mapping $\Gamma(a) \mapsto
\Gamma(a)^\#$. Because such elements are dense in $\mathcal{H}$, this will imply that the mapping is closable, and so
this will end the proof. To prove the claim, choose $a \in \mathfrak{N}_0 \cap \mathfrak{N}_0^*$. Define
the element $x \in \mathfrak{N}$ by
\begin{equation*}
x:= (\delta^{\mbox{\tiny $\frac{1}{2}$}} \sigma_{-i}(e_n))
\sigma_{-i}({b'}^*b^*(\delta^{-\mbox{\tiny $\frac{1}{2}$}}e_n)).
\end{equation*}
Then we can make the following calculation~:
\begin{align*}
\langle \Lambda(x),\Gamma(a) \rangle &=\varphi((\overline{a \delta^{\mbox{\tiny $\frac{1}{2}$}}})^*
(\delta^{\mbox{\tiny $\frac{1}{2}$}} \sigma_{-i}(e_n))
\sigma_{-i}({b'}^*) \sigma_{-i}(b^*(\delta^{-\mbox{\tiny $\frac{1}{2}$}}e_n))) \\
&=\varphi(b^*(\delta^{-\mbox{\tiny $\frac{1}{2}$}}e_n) (\overline{a \delta^{\mbox{\tiny $\frac{1}{2}$}}})^*
(\delta^{\mbox{\tiny $\frac{1}{2}$}} \sigma_{-i}(e_n)) \sigma_{-i}({b'}^*)) \\
&=\varphi(b^*e_na^*(\delta^{\mbox{\tiny $\frac{1}{2}$}}\sigma_{-i}(e_n))\sigma_{-i}({b'}^*)) \\
&= \varphi(b^* e_n (\overline{a^* \delta^{\mbox{\tiny $\frac{1}{2}$}}}) \sigma_{-i}(e_n{b'}^*))
\\ &=\varphi(e_n {b'}^*b^*e_n (\overline{a^* \delta^{\mbox{\tiny $\frac{1}{2}$}}})) \\
&=\langle \Gamma(a)^\#,\Lambda(e_nbb'e_n) \rangle.
\end{align*}
This proves our claim.
\end{proof}
\begin{defin} \label{def15}
We define $\varphi_\delta$ as the weight associated to the left Hilbert
algebra\linebreak
$\Gamma(\mathfrak{N}_0 \cap \mathfrak{N}_0^*)$. This is a n.s.f. weight on
$\mathcal{M}$.
\end{defin}
We denote by $\mathfrak{N}', \mathfrak{M}'$ and $\Lambda' : \mathfrak{N}' \rightarrow \mathcal{H}$
the evident objects associated to $\varphi_\delta$. We denote by
$(\sigma'_t)$ the modular automorphisms of $\varphi_\delta$. We remark that
$\mathfrak{N}_0 \subset \mathfrak{N}'$ and $\Lambda'(a) = \Gamma(a)$ for all $a \in
\mathfrak{N}_0$.
Up to now the operator $\lambda$ has not appeared in our formulas: we
only needed the relative invariance property of $\delta$ to construct
the analytic elements $e_n$, which cut down $\delta$ properly.
Further on $\lambda$ will of course appear when we prove properties
of $\varphi_\delta$.
\section{The modular operators of $\varphi_\delta$}
We will now calculate the modular operators and the modular
automorphisms of $\varphi_\delta$. We will give explicit formulas.
\begin{lemma}
For all $s \in \mathbb{R}$ define
\begin{equation*}
u_s = J \lambda^{\mbox{\tiny $\frac{1}{2}$} is^2} \delta^{is} J
\lambda^{\mbox{\tiny $\frac{1}{2}$} is^2} \delta^{is} \Delta^{is}.
\end{equation*}
Then $(u_s)$ is a strongly continuous one-parameter group of unitaries on
$\mathcal{H}$.
\end{lemma}
\begin{proof}
Straightforward, by using the facts that $J \mathcal{M} J = \mathcal{M}'$,
$J\Delta^{is} = \Delta^{is}J$, $\Delta^{is} \delta^{it} =
\lambda^{ist}\delta^{it} \Delta^{is}$ and $\Delta^{is} \lambda^{it} =
\lambda^{it} \Delta^{is}$ for all $s,t \in \mathbb{R}$.
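For instance, since $J \mathcal{M} J = \mathcal{M}'$ and $\lambda$ and $\delta$
strongly commute, moving $\Delta^{is}$ to the right gives
\begin{equation*}
u_s u_t = J \lambda^{\mbox{\tiny $\frac{1}{2}$} is^2} \delta^{is}
\lambda^{\mbox{\tiny $\frac{1}{2}$} it^2 + ist} \delta^{it} J \;
\lambda^{\mbox{\tiny $\frac{1}{2}$} is^2} \delta^{is}
\lambda^{\mbox{\tiny $\frac{1}{2}$} it^2 + ist} \delta^{it} \; \Delta^{i(s+t)} = u_{s+t},
\end{equation*}
because $\mbox{\small $\frac{1}{2}$} s^2 + \mbox{\small $\frac{1}{2}$} t^2 + st
= \mbox{\small $\frac{1}{2}$} (s+t)^2$.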
\end{proof}
\begin{defin}
We define $\Delta'$ as the strictly positive operator on $\mathcal{H}$ such
that $u_s = {\Delta'}^{is}$ for all $s \in \mathbb{R}$.
\end{defin}
Further on, in proposition~\ref{prop24}, we will give a more explicit formula for $\Delta'$. We
first need a lemma that we will use several times.
\begin{lemma} \label{lemma8}
Let $z \in \mathbb{C}$ and $n,m \in \mathbb{N}_0$. \\ If $\xi \in
\mathcal{D}({\Delta'}^z)$ then $Je_nJe_m \xi \in \mathcal{D}({\Delta'}^z)
\cap \mathcal{D}(\Delta^z)$ and
\begin{align*}
{\Delta'}^z Je_nJe_m \xi &= J \sigma_{i \bar{z}}(e_n)J \sigma_{-i
z}(e_m) \; {\Delta'}^z \xi \\
\Delta^z Je_nJe_m \xi &= J \lambda^{\mbox{\tiny $\frac{1}{2}$} i \bar{z}^2}
\delta^{\bar{z}} \sigma_{i \bar{z}}(e_n)J \; \lambda^{\mbox{\tiny $\frac{1}{2}$} iz^2}
\delta^{-z} \sigma_{-iz}(e_m) \; {\Delta'}^z \xi.
\end{align*}
If $\xi \in \mathcal{D}(\Delta^z)$ then $Je_nJe_m \xi \in \mathcal{D}({\Delta'}^z)
\cap \mathcal{D}(\Delta^z)$ and
\begin{align*}
{\Delta'}^z Je_nJe_m \xi &= J \lambda^{-\mbox{\tiny $\frac{1}{2}$} i \bar{z}^2}
\delta^{-\bar{z}} \sigma_{i \bar{z}}(e_n)J \; \lambda^{-\mbox{\tiny $\frac{1}{2}$} iz^2}
\delta^{z} \sigma_{-iz}(e_m) \; \Delta^z \xi \\
\Delta^z Je_nJe_m \xi &=J \sigma_{i \bar{z}}(e_n)J \sigma_{-i
z}(e_m) \; \Delta^z \xi.
\end{align*}
\end{lemma}
\begin{proof}
Let $\xi \in \mathcal{D}({\Delta'}^z)$. Recall the notation $S(z)$
from the end of the introduction. We define the function from $S(z)$
to $\mathcal{H}$ that maps $\alpha$ to
\begin{equation*}
J \lambda^{\mbox{\tiny $\frac{1}{2}$} i
\bar{\alpha}^2}
\delta^{\bar{\alpha}} \sigma_{i \bar{\alpha}}(e_n)J \; \lambda^{\mbox{\tiny $\frac{1}{2}$} i\alpha^2}
\delta^{-\alpha} \sigma_{-i\alpha}(e_m) \; {\Delta'}^\alpha \xi.
\end{equation*}
This function is continuous on $S(z)$ and analytic on its interior. At $is$ it attains the value
\begin{equation*}
J \lambda^{-\mbox{\tiny $\frac{1}{2}$} is^2} \delta^{-is} \sigma_s(e_n)J
\; \: \lambda^{-\mbox{\tiny $\frac{1}{2}$} is^2} \delta^{-is} \sigma_s(e_m) \; {\Delta'}^{is}
\xi \quad
= \quad J\sigma_s(e_n)J \; \sigma_s(e_m) \; \Delta^{is} \xi \quad = \quad \Delta^{is} \; Je_nJe_m
\xi.
\end{equation*}
By the results of \cite{stratila}, Chapter 9, the second statement follows. The
three remaining statements are proved analogously.
\end{proof}
\begin{prop} \label{prop24}
Let $r \in \mathbb{R}$. The operator
\begin{equation*}
J \lambda^{-\mbox{\tiny $\frac{1}{2}$} ir^2}J \; \lambda^{-\mbox{\tiny $\frac{1}{2}$} ir^2} \; J
\delta^{-r}J \; \delta^r \; \Delta^r
\end{equation*}
is closable and its closure equals ${\Delta'}^r$.
\end{prop}
\begin{proof}
Let $\xi \in \mathcal{D}(J
\delta^{-r}J \; \delta^r \; \Delta^r)$. Let $n,m \in \mathbb{N}_0$. By
lemma~\ref{lemma8} we have $Je_nJe_m \xi \in \mathcal{D}({\Delta'}^r)$
and
\begin{equation*}
{\Delta'}^rJe_nJe_m \xi = J \lambda^{-\mbox{\tiny $\frac{1}{2}$} ir^2} \sigma_{ir}(e_n)J
\lambda^{-\mbox{\tiny $\frac{1}{2}$} ir^2} \sigma_{-ir}(e_m)
\; J
\delta^{-r}J \delta^r \Delta^r \xi.
\end{equation*}
The operator ${\Delta'}^r$ being closed, we obtain that $\xi \in
\mathcal{D}({\Delta'}^r)$ and
\begin{equation*}
{\Delta'}^r \xi = J \lambda^{-\mbox{\tiny $\frac{1}{2}$} ir^2}J
\lambda^{-\mbox{\tiny $\frac{1}{2}$} ir^2} \; J
\delta^{-r}J \delta^r \Delta^r \xi.
\end{equation*}
On the other hand let $\xi \in \mathcal{D}({\Delta'}^r)$. Let $n,m \in
\mathbb{N}_0$. By lemma~\ref{lemma8} we have that $Je_nJe_m \xi \in
\mathcal{D}(J
\delta^{-r}J \delta^r \Delta^r)$ and ${\Delta'}^rJe_nJe_m \xi \rightarrow
{\Delta'}^r \xi$. This implies that $\mathcal{D}(J
\delta^{-r}J \delta^r \Delta^r)$ is a core for ${\Delta'}^r$, and
this ends our proof.
\end{proof}
Denote by $S'$ the closure of the operator $\Gamma(a) \mapsto
\Gamma(a)^\#$ on $\Gamma(\mathfrak{N}_0 \cap \mathfrak{N}_0^*)$. Define $J' = J
\lambda^{-i/8} J \lambda^{i/8}J$.
\begin{prop}
\begin{equation*}
S'=J' {\Delta'}^{\mbox{\tiny $\frac{1}{2}$}}.
\end{equation*}
So, $J'$ and $\Delta'$ are the modular operators associated with
$\varphi_\delta$.
\end{prop}
\begin{proof}
Let $a \in \mathfrak{N}_0 \cap \mathfrak{N}_0^*$ and $n,m,k,l \in \mathbb{N}_0$. Then
$\Lambda(e_k a (\delta^{\mbox{\tiny $\frac{1}{2}$}} e_l)) \in \mathcal{D}(\Delta^{\mbox{\tiny $\frac{1}{2}$}})$, so
by lemma~\ref{lemma8} we have
\begin{equation*}
Je_nJe_m \Lambda(e_k a (\delta^{\mbox{\tiny $\frac{1}{2}$}}e_l)) \in
\mathcal{D}({\Delta'}^{\mbox{\tiny $\frac{1}{2}$}})
\end{equation*}
and
\begin{align*}
J'{\Delta'}^{\mbox{\tiny $\frac{1}{2}$}} \; Je_nJe_m \Lambda(e_k a (\delta^{\mbox{\tiny $\frac{1}{2}$}}e_l)) &=
\delta^{-\mbox{\tiny $\frac{1}{2}$}}\sigma_{\mbox{\tiny $\frac{i}{2}$}}(e_n) \; J \lambda^{-i/4} \delta^{\mbox{\tiny $\frac{1}{2}$}}
\sigma_{-\mbox{\tiny $\frac{i}{2}$}}(e_m) \; \Delta^{\mbox{\tiny $\frac{1}{2}$}} \Lambda(e_k a (\delta^{\mbox{\tiny $\frac{1}{2}$}}e_l)) \\
&=\delta^{-\mbox{\tiny $\frac{1}{2}$}}\sigma_{\mbox{\tiny $\frac{i}{2}$}}(e_n) \; J \sigma_{-\mbox{\tiny $\frac{i}{2}$}}(\delta^{\mbox{\tiny $\frac{1}{2}$}}e_m)J
\; \Lambda((\delta^{\mbox{\tiny $\frac{1}{2}$}}e_l)a^* e_k) \\
&=\sigma_{\mbox{\tiny $\frac{i}{2}$}}(e_n)e_l \; \Lambda(a^*(\delta^{\mbox{\tiny $\frac{1}{2}$}}e_m)e_k) \\
&=\sigma_{\mbox{\tiny $\frac{i}{2}$}}(e_n)e_l \; J \sigma_{-\mbox{\tiny $\frac{i}{2}$}}(e_m e_k)J \; \Gamma(a^*).
\end{align*}
The last expression converges to $\Gamma(a^*)=S' \Gamma(a)$, while
\begin{equation*}
Je_nJe_m \Lambda(e_k a (\delta^{\mbox{\tiny $\frac{1}{2}$}}e_l)) = Je_n
\sigma_{-\mbox{\tiny $\frac{i}{2}$}}(e_l)Je_m e_k \Gamma(a)
\end{equation*}
converges to $\Gamma(a)$ when $n,m,k,l \rightarrow \infty$. This implies
that $\Gamma(a) \in \mathcal{D}({\Delta'}^{\mbox{\tiny $\frac{1}{2}$}})$ and $J' {\Delta'}^{\mbox{\tiny $\frac{1}{2}$}}\Gamma(a) = S'\Gamma(a)$.
Thus, $S' \subset J' {\Delta'}^{\mbox{\tiny $\frac{1}{2}$}}$.
On the other hand let $\xi \in \mathcal{D}(J \delta^{-\mbox{\tiny $\frac{1}{2}$}}J
\delta^{\mbox{\tiny $\frac{1}{2}$}} \Delta^{\mbox{\tiny $\frac{1}{2}$}})$. Take a sequence $(\xi_k)$ in $\Lambda(\mathfrak{N} \cap
\mathfrak{N}^*)$ such that $\xi_k \rightarrow \xi$ and $\Delta^{\mbox{\tiny $\frac{1}{2}$}}\xi_k \rightarrow
\Delta^{\mbox{\tiny $\frac{1}{2}$}} \xi$. Let $n,m,k \in \mathbb{N}_0$. Then $Je_nJe_m \xi_k
\in \mathcal{D}({\Delta'}^{\mbox{\tiny $\frac{1}{2}$}})$ and
\begin{equation*}
{\Delta'}^{\mbox{\tiny $\frac{1}{2}$}}Je_nJ e_m \xi_k = J \lambda^{-i/8} \delta^{-\mbox{\tiny $\frac{1}{2}$}}
\sigma_{\mbox{\tiny $\frac{i}{2}$}}(e_n)J \lambda^{-i/8}\delta^{\mbox{\tiny $\frac{1}{2}$}} \sigma_{-\mbox{\tiny $\frac{i}{2}$}}(e_m)
\Delta^{\mbox{\tiny $\frac{1}{2}$}} \xi_k.
\end{equation*}
If $k \rightarrow \infty$ this converges to
\begin{equation*}
J \lambda^{-i/8} \sigma_{\mbox{\tiny $\frac{i}{2}$}}(e_n)J \lambda^{-i/8}
\sigma_{-\mbox{\tiny $\frac{i}{2}$}}(e_m) \;
J \delta^{-\mbox{\tiny $\frac{1}{2}$}}J \delta^{\mbox{\tiny $\frac{1}{2}$}} \Delta^{\mbox{\tiny $\frac{1}{2}$}} \xi
= J \sigma_{\mbox{\tiny $\frac{i}{2}$}}(e_n)J \sigma_{-\mbox{\tiny $\frac{i}{2}$}}(e_m){\Delta'}^{\mbox{\tiny $\frac{1}{2}$}} \xi.
\end{equation*}
If $n,m \rightarrow \infty$ this converges to ${\Delta'}^{\mbox{\tiny $\frac{1}{2}$}} \xi$.
Because $Je_nJe_m \xi_k \in \mathcal{D}(S')$ for all $n,m,k \in
\mathbb{N}_0$ and because of the previous proposition,
we have finally proved that $\mathcal{D}(S')$ is a core
for ${\Delta'}^{\mbox{\tiny $\frac{1}{2}$}}$.
\end{proof}
\begin{cor} \label{Cor}
We have the formula
\begin{equation*}
\sigma'_s(x) = \lambda^{\mbox{\tiny $\frac{1}{2}$} is^2} \delta^{is} \sigma_s(x)
\delta^{-is} \lambda^{-\mbox{\tiny $\frac{1}{2}$} is^2}
\end{equation*}
for all $s \in \mathbb{R}$ and all $x \in \mathcal{M}$.
\end{cor}
\begin{cor}
For all $s \in \mathbb{R}$, $x,y,z \in \mathbb{C}$ and $n \in \mathbb{N}_0$
we have
\begin{equation*}
\sigma_s(\lambda^x \delta^y \sigma_z(e_n)) = \sigma'_s(\lambda^x
\delta^y \sigma_z(e_n)).
\end{equation*}
\end{cor}
Remark that formulas become easier when $\lambda$ is affiliated with the
centre of $\mathcal{M}$, in particular when $\lambda$ is a positive real
number. In that case $\Delta'$ is the closure of $J \delta^{-1} J
\delta \Delta$ and $J'$ equals $\lambda^{i/4}J$, because $J x = x^* J$
for all $x$ belonging to the centre of $\mathcal{M}$. Moreover we have
$\sigma'_s(x) = \delta^{is} \sigma_s(x) \delta^{-is}$ in that case.
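For instance, when $\lambda$ is affiliated with the centre, applying
$Jx = x^*J$ twice gives
\begin{equation*}
J' = J \lambda^{-i/8} J \lambda^{i/8} J = \lambda^{i/8} J J \lambda^{i/8} J
= \lambda^{i/8} \lambda^{i/8} J = \lambda^{i/4} J.
\end{equation*}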
\section{A formula for $\varphi_\delta$}
Before we can prove an explicit formula for $\varphi_\delta$ we need two
lemmas. The second one will also be used in the next section.
\begin{lemma}
There exists a net $(x_l)_{l \in L}$ in $\mathfrak{N}_0 \cap \mathfrak{N}_0^*$ such that
$x_l$ is analytic w.r.t. $\sigma'$ for all $l$ and $\sigma'_z(x_l)
\rightarrow 1$ strong* and bounded for all $z \in \mathbb{C}$.
\end{lemma}
\begin{proof}
Because $\mathfrak{N}_0 \cap \mathfrak{N}_0^*$ is a strongly dense *-subalgebra of $\mathcal{M}$ we can take a
net $(a_k)_{k \in K}$ in $\mathfrak{N}_0 \cap \mathfrak{N}_0^*$ such that $a_k^*=a_k$,
$\|a_k\| \leq 1$ for all $k$ and $a_k \rightarrow 1$ strongly. Define $q_k
\in \mathcal{M}$ by
\begin{equation*}
q_k = \frac{1}{\sqrt{\pi}} \int \exp(-t^2) \sigma'_t(a_k) \; dt.
\end{equation*}
Clearly $q_k$ is analytic w.r.t. $\sigma'$ and
\begin{equation*}
\sigma'_z(q_k) = \frac{1}{\sqrt{\pi}} \int \exp(-(t-z)^2)
\sigma'_t(a_k) \; dt.
\end{equation*}
Also $\sigma'_z(q_k) \rightarrow 1$ strong* and bounded.
Define $L = \mathbb{N}_0 \times K \times \mathbb{N}_0$ with the product
order, and $x_{(n,k,m)} = e_n q_k e_m$. Then $x_l$ is analytic w.r.t.
$\sigma'$ for all $l$ and $\sigma'_z(x_l) \rightarrow 1$ strong* and
bounded for all $z \in \mathbb{C}$. Let $n,m \in \mathbb{N}_0$ and $k \in
K$. The operator $e_n q_k e_m \delta^{\mbox{\tiny $\frac{1}{2}$}}$ is bounded, with closure
\begin{equation*}
e_n q_k (\delta^{\mbox{\tiny $\frac{1}{2}$}} e_m) = e_n \frac{1}{\sqrt{\pi}} \int \exp(-t^2)
\sigma'_t(a_k)(\delta^{\mbox{\tiny $\frac{1}{2}$}} e_m) \; dt.
\end{equation*}
For all $t \in \mathbb{R}$ the integrand of this expression equals
\begin{equation*}
\exp(-t^2)\delta^{it} \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \sigma_t(\overline{a_k
\delta^{\mbox{\tiny $\frac{1}{2}$}}}(\lambda^{\mbox{\tiny $\frac{1}{2}$} (it^2 - t)} \delta^{-it} \sigma_{-t}(e_m))).
\end{equation*}
This belongs to $\mathfrak{N}$. When we apply $\Lambda$ on it we obtain
\begin{equation*}
\exp(-t^2) \delta^{it} \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \Delta^{it} J\lambda^{-\mbox{\tiny $\frac{1}{2}$} it^2}
\delta^{it} \sigma_{-\mbox{\tiny $\frac{i}{2}$} -t}(e_m)J \Lambda(\overline{a_k
\delta^{\mbox{\tiny $\frac{1}{2}$}}}).
\end{equation*}
As a function of $t$ this is weakly integrable. Because the mapping
$\Lambda$ is weak operator -- weak closed we can conclude that $e_n q_k
(\delta^{\mbox{\tiny $\frac{1}{2}$}}e_m) \in \mathfrak{N}$. This means that $e_n q_k e_m \in \mathfrak{N}_0$.
Analogously we obtain that $e_n q_k e_m \in \mathfrak{N}_0^*$.
\end{proof}
\begin{lemma} \label{lemma13}
If $a \in \mathfrak{N}'$, then $a(\delta^z e_n)$ belongs to $\mathfrak{N}$ for all
$z \in \mathbb{C}$ and $n \in \mathbb{N}_0$. We have
$\Lambda(a(\delta^ze_n)) = \Lambda'(a(\delta^{z-\mbox{\tiny $\frac{1}{2}$}}e_n))$.
\end{lemma}
\begin{proof}
Take a net $(x_l)_{l \in L}$ as in the previous lemma. Then
$a x_l (\delta^ze_n) \rightarrow a (\delta^z e_n)$ strongly. Because $x_l \in
\mathfrak{N}_0$ we have $a x_l (\delta^ze_n) \in \mathfrak{N}$ and
\begin{align*}
\Lambda(a x_l (\delta^z e_n)) &= \Lambda'(a x_l (\delta^{z-\mbox{\tiny $\frac{1}{2}$}} e_n))
\\ &= J'(\sigma'_{\mbox{\tiny $\frac{i}{2}$}}(x_l (\delta^{z-\mbox{\tiny $\frac{1}{2}$}}e_n)))^*J'\Lambda'(a) \\
& \rightarrow J'(\sigma'_{\mbox{\tiny $\frac{i}{2}$}} (\delta^{z-\mbox{\tiny $\frac{1}{2}$}}e_n))^*J'
\Lambda'(a)=\Lambda'(a(\delta^{z-\mbox{\tiny $\frac{1}{2}$}} e_n)).
\end{align*}
Because $\Lambda$ is weak operator -- weak closed, we conclude that
$a(\delta^z e_n) \in \mathfrak{N}$ and $\Lambda(a(\delta^ze_n)) = \Lambda'(a(\delta^{z-\mbox{\tiny $\frac{1}{2}$}}e_n))$.
\end{proof}
\begin{prop}
For all $x \in \mathcal{M}^+$ we have
\begin{equation*}
\varphi_\delta(x) = \lim_n \varphi ((\delta^{\mbox{\tiny $\frac{1}{2}$}}e_n) x(\delta^{\mbox{\tiny $\frac{1}{2}$}}e_n)).
\end{equation*}
\end{prop}
\begin{proof}
If $x \in \mathfrak{N}'$ we have $x(\delta^{\mbox{\tiny $\frac{1}{2}$}}e_n) \in \mathfrak{N}$ for all $n$
and
\begin{align*}
\varphi((\delta^{\mbox{\tiny $\frac{1}{2}$}}e_n) x^*x(\delta^{\mbox{\tiny $\frac{1}{2}$}}e_n)) &= \|\Lambda(x
(\delta^{\mbox{\tiny $\frac{1}{2}$}}e_n)) \|^2 = \| \Lambda'(xe_n) \|^2 \\
&= \|J' \sigma_{-\mbox{\tiny $\frac{i}{2}$}}(e_n)J' \Lambda'(x) \|^2 \rightarrow
\|\Lambda'(x)\|^2 = \varphi_\delta(x^*x).
\end{align*}
This gives the proof for all $x \in {\mathfrak{M}'}^+$. Now let $x \in
\mathcal{M}^+$ and $\varphi_\delta(x) = + \infty$. Suppose
$\varphi((\delta^{\mbox{\tiny $\frac{1}{2}$}}e_n) x(\delta^{\mbox{\tiny $\frac{1}{2}$}}e_n))$ does not converge to
$+\infty$. Then there exist an $M > 0$ and a subsequence $(e_{n_k})_k$
such that $\varphi ((\delta^{\mbox{\tiny $\frac{1}{2}$}}e_{n_k}) x(\delta^{\mbox{\tiny $\frac{1}{2}$}}e_{n_k})) \leq
M$ for all $k$. Thus $x^{\mbox{\tiny $\frac{1}{2}$}} (\delta^{\mbox{\tiny $\frac{1}{2}$}}e_{n_k}) \in \mathfrak{N}$ for all
$k$, so $x^{\mbox{\tiny $\frac{1}{2}$}}e_{n_k} \in \mathfrak{N}'$ and $\varphi_\delta(e_{n_k} x e_{n_k}) \leq
M$ for all $k$. Because $e_{n_k} x e_{n_k} \rightarrow x$ strong* and bounded, this
contradicts $\varphi_\delta(x)= +\infty$ by the $\sigma$-weak lower
semicontinuity of $\varphi_\delta$.
\end{proof}
\section{The Connes cocycle $[D \varphi_\delta : D \varphi]$}
\begin{defin}
The expression $\lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it} \Delta^{it}$ defines a
strongly continuous one-parameter group of unitaries on $\mathcal{H}$. So we
define the strictly positive operator $\rho$ such that $\rho^{it}$
equals this expression for all $t \in \mathbb{R}$.
\end{defin}
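Remark that this expression indeed defines a one-parameter group: using
$\Delta^{it} \delta^{is} = \lambda^{ist} \delta^{is} \Delta^{it}$ and
$\Delta^{it} \lambda^{is} = \lambda^{is} \Delta^{it}$ we get
\begin{equation*}
\rho^{it} \rho^{is} = \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it}
\lambda^{\mbox{\tiny $\frac{1}{2}$} is^2 + ist} \delta^{is} \Delta^{i(t+s)}
= \lambda^{\mbox{\tiny $\frac{1}{2}$} i(t+s)^2} \delta^{i(t+s)} \Delta^{i(t+s)}
= \rho^{i(t+s)}.
\end{equation*}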
Recall the notation $S(z)$ from the end of the introduction.
\begin{lemma}
If $x \in \mathfrak{N} \cap {\mathfrak{N}'}^*$, then $\Lambda(x) \in
\mathcal{D}(\rho^{\mbox{\tiny $\frac{1}{2}$}})$ and
\begin{equation*}
J \lambda^{-i/8}\rho^{\mbox{\tiny $\frac{1}{2}$}} \Lambda(x) = \Lambda'(x^*).
\end{equation*}
\end{lemma}
\begin{proof}
Let $x \in \mathfrak{N} \cap {\mathfrak{N}'}^*$ and $n,m \in \mathbb{N}_0$. Then
$e_m x \in \mathfrak{N} \cap \mathfrak{N}^*$ because of lemma~\ref{lemma13}. We
can define a function from $S(\mbox{\small $\frac{1}{2}$})$ to
$\mathcal{H}$ mapping $\alpha$ to
\begin{equation*}
\lambda^{-\mbox{\tiny $\frac{1}{2}$} i \alpha^2}
\delta^\alpha \sigma_{-i \alpha}(e_n) \Delta^\alpha \Lambda(e_mx).
\end{equation*}
This function is continuous on $S(\mbox{\small $\frac{1}{2}$})$ and analytic on its interior. It attains the
value
\begin{equation*}
\lambda^{\mbox{\tiny $\frac{1}{2}$} it^2}\delta^{it}\sigma_t(e_n) \Delta^{it} \Lambda(e_mx)
= \rho^{it} \Lambda(e_n e_m x)
\end{equation*}
at $it$. So $\Lambda(e_n e_m x) \in \mathcal{D}(\rho^{\mbox{\tiny $\frac{1}{2}$}})$ and
\begin{align*}
J\lambda^{-i/8} \rho^{\mbox{\tiny $\frac{1}{2}$}} \Lambda(e_ne_mx) &= J
\sigma_{-\mbox{\tiny $\frac{i}{2}$}}(\delta^{\mbox{\tiny $\frac{1}{2}$}}e_n) \Delta^{\mbox{\tiny $\frac{1}{2}$}} \Lambda(e_mx) \\
&=J\sigma_{-\mbox{\tiny $\frac{i}{2}$}}(\delta^{\mbox{\tiny $\frac{1}{2}$}}e_n)J \Lambda(x^*e_m) \\
&=\Lambda(x^*(\delta^{\mbox{\tiny $\frac{1}{2}$}}e_n)e_m) = \Lambda'(x^*e_ne_m) \\
&=J' \sigma'_{-\mbox{\tiny $\frac{i}{2}$}}(e_n e_m)J' \Lambda'(x^*).
\end{align*}
Because $\rho^{\mbox{\tiny $\frac{1}{2}$}}$ is closed we can conclude that $\Lambda(x) \in
\mathcal{D}(\rho^{\mbox{\tiny $\frac{1}{2}$}})$ and $J \lambda^{-i/8}\rho^{\mbox{\tiny $\frac{1}{2}$}} \Lambda(x) =
\Lambda'(x^*)$.
\end{proof}
\begin{prop} \label{Connes}
The Connes cocycle $[D\varphi_\delta : D \varphi]_t$ equals
$\lambda^{\mbox{\tiny $\frac{1}{2}$} it^2}\delta^{it}$ for all $t \in \mathbb{R}$.
\end{prop}
\begin{proof}
Let $x \in \mathfrak{N}^* \cap \mathfrak{N}'$ and $y \in \mathfrak{N} \cap {\mathfrak{N}'}^*$.
Denote $u_t = \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it}$. Define
$F(\alpha)=\langle \rho^\alpha \Lambda(y), \Lambda(x^*) \rangle$ when
$\alpha \in \mathbb{C}$ and $0 \leq \text{Re}(\alpha) \leq \mbox{\small
$\frac{1}{2}$}$. Define $G(\alpha) = \langle \lambda^{i/8} J
\Lambda'(y^*), \rho^{\bar{\alpha}-1} \lambda^{i/8} J \Lambda'(x)
\rangle$ when $\alpha \in \mathbb{C}$ and $\mbox{\small
$\frac{1}{2}$} \leq \text{Re}(\alpha) \leq 1$.
Because of the previous lemma $F$ and $G$ are both well defined,
continuous on their domains and analytic in the interior. For any $t
\in \mathbb{R}$ we have
\begin{align*}
F(it) &= \langle \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it} \Delta^{it}
\Lambda(y),\Lambda(x^*) \rangle
=\varphi (x u_t \sigma_t(y)) \\
F(it + \mbox{\small
$\frac{1}{2}$}) &=\langle \rho^{it} \lambda^{i/8} J
\Lambda'(y^*),\Lambda(x^*) \rangle \\
G(it + \mbox{\small
$\frac{1}{2}$}) &= \langle \lambda^{i/8} J \Lambda'(y^*), \rho^{-it}
\Lambda(x^*) \rangle = F(it + \mbox{\small
$\frac{1}{2}$}) \\
G(it + 1) &= \langle J \Lambda'(y^*), \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{-it}
\Delta^{-it} J \Lambda'(x) \rangle
=\langle \Lambda'(x),J \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2}\delta^{it} J \Delta^{it}
\Lambda'(y^*) \rangle \\
&=\langle \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it} \Lambda'(x), {\Delta'}^{it}
\Lambda'(y^*) \rangle = \varphi_\delta(\sigma'_t(y)u_t x).
\end{align*}
So we can glue together the functions $F$ and $G$ and define
$H(\alpha) = F(\alpha)$ when $\alpha$ belongs to the domain of $F$
and $H(\alpha) = G(\alpha)$ when $\alpha$ belongs to the domain of
$G$.
Then $H$ is continuous on $S(1)$ and analytic on its interior. We have
\begin{equation*}
H(it)=\varphi(x u_t \sigma_t(y)) \quad\text{and}\quad H(it+1) =
\varphi_\delta(\sigma'_t(y)u_t x)
\end{equation*}
for all $t \in \mathbb{R}$. Because it is easily verified that
\begin{align*}
u_{t+s} &=u_t \sigma_t(u_s) \\ u_{-t} &=\sigma_{-t}(u_t^*) \\
\sigma'_t(x) &=u_t \sigma_t(x) u_t^*
\end{align*}
for all $s,t \in \mathbb{R}$ and $x \in \mathcal{M}$, we conclude that $u_t =
[D \varphi_\delta : D \varphi]_t$ for all $t$.
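For instance, the first relation follows from $\sigma_t(\delta^{is}) =
\lambda^{ist} \delta^{is}$ and $\sigma_t(\lambda^{is}) = \lambda^{is}$:
\begin{equation*}
u_t \sigma_t(u_s) = \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it} \;
\lambda^{\mbox{\tiny $\frac{1}{2}$} is^2 + ist} \delta^{is}
= \lambda^{\mbox{\tiny $\frac{1}{2}$} i(t+s)^2} \delta^{i(t+s)} = u_{t+s}.
\end{equation*}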
\end{proof}
The previous proposition also implies that the operators $\lambda$
and $\delta$ are uniquely determined by $\varphi_\delta$. If we put $u_t
= [D \varphi_\delta : D \varphi]_t$ we have $\lambda^{it} = u_t^* u_1^* u_{t+1}$
and $\delta^{it} = u_t \lambda^{- \mbox{\tiny $\frac{1}{2}$} i t^2}$ for all $t \in \mathbb{R}$,
which proves our claim.
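Indeed, since $\lambda$ and $\delta$ strongly commute we have
\begin{equation*}
u_t^* u_1^* u_{t+1} = \lambda^{-\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{-it} \;
\lambda^{-\mbox{\tiny $\frac{1}{2}$} i} \delta^{-i} \;
\lambda^{\mbox{\tiny $\frac{1}{2}$} i(t+1)^2} \delta^{i(t+1)} = \lambda^{it},
\end{equation*}
because $-\mbox{\small $\frac{1}{2}$} t^2 - \mbox{\small $\frac{1}{2}$}
+ \mbox{\small $\frac{1}{2}$} (t+1)^2 = t$.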
\section{Three Radon-Nikodym theorems}
In this paragraph we denote by $(\sigma_t^\varphi)$ the modular
automorphism group of a n.s.f. weight $\varphi$ on a von Neumann algebra. We
denote by $\mathfrak{N}_\varphi,\mathfrak{M}_\varphi,\Lambda_\varphi,J_\varphi$ and
$\Delta_\varphi$ the same objects as defined in the introduction but we
add a subscript $\varphi$ for the sake of clarity.
\begin{prop} \label{Radon}
Let $\psi$ and $\varphi$ be two n.s.f. weights on a von Neumann algebra
$\mathcal{M}$. Let $\lambda$ and $\delta$ be two strongly commuting,
strictly positive operators affiliated with $\mathcal{M}$. Then the
following are equivalent
\begin{enumerate}
\item $[ D \psi : D \varphi]_t = \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it}
\quad\text{for all}\quad t \in \mathbb{R}.$ \label{one}
\item $\sigma_t^\varphi(\delta^{is}) = \lambda^{ist}\delta^{is}$ for all $s,t
\in \mathbb{R} \quad\text{and}\quad \psi = \varphi_\delta$. \label{two}
\end{enumerate}
\end{prop}
\begin{proof}
The implication \ref{two}) $\Rightarrow$ \ref{one}) follows from proposition~\ref{Connes}.
To prove \ref{one}) $\Rightarrow$ \ref{two}) denote $u_t = [D \psi : D \varphi]_t$. Let
$s,t \in \mathbb{R}$. Then
\begin{equation*}
\lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \lambda^{\mbox{\tiny $\frac{1}{2}$} is^2} \lambda^{ist} \delta^{it}
\delta^{is} = u_{t+s} = u_t \sigma_t^\varphi(u_s) = \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2}\delta^{it}
\sigma_t^\varphi (\lambda^{\mbox{\tiny $\frac{1}{2}$} is^2} \delta^{is}).
\end{equation*}
This implies that
\begin{equation} \label{eerste}
\lambda^{\mbox{\tiny $\frac{1}{2}$} is^2}\lambda^{ist} \delta^{is} =
\sigma_t^\varphi(\lambda^{\mbox{\tiny $\frac{1}{2}$} is^2}\delta^{is}) \quad\text{for all} \quad s,t
\in \mathbb{R}.
\end{equation}
It follows that for all $r,s,t \in \mathbb{R}$
\begin{equation} \label{tweede}
\sigma_r^\varphi(\lambda^{\mbox{\tiny $\frac{1}{2}$} is^2 + ist}) \sigma_r^\varphi(\delta^{is}) =
\sigma_{r+t}^\varphi(\lambda^{\mbox{\tiny $\frac{1}{2}$} is^2} \delta^{is}) =
\lambda^{\mbox{\tiny $\frac{1}{2}$} is^2}\lambda^{is(r+t)} \delta^{is}.
\end{equation}
But equation~(\ref{eerste}) implies that $\sigma_r^\varphi(\delta^{is}) =
\sigma_r^\varphi(\lambda^{-\mbox{\tiny $\frac{1}{2}$} is^2}) \lambda^{\mbox{\tiny $\frac{1}{2}$} is^2}
\lambda^{isr}\delta^{is}$, so by equation~(\ref{tweede}) we get
\begin{equation*}
\sigma_r^\varphi(\lambda^{ist}) \lambda^{\mbox{\tiny $\frac{1}{2}$} is^2} \lambda^{isr} \delta^{is}
= \lambda^{\mbox{\tiny $\frac{1}{2}$} is^2} \lambda^{is(r+t)} \delta^{is}.
\end{equation*}
This gives us $\sigma_r^\varphi(\lambda^{ist}) = \lambda^{ist}$ for all $r,s,t
\in \mathbb{R}$. Then equation~(\ref{eerste}) implies that
$\sigma_t^\varphi(\delta^{is}) = \lambda^{ist}\delta^{is}$ for all $s,t \in
\mathbb{R}$. So we can construct the weight $\varphi_\delta$, such that $[D \varphi_\delta
: D \varphi]_t = \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2}\delta^{it}$. But then $\varphi_\delta = \psi$.
\end{proof}
We will now consider the more specific case in which $\lambda$ is
affiliated with the centre of $\mathcal{M}$. We will prove that in this way
we obtain exactly those weights whose modular automorphism group
commutes with that of $\varphi$.
\begin{prop}
Let $\varphi$ and $\psi$ be two n.s.f. weights on a von Neumann algebra
$\mathcal{M}$. Then the following are equivalent.
\begin{enumerate}
\item The modular automorphism groups $\sigma^\psi$ and $\sigma^\varphi$
commute. \label{rn1}
\item There exist a strictly positive operator $\delta$ affiliated
with $\mathcal{M}$ and a strictly positive operator $\lambda$ affiliated
with the centre of $\mathcal{M}$ such that
$\sigma_s^\varphi(\delta^{it})=\lambda^{ist} \delta^{it}$ for all $s,t
\in \mathbb{R}$ and such that $\psi = \varphi_\delta$. \label{rn2}
\item There exist a strictly positive operator $\delta$ affiliated
with $\mathcal{M}$ and a strictly positive operator $\lambda$ affiliated
with the centre of $\mathcal{M}$ such that $[D \psi : D \varphi]_t =
\lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it}$ for all $t \in \mathbb{R}$. \label{rn3}
\end{enumerate}
\end{prop}
\begin{proof}
The equivalence of~\ref{rn2}) and \ref{rn3}) follows from
proposition~\ref{Radon}. The implication
\ref{rn2})~$\Rightarrow$~\ref{rn1}) follows from corollary~\ref{Cor} by
a direct computation. We will prove the implication
\ref{rn1})~$\Rightarrow$~\ref{rn3}). Denote $u_t = [D\psi : D
\varphi]_t$ for all $t \in \mathbb{R}$ and denote by $\mathcal{Z}$ the centre of
$\mathcal{M}$. For all $x \in \mathcal{M}$ and $s,t \in \mathbb{R}$ we have
\begin{equation*}
\sigma_t^\psi(\sigma_s^\varphi(x)) = u_t \sigma_{t+s}^\varphi(x) u_t^*
\quad\text{and}\quad \sigma_s^\varphi(\sigma_t^\psi(x)) =
\sigma_s^\varphi(u_t) \sigma_{t+s}^\varphi(x) \sigma_s^\varphi(u_t^*).
\end{equation*}
Thus we can conclude that $u_t^* \sigma_s^\varphi(u_t) \in \mathcal{Z}$ for
all $s, t \in \mathbb{R}$. But then $u_t^* u_s^* u_{s+t} \in \mathcal{Z}$ for
all $s, t \in \mathbb{R}$. Because $\sigma^\varphi$ acts trivially on
$\mathcal{Z}$ we get
\begin{equation*}
u_t^* \sigma_s^\varphi(u_t) = \sigma_{-t}^\varphi(u_t^* \sigma_s^\varphi(u_t))
=u_{-t} \sigma_{s-t}^\varphi(u_t) = u_{-t}u_{s-t}^*u_s \in \mathcal{Z}.
\end{equation*}
We can conclude that $u_t u_{s+t}^* u_s \in \mathcal{Z}$ for all $s,t \in
\mathbb{R}$. Then we define for $(s,t) \in \mathbb{R}^2$, $w(s,t)=u_t^* u_s^*
u_{s+t}$. The function $w$ is strong* continuous from $\mathbb{R}^2$ to
the unitaries of $\mathcal{Z}$. Let $s,s',t \in \mathbb{R}$. Because of the
previous remarks we can make the following calculation.
\begin{align*}
w(s+s',t) &= u_t^* \: u_{s+s'}^* \: u_{s+s'+t} \\
&= u_t^* \: \sigma_{s'}^\varphi(u_s^*) \: u_{s'}^* \: u_{s'+t} \: u_s \: (u_s^*
\: \sigma_{s'+t}^\varphi(u_s)) \\
&=u_t^* \: u_s^* \: \sigma_{s'}^\varphi(\sigma_t^\varphi(u_s) \: u_s^*) \: u_{s'}^*
\: u_{s'+t} \: u_s \\
&=u_t^* \: u_s^* \: \sigma_{s'}^\varphi(u_t^* \: u_{s+t} \: u_s^*) \: u_{s'}^*
\: u_{s'+t} \: u_s \\
&=u_t^* \: u_s^* \: (u_{s'}^* \: u_{s'+t} \: u_t^*) \: u_{s+t} \: (u_t^* \: u_t) \\
&=u_t^* \: u_s^* \: u_{s+t} \: u_t^* \: u_{s'}^* \: u_{s'+t} = w(s,t) \: w(s',t).
\end{align*}
Next let $s,t,t' \in \mathbb{R}$. We have
\begin{align*}
w(s,t+t') &=u_{t+t'}^* \: u_s^* \: u_{s+t+t'} \\
&=(\sigma_{t'}^\varphi(u_t^*) \: u_t) \: u_t^* \: u_{t'}^* \: u_s^* \: u_{t'}
\: \sigma_{t'}^\varphi(u_{s+t}) \\
&=u_t^* \: u_{t'}^* \: u_s^* \: u_{t'} \: \sigma_{t'}^\varphi(u_{s+t} \: u_t^*) \: u_t \\
&=u_t^* \: u_{t'}^* \: u_s^* \: u_{t'} \: \sigma_{t'}^\varphi(u_s) \: \sigma_{t'}^\varphi
(u_s^* \: u_{s+t} \: u_t^*) \: u_t \\
&=u_t^* \: u_{t'}^* \: u_s^* \: u_{t'} \: \sigma_{t'}^\varphi(u_s) \: u_s^* \: u_{s+t} \\
&=u_t^* \: (u_{t'}^* \: u_s^* \: u_{s+t'}) \: u_s^* \: u_{s+t} \\
&=u_t^* \: u_s^* \: u_{s+t} \: u_{t'}^* \: u_s^* \: u_{s+t'} = w(s,t) \: w(s,t').
\end{align*}
For each $t \in \mathbb{R}$ we can now take a strictly positive operator
$\lambda_t$ affiliated with $\mathcal{Z}$ such that $\lambda_t^{is}
=w(s,t)$ for all $s \in \mathbb{R}$. Let $t,t' \in \mathbb{R}$. Because
$\lambda_t$ and $\lambda_{t'}$ are strongly commuting we can write
\begin{equation*}
(\lambda_t \hat{\cdot} \lambda_{t'})^{is} = \lambda_t^{is}
\lambda_{t'}^{is} = w(s,t)w(s,t') = \lambda_{t+t'}^{is}
\end{equation*}
for all $s \in \mathbb{R}$, where $\lambda_t \hat{\cdot} \lambda_{t'}$
denotes the closure of $\lambda_t \lambda_{t'}$. It follows that
$\lambda_{t+t'} = \lambda_t \hat{\cdot} \lambda_{t'}$. Put
$\lambda=\lambda_1$. It follows from functional calculus that
$\lambda^q = \lambda_q$ for all $q \in \mathbb{Q}$. Then we have
\begin{equation*}
\lambda^{isq} = (\lambda^q)^{is} = \lambda_q^{is} = w(s,q)
\end{equation*}
for all $s \in \mathbb{R}$ and $q \in \mathbb{Q}$. Because of strong*
continuity we have $\lambda^{ist} = w(s,t)$ and thus $u_{s+t} =
\lambda^{ist}u_t u_s$ for all $s,t \in \mathbb{R}$. Now we can easily
verify that $v_t = \lambda^{-\mbox{\tiny $\frac{1}{2}$} it^2} u_t$ defines a strong*
continuous one-parameter group of unitaries in $\mathcal{M}$. So we can
take a strictly positive operator $\delta$ affiliated with $\mathcal{M}$
such that $[D \psi : D \varphi]_t = u_t = \lambda^{\mbox{\tiny $\frac{1}{2}$} i t^2}
\delta^{it}$ for all $t \in \mathbb{R}$. This gives us~\ref{rn3}).
\end{proof}
Now we will look at the even more specific case $\lambda \in
\mathbb{R}_0^+$, in which the following proposition becomes meaningful.
\begin{prop}
Let $\varphi$ be a n.s.f. weight on a von Neumann algebra $\mathcal{M}$. Let
$\delta$ be a strictly positive operator affiliated with $\mathcal{M}$ and
$\lambda \in \mathbb{R}_0^+$ such that $\sigma_t^\varphi(\delta^{is}) =
\lambda^{ist} \delta^{is}$ for all $s,t \in \mathbb{R}$. Then we have
\begin{equation*}
\varphi_\delta \circ \sigma_t^\varphi = \lambda^{-t} \varphi_\delta \qquad\text{and}\qquad \varphi
\circ \sigma_t^{\varphi_\delta} = \lambda^t \varphi \qquad\text{for all}\quad t
\in \mathbb{R}.
\end{equation*}
\end{prop}
\begin{proof}
Let $a \in \mathfrak{N}_{\varphi_\delta}$ and $t \in \mathbb{R}$. Then $\sigma_t^\varphi(a) = \delta^{-it} \sigma_t^{\varphi_\delta}(a)
\delta^{it}$. This belongs to $\mathfrak{N}_{\varphi_\delta}$ because $\delta^{it}$ is
analytic w.r.t. $\sigma^{\varphi_\delta}$, and we have
\begin{equation*}
\Lambda_{\varphi_\delta}(\sigma_t^\varphi(a)) = \delta^{-it} J_{\varphi_\delta} \lambda^{-\mbox{\tiny $\frac{1}{2}$} t} \delta^{-it}
J_{\varphi_\delta} \Delta_{\varphi_\delta}^{it} \Lambda_{\varphi_\delta}(a).
\end{equation*}
So we get
\begin{equation*}
\varphi_\delta(\sigma_t^\varphi(a^*a)) = \lambda^{-t} \varphi_\delta(a^*a) \quad\text{for
all}\quad t \in \mathbb{R}.
\end{equation*}
Now, the conclusion follows easily. The second statement is
proved analogously.
\end{proof}
After stating a lemma we will prove our third Radon-Nikodym theorem.
\begin{lemma}
Let $\varphi$ be a n.s.f. weight on a von Neumann algebra $\mathcal{M}$ and $a \in \mathcal{M}$. If
$\mathfrak{N}_\varphi a \subset \mathfrak{N}_\varphi$, $\mathfrak{N}_\varphi a^* \subset \mathfrak{N}_\varphi$
and if there exists a $\lambda \in \mathbb{R}_0^+$ such that
$\varphi(ax)=\lambda \varphi(xa)$ for all $x \in \mathfrak{M}_\varphi$, then
$\sigma_t^\varphi(a)=\lambda^{it} a$ for all $t \in \mathbb{R}$.
\end{lemma}
\begin{proof}
The proof of Result~6.29 in \cite{kustermans1} can be taken over
literally. Also a slight adaptation of the proof of theorem~3.6 in
\cite{pedersen} yields the result.
\end{proof}
\begin{prop}
Let $\psi$ and $\varphi$ be two n.s.f. weights on a von Neumann algebra
$\mathcal{M}$. Let $\lambda \in \mathbb{R}_0^+$. The following statements are
equivalent.
\begin{enumerate}
\item For all $t \in \mathbb{R}$ we have $\varphi \circ \sigma_t^\psi =
\lambda^t \varphi$. \label{nr1}
\item For all $t \in \mathbb{R}$ we have $\psi \circ \sigma_t^\varphi =
\lambda^{-t} \psi$. \label{nr2}
\item There exists a strictly positive operator $\delta$ affiliated
with $\mathcal{M}$ such that $\sigma_t^\varphi(\delta^{is}) = \lambda^{ist}
\delta^{is}$ for all $s,t \in \mathbb{R}$ and such that $\psi = \varphi_\delta$.
\label{nr3}
\item There exists a strictly positive operator $\delta$ affiliated
with $\mathcal{M}$ such that $[D\psi : D \varphi]_t = \lambda^{\mbox{\tiny $\frac{1}{2}$} it^2}
\delta^{it}$ for all $t \in \mathbb{R}$. \label{nr4}
\end{enumerate}
\end{prop}
\begin{proof}
We have already proven the equivalence of \ref{nr3}) and \ref{nr4}) and
the implications \ref{nr3})~$\Rightarrow$~\ref{nr2}) and
\ref{nr3})~$\Rightarrow$~\ref{nr1}). Suppose now that \ref{nr1}) is valid.
Put $u_t = [D\psi : D \varphi]_t$ and let $x \in \mathcal{M}^+$. Then we have
\begin{align*}
\varphi(u_t^*xu_t) &= \varphi(\sigma_{-t}^\varphi(u_t^*) \sigma_{-t}^\varphi(x)
\sigma_{-t}^\varphi(u_t)) \\&=\varphi(u_{-t} \sigma_{-t}^\varphi(x) u_{-t}^*)
= \varphi(\sigma_{-t}^\psi(x)) = \lambda^{-t} \varphi(x).
\end{align*}
So we have $\mathfrak{N}_\varphi u_t \subset \mathfrak{N}_\varphi$ for all $t \in
\mathbb{R}$, and thus $\mathfrak{N}_\varphi u_t^* = \sigma_t^\varphi(\mathfrak{N}_\varphi
u_{-t}) \subset \mathfrak{N}_\varphi$ for all $t \in \mathbb{R}$. Then we get for
every $x \in \mathfrak{M}_\varphi$ that
\begin{equation*}
\varphi(xu_t) = \varphi(u_t^*u_t x u_t) = \lambda^{-t} \varphi(u_t x).
\end{equation*}
From the previous lemma we can conclude that $\sigma_s^\varphi(u_t) =
\lambda^{ist} u_t$ for all $s,t \in \mathbb{R}$. Put $v_t =
\lambda^{-\mbox{\tiny $\frac{1}{2}$} it^2} u_t \in \mathcal{M}$. Then we have that $t \mapsto v_t$
is a strongly continuous one-parameter group of unitaries. Define
$\delta$ such that $\delta^{it} = v_t$ for all $t \in \mathbb{R}$. So
$\delta$ is affiliated with $\mathcal{M}$ and $[D\psi : D \varphi]_t =
\lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it}$ and this gives us \ref{nr4}). Finally
suppose \ref{nr2}) is valid. From the proven implication
\ref{nr1})~$\Rightarrow$~\ref{nr4}) we get the existence of a strictly
positive operator $\delta$ affiliated with $\mathcal{M}$ such that $[D\varphi
: D \psi]_t = \lambda^{-\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it}$ for all $t \in \mathbb{R}$.
Changing $\delta$ to $\delta^{-1}$ we get $[D\psi : D\varphi]_t =
\lambda^{\mbox{\tiny $\frac{1}{2}$} it^2} \delta^{it}$ for all $t \in \mathbb{R}$. This gives us
again \ref{nr4}).
\end{proof}
We conclude this paper by giving an example that shows all situations
can really occur: we can have $\sigma_t^\varphi(\delta^{is}) =
\lambda^{ist} \delta^{is}$ with $\lambda$ and $\delta$ strongly
commuting but $\lambda$ not central, with $\lambda$ central but not
scalar, and with $\lambda$ scalar. Indeed, define $\mathcal{M}_1 =
B(L^2(\mathbb{R}))$ and define the selfadjoint operators $P$ and $Q$ on
the obvious domains by
\begin{equation*}
(P \xi)(\gamma) = \gamma \xi(\gamma) \quad \text{and} \quad (Q
\xi)(\gamma) = -i \xi'(\gamma).
\end{equation*}
Put $H = \exp(P)$ and $K_1 = \exp(Q)$ and denote by $\operatorname{Tr}$ the
canonical trace on $\mathcal{M}_1$. Remark that $\operatorname{Tr}$ has a trivial modular
automorphism group, so that we can define $\varphi_1 = \operatorname{Tr}_H$ as in definition~\ref{def15}. An easy
calculation yields $\sigma_t^{\varphi_1}(K_1^{is}) = H^{it} K_1^{is} H^{-it}= e^{-its}
K_1^{is}$, where $e$ denotes the well-known real number $e$. This gives
an example of our third case. Define
$\mathcal{M}_2$ as the von Neumann algebra of two by two matrices over
$\mathcal{M}_1$ and $\varphi_2$ as the balanced weight
$\theta(\varphi_1,\varphi_1)$ (see \cite{stratila2}). Define $K_2 = \left( \begin{smallmatrix}
K_1 & 0 \\ 0 & K_1^{-1} \end{smallmatrix} \right)$.
We easily have $\sigma_t^{\varphi_2}(K_2^{is}) =
\left( \begin{smallmatrix}
e^{-1} & 0 \\ 0 & e \end{smallmatrix} \right)^{its} K_2^{is}$,
which gives an example of our first
case because $\mathcal{M}_2$ is a factor. Define $\mathcal{M}_3$ as the
diagonal matrices in $\mathcal{M}_2$. We can restrict $\varphi_2$ to
$\mathcal{M}_3$ and keep $K_2$. We have the same formula as above, obtaining
in this way an example of our second case, $\left( \begin{smallmatrix}
e^{-1} & 0 \\ 0 & e \end{smallmatrix} \right)$ now being
central.
\section*{Introduction}
Since the appearance of the theory of unitary symmetry, and later of the quark model,
the problem of describing the baryon octet magnetic moments has attracted many
theoretical efforts. It is well known that the traditional unitary symmetry
model with only two parameters \cite{Gl} describes the experimental data at
a qualitative level.
Quark models (cf., e.g., \cite{Morp}) usually introduce three or more
parameters and are able to describe the baryon magnetic moments
to better than $10\%$. In general, these quark models have
strongly improved the agreement with experimental data.
However, experimental data have now reached the
$1\%$ accuracy level \cite{Mont}, which forces theoreticians to
upgrade the precision of the theoretical and phenomenological
description of the baryon magnetic moments. Many interesting models
have been developed in order to solve this long-standing problem from
various points of view, and we are able to cite only
some of them here [4-20].
Recently, baryon magnetic moments have been analyzed within two
independent chiral models \cite{Chang} and \cite{Kim}. It has been shown
there that the structure of the baryon magnetic moments
is based on unitary symmetry, and that the chiral model ChPT \cite{Chang}
and the quark soliton model \cite{Kim} just
describe the way in which this symmetry is broken.
We will try to establish a relation between these two models.
Besides, we shall show that a phenomenological model generated by the
unitary symmetry approach can be formulated in terms
of the electromagnetic baryon current, which proves to be quite
close to these two models.
\section{Chiral model ChPT for the baryon magnetic moments}
Let us briefly recall the ChPT model \cite{Chang} for the baryon octet
magnetic moments. It has been assumed that the leading $SU(3)_{f}$
breaking corrections to the magnetic moments have the same chiral
transformation properties as the strange quark mass operator, and that
the corresponding coefficients are of the order $m_{s}/\Lambda_{\chi}$,
with $m_{s}$ and $\Lambda_{\chi}$ being the strange quark mass and the chiral
symmetry breaking scale, respectively.
The expressions for the baryon octet
magnetic moments in this model read \cite{Chang}
\begin{eqnarray}
\mu(p)=\frac{1}{3}(b_{1}+\alpha_{4})+(b_{2}+\alpha_{2})+\alpha_{1}+
\frac{1}{3}\alpha_{3}-\frac{1}{3}\beta_{1}
\nonumber\\
\mu(n)=-\frac{2}{3}(b_{1}+\alpha_{4})-\frac{2}{3}\alpha_{3}-
\frac{1}{3}\beta_{1}
\nonumber\\
\mu(\Sigma^{+})=\frac{1}{3}(b_{1}+\alpha_{4})+(b_{2}+\alpha_{2})-\alpha_{2}-
\frac{1}{3}\alpha_{4}-\frac{1}{3}\beta_{1}
\nonumber\\
\mu(\Sigma^{-})=\frac{1}{3}(b_{1}+\alpha_{4})-(b_{2}+\alpha_{2})+\alpha_{2}-
\frac{1}{3}\alpha_{4}-\frac{1}{3}\beta_{1}
\nonumber\\
\mu(\Xi^{0})=-\frac{2}{3}(b_{1}+\alpha_{4})+\frac{2}{3}\alpha_{3}-
\frac{1}{3}\beta_{1}
\nonumber\\
\mu(\Xi^{-})=\frac{1}{3}(b_{1}+\alpha_{4})-(b_{2}+\alpha_{2})+\alpha_{1}-
\frac{1}{3}\alpha_{3}-\frac{1}{3}\beta_{1}
\nonumber\\
\mu(\Lambda^{0})=-\frac{1}{3}(b_{1}+\alpha_{4})-
\frac{5}{9}\alpha_{4}-\frac{1}{3}\beta_{1}
\label{chang}
\end{eqnarray}
These expressions can easily be rewritten in the form demonstrating
that the chiral model ChPT \cite{Chang} is introducing the
unitary symmetry breaking terms in some definite way
\begin{eqnarray}
\mu(p)=F_{N}+\frac{1}{3}D_{N}-\frac{1}{3} \beta_{1} \qquad
\mu(n)=-\frac{2}{3}D_{N}-\frac{1}{3} \beta_{1}
\nonumber\\
\mu(\Sigma^{+})=F_{\Sigma}+\frac{1}{3}D_{\Sigma}-\frac{1}{3} \beta_{1}
\qquad
\mu(\Sigma^{-})=-F_{\Sigma}+\frac{1}{3}D_{\Sigma}-\frac{1}{3} \beta_{1}
\nonumber\\
\mu(\Xi^{0})=-\frac{2}{3}D_{\Xi}-\frac{1}{3} \beta_{1} \qquad
\mu(\Xi^{-})=-F_{\Xi}+\frac{1}{3}D_{\Xi}-\frac{1}{3} \beta_{1}
\nonumber\\
\mu(\Lambda^{0})=-\frac{1}{3}D_{\Lambda}-\frac{1}{3} \beta_{1}
\end{eqnarray}
where
$F_{N}=b_{2}+\alpha_{1}+\alpha_{2},\qquad F_{\Sigma}=b_{2}, \qquad
F_{\Xi}=b_{2}-\alpha_{1}+\alpha_{2},$
$D_{N}=b_{1}+\alpha_{3}+\alpha_{4}, \qquad D_{\Sigma}=b_{1}, \qquad
D_{\Xi}=b_{1}-\alpha_{3}+\alpha_{4}, \qquad D_{\Lambda}=
b_{1}+\frac{8}{3}\alpha_{4}$.
The main difference from other models introducing symmetry breaking
mechanisms of various kinds consists in the unity operator term,
which explicitly breaks the octet form of the electromagnetic current
in $SU(3)_{f}$. The renormalization of the constants $F$ and $D$
can be related to the so-called middle-strong interaction contribution,
which follows from the fact that for the baryons B(qq,q')
the expressions can be reduced to the form characteristic of the mass breaking
terms of unitary symmetry
\begin{eqnarray}
\mu(p)=F+\frac{1}{3}D+g_{1}+\beta +\pi_{N} \nonumber\\
\mu(n)=-\frac{2}{3}D+g_{1}+\beta -\pi_{N}\nonumber\\
\mu(\Sigma^{+})=F+\frac{1}{3}D+\beta \nonumber\\
\mu(\Sigma^{-})=-F+\frac{1}{3}D+\beta \nonumber\\
\mu(\Xi^{0})=-\frac{2}{3}D+g_{2}+\beta \nonumber\\
\mu(\Xi^{-})=-F+\frac{1}{3}D+g_{2}+\beta
\label{fdmass}
\end{eqnarray}
$F=b_{2} ,\quad
D=b_{1}+\alpha_{1}-\alpha_{2}-\alpha_{3}+\alpha_{4}$,
$g_{1}=\alpha_{1}-\frac{2}{3}\alpha_{3}+
\frac{1}{3}\alpha_{4},\quad
g_{2}=\alpha_{1}-\alpha_{2}-\frac{1}{3}\alpha_{3}+
\frac{1}{3}\alpha_{4}$,
$\beta=-\frac{1}{3}\beta_{1}+\frac{1}{3}(-\alpha_{1}+
\alpha_{2}+\alpha_{3}-\alpha_{4}),
\quad \pi_{N}=\alpha_{2}+\alpha_{3}$.
The results of Eq.(\ref{fdmass}) can be obtained from the following
electromagnetic current (we disregard space-time indices):
\begin{eqnarray}
J^{e-m,symm1}=-F(\overline{B}^{\gamma}_{1}B_{\gamma}^{1}-
\overline{B}^{1}_{\gamma}B_{1}^{\gamma})+
D(\overline{B}^{\gamma}_{1}B_{\gamma}^{1}+
\overline{B}^{1}_{\gamma}B_{1}^{\gamma})+\nonumber\\
g_{1}\overline{B}_{3}^{\gamma}B^{3}_{\gamma}+
g_{2}\overline{B}_{\gamma}^{3}B^{\gamma}_{3}+(\beta-
\frac{2}{3}D)Sp(\overline{B}^{\gamma}_{\beta}B_{\gamma}^{\beta})+
\pi_{N}(\overline{B}_{3}^{1}B_{1}^{3}-\overline{B}_{3}^{2}B_{2}^{3})
\label{fdch}
\end{eqnarray}
Here $B^{\gamma}_{\eta}$ is a baryon octet, $B^{3}_{1}=p$,
$B^{2}_{3}=\Xi^{0}$ etc.
For the magnetic moment of the $\Lambda$ hyperon this current gives:
\begin{equation}
\mu(\Lambda)^{symm1}=-\frac{1}{3}b_{1}+
\frac{2}{3}\alpha_{1}-\frac{2}{9}\alpha_{4}-\frac{1}{3}\beta_{1},
\end{equation}
so that
\begin{equation}
\mu(\Lambda)^{symm1}-\mu(\Lambda)^{ChPT}=\frac{2}{3}(\alpha_{1}+\alpha_{4}).
\end{equation}
For $\pi_{N}=0$ the current (\ref{fdch}) yields a direct sum of the
traditional electromagnetic current of the theory of
unitary symmetry \cite{Gl} and the traditional baryon current leading
to the Gell-Mann-Okubo mass relation \cite{GMO}. The parameter $\pi_{N}$
defines the value of contribution of the pion current term
and is characteristic of many versions of
the chiral models. Note that for
$\alpha_{1}=-\alpha_{4}$ the ChPT model results reduce to those given
by the current of Eq.(\ref{fdch}). Since in \cite{Chang}
$\alpha_{1}=0.32$ and $\alpha_{4}=-0.31$ (in $GeV^{-1}$), it turns out that
the chiral model ChPT \cite{Chang} results are in fact reproduced
by the phenomenological unitary current (\ref{fdch}).
\section{Quark soliton model $\chi QSM $ and unitary symmetry}
$\quad$ In \cite{NJL},\cite{Kim} magnetic moments of baryons were studied
within the chiral quark soliton model. In this model, also known as
the semibosonized Nambu-Jona-Lasinio model, the baryon can be
considered as $N_{c}$ valence quarks coupled to the polarized Dirac sea,
bound by a nontrivial chiral background hedgehog field in the Hartree-Fock
approximation \cite{Kim}. The magnetic moments of the baryons were written in the
form \cite{Kim}
\begin{equation}
\left(\begin{array}{ccccccc}\mu(p)\\ \mu(n)\\ \mu(\Lambda)\\
\mu(\Sigma^{+})\\ \mu(\Sigma^{-})\\ \mu(\Xi^{0})\\ \mu(\Xi^{-})
\end{array}\right)=
\left(\begin{array}{ccccccc}-8&4&-8&-5&-1&0&8\\6&2&14&5&1&2&4\\
3&1&-9&0&0&0&9\\-8&4&-4&-1&1&0&4\\2&-6&14&5&-1&2&4\\
6&2&-4&-1&-1&0&4\\2&-6&-8&-5&1&0&8\end{array}\right)
\left(\begin{array}{ccccccc}v\\w\\x\\y\\z\\p\\q\end{array}\right)
\label{kim}
\end{equation}
Here the parameters $v$ and $w$ are linearly related to the usual $F,D$
coupling constants of the unitary symmetry approach. The parameters
$x,y,z,p,q \simeq m_{s}$ are specific to the model. Upon algebraic
transformations, the expressions for the six baryons $B(qq,q^{'})$
can be rewritten as
\begin{eqnarray}
\mu(p)=F+\frac{1}{3}D-f_{1}+T-3z\nonumber\\
\mu(n)=-\frac{2}{3}D-f_{1}+T+3z\nonumber\\
\mu(\Sigma^{+})=F+\frac{1}{3}D+T \nonumber\\
\mu(\Sigma^{-})=-F+\frac{1}{3}D+T \nonumber\\
\mu(\Xi^{0})=-\frac{2}{3}D-f_{2}+T \nonumber\\
\mu(\Xi^{-})=-F+\frac{1}{3}D-f_{2}+T
\label{alkim}
\end{eqnarray}
where
\begin{eqnarray}
F=-5v+5w-(9x+3y+p)+z\nonumber\\
D=-9v-3w-(13x+7y-4q+p)+3z\nonumber\\
f_{1}=4x+4y-4q-z\nonumber\\
f_{2}=22x+10y-4q+2p-2z\nonumber\\
T=\frac{1}{3}(28x+13y+8q+4p)-z.
\end{eqnarray}
One can see that the algebraic structures of Eq.(\ref{alkim})
and Eq.(\ref{fdmass}) are the same. It means that magnetic moments
of the octet baryons in the models $\chi QSM$\cite{Kim} and ChPT \cite{Chang}
can be obtained from the unitary electromagnetic
current of the form (we disregard space-time indices)
\begin{eqnarray}
J^{e-m,symm2}=-F(\overline{B}^{\gamma}_{1}B_{\gamma}^{1}-
\overline{B}^{1}_{\gamma}B_{1}^{\gamma})+
D(\overline{B}^{\gamma}_{1}B_{\gamma}^{1}+
\overline{B}^{1}_{\gamma}B_{1}^{\gamma})- \nonumber\\
f_{1}\overline{B}_{3}^{\gamma}B^{3}_{\gamma}-
f_{2}\overline{B}_{\gamma}^{3}B^{\gamma}_{3}+(T-
\frac{2}{3}D)Sp(\overline{B}^{\gamma}_{\beta}B_{\gamma}^{\beta})+
3z(\overline{B}_{3}^{2}B_{2}^{3}-\overline{B}_{3}^{1}B_{1}^{3})
\label{fdkim}
\end{eqnarray}
With this current the magnetic moment of the $\Lambda$ hyperon reads:
\begin{equation}
\mu(\Lambda)^{symm2}=-\frac{1}{3}D-(8x+5y-8q-z)
\end{equation}
which differs from that given by Eq.(\ref{kim})
\begin{equation}
\mu(\Lambda)^{symm2}-\mu(\Lambda)^{\chi QSM}=\frac{1}{3}
(16x-8y-7q+p)
\end{equation}
Eqs.(\ref{kim}) and (\ref{chang}) reduce to each other through
the relations between the parameters
\begin{eqnarray}
b_{1}=-(9v+3w)-\frac{1}{2}(42x+6y-15q+3p),\nonumber\\
b_{2}=-5(v-w)-(9x+3y-z+p)\nonumber\\
\alpha_{1}=-\frac{1}{4}(94x+34y-31q+7p), \quad
\alpha_{2}=\frac{3}{2}(9x+3y-z+p),\nonumber\\
\alpha_{3}=-\frac{3}{2}(9x+3y+z+p),\quad
\alpha_{4}=\frac{9}{4}(14x+2y-5q+p),\nonumber\\
\beta_{1}=-\frac{9}{2}(8x+2y+q+p),
\label{coin}
\end{eqnarray}
where now
\begin{equation}
9 \alpha_{1}+15 \alpha_{2}-15 \alpha_{3}+3 \alpha_{4}+8 \beta_{1}=0.
\label{beta}
\end{equation}
These formulae yield the following relation between the
octet baryon magnetic moments derived in \cite{Hong} and \cite{Kim}
\begin{equation}
-12\mu(p)-7\mu(n)+7\mu(\Sigma^{-})+
22\mu(\Sigma^{+})-12\mu(\Lambda^{0})+3\mu(\Xi^{-})+
23\mu(\Xi^{0})=0 \quad .
\end{equation}
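Both the relation (\ref{beta}) and the sum rule above are linear identities and can be verified directly from (\ref{coin}) and the matrix of Eq.(\ref{kim}). The following SymPy sketch (a mechanical check, not part of the derivation) confirms them.

```python
import sympy as sp

v, w, x, y, z, p, q = sp.symbols('v w x y z p q')

# chi-QSM matrix; rows ordered (p, n, Lambda, Sig+, Sig-, Xi0, Xi-)
M = sp.Matrix([
    [-8,  4, -8, -5, -1, 0, 8],
    [ 6,  2, 14,  5,  1, 2, 4],
    [ 3,  1, -9,  0,  0, 0, 9],
    [-8,  4, -4, -1,  1, 0, 4],
    [ 2, -6, 14,  5, -1, 2, 4],
    [ 6,  2, -4, -1, -1, 0, 4],
    [ 2, -6, -8, -5,  1, 0, 8],
])
# Coefficients of the sum rule, same baryon ordering
c = sp.Matrix([[-12, -7, -12, 22, 7, 23, 3]])
sum_rule = c * M            # should be the zero row vector

# Relation (beta) between the ChPT parameters expressed via (coin)
a1 = -sp.Rational(1, 4)*(94*x + 34*y - 31*q + 7*p)
a2 =  sp.Rational(3, 2)*(9*x + 3*y - z + p)
a3 = -sp.Rational(3, 2)*(9*x + 3*y + z + p)
a4 =  sp.Rational(9, 4)*(14*x + 2*y - 5*q + p)
beta1 = -sp.Rational(9, 2)*(8*x + 2*y + q + p)
beta_rel = sp.simplify(9*a1 + 15*a2 - 15*a3 + 3*a4 + 8*beta1)  # should be 0
```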
The relations (\ref{coin}) and (\ref{beta}) close our proof of the
practical coincidence of the magnetic
moment description in the framework of the ChPT \cite{Chang} model
and the $\chi QSM $ \cite{Kim} one.
\section{Summary and conclusion}
It has been shown that the algebraic schemes of the models
\cite{Chang} and \cite{Kim} for the predictions of the octet
baryon magnetic moments have proved to be practically identical. Moreover,
the expressions for the magnetic moments B(qq,q') in these
models are those given by the unitary model with the
phenomenological electromagnetic current given by Eq.(\ref{fdch}) or
Eq.(\ref{fdkim}). The main difference of the models \cite{Chang}
and \cite{Kim} from a direct sum of the traditional unitary
electromagnetic and middle-strong baryon currents lies in the
terms due to the pion current contribution, which are written
explicitly in Eqs.(\ref{fdch}) and (\ref{fdkim}). The only real
difference between our phenomenological current predictions and those of
\cite{Chang} and \cite{Kim} is in the formula for the
magnetic moment of the hyperon $\Lambda$.
This difference may prove to have a
deeper meaning, as the $\Lambda$ hyperon, being composed of all
different quarks, is characterized by zero values of isotopic
spin and hypercharge. Quantitatively it turns out not to be very
important, since due to the approximate equality $\alpha_{1}=-\alpha_{4}$
the $\Lambda$ magnetic moment proves to be practically the same
in the phenomenological model given by Eq.(\ref{fdch}) and in ChPT
\cite{Chang}.
In general, the analysis of the baryon magnetic moments in the
framework of these models has shown once more that unitary
symmetry is the basis which may be hidden in any dynamical
model aiming at an adequate description of the electromagnetic
properties of baryons.
\section{Introduction}
Faddeev equations in differential form were introduced by H.P.~Noyes and
H.~Fiedeldey in 1968 \cite{NF}
\begin{equation}
(H_0-E)\varphi_{\alpha}+V_{\alpha}\sum_{\beta=1}^{3}\varphi_{\beta}=0,
\label{fadeq}
\end{equation}
and since that time they have been used extensively, both for investigating
theoretical aspects of the three-body problem and for the numerical
solution of three-body bound-state and scattering problems.
The simple formula
$$
\sum_{\beta=1}^{3}\varphi_{\beta}=\Psi
$$
allows one to obtain the solution to the three-body Schr\"odinger equation
$$
(H_0+\sum_{\beta=1}^{3}V_{\beta}-E)\Psi=0
$$
in the case when
\begin{equation}
\sum_{\beta=1}^{3}\varphi_{\beta} \ne 0.
\label{nonzero}
\end{equation}
Such solutions of (\ref{fadeq}) can be called {\bf physical}. Proper
asymptotic boundary conditions should be added to Eqs. (\ref{fadeq}) in
order to guarantee (\ref{nonzero}). These conditions were studied by many
authors and are well known \cite{FM}, so I will not discuss them
here.
On the other hand, Eqs. (\ref{fadeq}) themselves allow solutions of a
type different from the physical ones, with the property
$$
\sum_{\beta=1}^{3}\varphi_{\beta}=0.
$$
These solutions can be constructed explicitly and have the form
$$
\varphi_{\alpha}=\sigma_{\alpha}\phi^{0},
$$
where $\phi^{0}$ is an eigenfunction of operator $H_0$:
$$
H_{0}\phi^{0}=E^{0}\phi^{0}
$$
and $\sigma_{\alpha}$, $\alpha=1,2,3$ are numbers such that
$\sum_{\alpha=1}^{3}\sigma_{\alpha}=0$.
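The spurious solutions are easy to exhibit in a finite-dimensional toy model, in which $H_0$ and the $V_{\alpha}$ are replaced by symmetric matrices. The following sketch (an illustration with randomly generated matrices, not a statement about any physical system) checks that $\varphi_{\alpha}=\sigma_{\alpha}\phi^{0}$ with $\sum_{\alpha}\sigma_{\alpha}=0$ solves the matrix analogue of Eqs. (\ref{fadeq}) with $E=E^{0}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
def rand_sym():
    a = rng.standard_normal((n, n)); return (a + a.T) / 2

H0 = rand_sym()
V = [rand_sym() for _ in range(3)]

# Matrix Faddeev operator: (H F)_a = H0 f_a + V_a * sum_b f_b
def bold_H(F):
    s = sum(F)
    return [H0 @ f + V[a] @ s for a, f in enumerate(F)]

# phi0: eigenvector of H0; sigma with components summing to zero
evals, vecs = np.linalg.eigh(H0)
phi0, e0 = vecs[:, 0], evals[0]
sigma = (1.0, -1.0, 0.0)            # sum_a sigma_a = 0
F = [s * phi0 for s in sigma]

out = bold_H(F)                     # the V-term vanishes since sum(F) = 0
err = max(np.linalg.norm(out[a] - e0 * F[a]) for a in range(3))
```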
Solutions of this type can be called {\bf spurious} or {\bf ghost},
because they do not correspond to any three-body system and do not contain
any information about the interactions between the particles. The first
observation of the existence of spurious solutions was made in ref. \cite{Friar}. Some
spurious solutions corresponding to particular values of the total angular
momentum were found in refs. \cite{Pup1}, \cite{Pup2}. All the spurious
solutions on
subspaces with fixed total angular momentum were constructed in ref.
\cite{RYa}.
Thus, there exist at least two types of solutions to Eqs.
(\ref{fadeq}) corresponding to real energy:\\
\hspace*{1cm} {\bf physical} ones with the property
$\sum\limits_{\beta=1}^{3}\varphi_{\beta}\ne 0$, \\
\hspace*{1cm} {\bf spurious} ones with the property
$\sum\limits_{\beta=1}^{3}\varphi_{\beta}= 0$. \\
The QUESTION is whether these solutions form a complete set, or whether
there could exist solutions of a different type, belonging to neither the
physical nor the spurious class.
The ANSWER is not so evident, because the operator corresponding to Eqs.
(\ref{fadeq}) is not selfadjoint, and moreover not even symmetric:
\begin{equation}
{\bf H}=
\left(
\begin{array}{ccc}
H_0 & 0 & 0 \\
0 &H_{0} & 0 \\
0 &0 &H_0 \\
\end{array}
\right) +
\left(
\begin{array}{ccc}
V_1 & 0 & 0 \\
0 &V_2 & 0 \\
0 &0 &V_3 \\
\end{array}
\right)
\left(
\begin{array}{ccc}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1 \\
\end{array}
\right)= {\bf H}_{0}+{\bf V}{\bf X},
\label{boldH}
\end{equation}
and, in principle, this operator could have non-real eigenvalues even
though the ingredients $H_0$, $V_{\alpha}$ and the three-body Hamiltonian
$H=H_{0}+\sum\limits_{\beta=1}^{3}V_{\beta}$ are selfadjoint operators.
In this report I will answer the QUESTION and give a
classification of the eigenfunctions of the operator ${\bf H}$ and its adjoint.
This report is based on refs. \cite{Ya1}, \cite{Ya2}.
\section{Faddeev operator and its adjoint}
Let us consider the Hilbert space ${\cal H}$ of three component vectors
$F =\{ f_1, f_2, f_3\} $. The operator ${\bf H}$ acts in ${\cal H}$ according to
the formula
\begin{equation}
({\bf H} F)_{\alpha}= H_0 f_{\alpha}+V_{\alpha}\sum_{\beta}f_{\beta}.
\label{fadoper}
\end{equation}
The adjoint ${\bf H}^{*}$ is defined as
$$
{\bf H}^{*}={\bf H}_{0}+{\bf X}{\bf V}=
\left(
\begin{array}{ccc}
H_0 & 0 & 0 \\
0 &H_{0} & 0 \\
0 &0 &H_0 \\
\end{array}
\right) +
\left(
\begin{array}{ccc}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1 \\
\end{array}
\right)
\left(
\begin{array}{ccc}
V_1 & 0 & 0 \\
0 &V_2 & 0 \\
0 &0 &V_3 \\
\end{array}
\right)
$$
and acts as follows
\begin{equation}
({\bf H}^{*} G)_{\alpha}= H_0 g_{\alpha}+\sum_{\beta}V_{\beta}g_{\beta}.
\label{adjoint}
\end{equation}
The equations for eigenvectors of operators ${\bf H}$ and ${\bf H}^{*}$
$$
{\bf H}\Phi = E\Phi, \ \ \ \ \ {\bf H}^{*}\Psi = E\Psi
$$
in components have the form
$$
H_0\varphi_{\alpha}+V_{\alpha}\sum_{\beta=1}^{3}\varphi_{\beta}=E\varphi_{\alpha},
$$
$$
H_0\psi_{\alpha}+\sum_{\beta=1}^{3}V_{\beta}\psi_{\beta}=E\psi_{\alpha}.
$$
The first one coincides with the Faddeev equations (\ref{fadeq}), and the
second one is directly connected to the so-called triad of
Lippmann-Schwinger equations \cite{Gloeckle}.
It follows directly from the definitions (\ref{fadoper}) and (\ref{adjoint})
that operators ${\bf H}$ and ${\bf H}^{*}$ have the following invariant
subspaces:\\
for ${\bf H}$
$$
{\cal H}_{s} =\{ F\in {\cal H}_{s}:\ \sum_{\alpha}f_{\alpha}=0\} ,
$$
for ${\bf H}^{*}$
$$
{\cal H}_{p}^{*}=\{ G\in {\cal H}_{p}^{*}:\
g_{1}=g_{2}=g_{3}=g\} .
$$
It is worth noticing that on the subspaces
${\cal H}_{s}$ and ${\cal H}_{p}^{*}$ the operators ${\bf H}$ and ${\bf H}^{*}$ act as the
free Hamiltonian $H_0$ and the three-body Hamiltonian $H$, respectively:
$$
({\bf H} F)_{\alpha} = H_{0}f_{\alpha} \ \ , \mbox{if}\ \ F\in {\cal H}_{s},
$$
$$
({\bf H}^{*}G)_{\alpha}= Hg = H_{0}g+\sum_{\beta}V_{\beta}g \ \ , \mbox{if}\ \ G\in
{\cal H}_{p}^{*} .
$$
As a consequence, the spectrum of ${\bf H}$ on ${\cal H}_{s}$ coincides with the
spectrum of $H_{0}$, and the spectrum of ${\bf H}^{*}$ on ${\cal H}^{*}_{p}$
coincides with the spectrum of the three-body Hamiltonian $H$.
In order to describe the eigenfunctions of the operators ${\bf H}$ and ${\bf H}^{*}$,
let us introduce the resolvents
$$
{\bf R}(z)=({\bf H}-z)^{-1},
$$
$$
{\bf R}^{*}(z)=({\bf H}^{*}-z)^{-1}.
$$
The components of these resolvents can be expressed through the resolvent
of three-body Hamiltonian and free Hamiltonian as follows
\begin{equation}
R_{\alpha \beta}(z)=R_{0}(z)\delta_{\alpha \beta} - R_{0}(z)V_{\alpha}R(z),
\label{R}
\end{equation}
\begin{equation}
R^{*}_{\alpha \beta}(z)=R_{0}(z)\delta_{\alpha \beta} - R(z)V_{\beta}R_{0}(z).
\label{R*}
\end{equation}
Here
$$
R(z)=(H-z)^{-1}=(H_{0}+\sum_{\beta}V_{\beta}-z)^{-1},\ \ \
R_{0}(z)=(H_{0}-z)^{-1} .
$$
It is worth noting that the components of these resolvents obey the following
Faddeev equations
\begin{equation}
R_{\alpha \beta}(z) = R_{\alpha}(z)\delta_{\alpha
\beta}-R_{\alpha}(z)V_{\alpha}\sum_{\gamma\ne\alpha}R_{\gamma \beta}(z),
\label{Rfad}
\end{equation}
\begin{equation}
R^{*}_{\alpha \beta}(z) = R_{\alpha}(z)\delta_{\alpha \beta}-R_{\alpha}(z)\sum_{\gamma\ne\alpha}
V_{\gamma} R^{*}_{\gamma \beta}(z).
\label{R*fad}
\end{equation}
Here $R_{\alpha}(z)=(H_0+V_{\alpha}-z)^{-1}$ is the two-body resolvent for the
pair $\alpha$ in the three-body space.
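Formulas (\ref{R}) and (\ref{Rfad}) can also be tested in a finite-dimensional model, where all resolvents are ordinary matrix inverses. The sketch below (random symmetric matrices and a non-real spectral parameter $z$, an illustration only) builds the components $R_{\alpha\beta}(z)$ from (\ref{R}) and checks that they satisfy the Faddeev equations (\ref{Rfad}).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
def rand_sym():
    a = rng.standard_normal((n, n)); return (a + a.T) / 2

H0 = rand_sym()
V = [rand_sym() for _ in range(3)]
H = H0 + sum(V)
z = 0.3 + 1.0j                      # non-real, so all resolvents exist
I = np.eye(n)

R0 = np.linalg.inv(H0 - z*I)
R  = np.linalg.inv(H - z*I)
Ra = [np.linalg.inv(H0 + V[a] - z*I) for a in range(3)]

# Components R_{ab}(z) = R0 delta_{ab} - R0 V_a R, as in (R)
Rc = [[R0*(a == b) - R0 @ V[a] @ R for b in range(3)] for a in range(3)]

# Check the Faddeev equations (Rfad) for the components
err = 0.0
for a in range(3):
    for b in range(3):
        rhs = (Ra[a]*(a == b)
               - Ra[a] @ V[a] @ sum(Rc[g][b] for g in range(3) if g != a))
        err = max(err, np.linalg.norm(Rc[a][b] - rhs))
```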
In order to proceed it is convenient to introduce the spectral
representation for the resolvent of three-body Hamiltonian
$$
R(z)= \sum_{E_{i}}\frac{|\psi^{i}\rangle \langle \psi^{i}|}
{E_{i}-z} +
\sum_{\gamma}\int dp_{\gamma}\frac{|\psi^{\gamma}(p_{\gamma})\rangle \langle
\psi^{\gamma}(p_{\gamma})|}{p^{2}_{\gamma}-z} +
\int dP \frac{|\psi^{0}(P)\rangle \langle \psi^{0}(P)|}
{P^{2}-z}.
$$
It is implied here that the system of eigenfunctions of the operator $H$
is complete {\it i.e.,}
$$
I= \sum_{i}|\psi^{i}\rangle \langle \psi^{i}|
+
\sum_{\gamma}\int dp_{\gamma}|\psi^{\gamma}(p_{\gamma})\rangle \langle
\psi^{\gamma}(p_{\gamma})|
+
\int dP |\psi^{0}(P)\rangle \langle \psi^{0}(P)|.
$$
Introducing this representation into (\ref{R}) and (\ref{R*}),
one arrives at the spectral representations for the components $R_{\alpha
\beta}(z)$:
\begin{eqnarray}
R_{\alpha \beta}(z)= \sum_{E_{i}}\frac{|\psi^{i}_{\alpha}\rangle \langle \psi^{i}|}
{E_{i}-z} +
\sum_{\gamma}\int dp_{\gamma}\frac{|\psi^{\gamma}_{\alpha}(p_{\gamma})\rangle \langle
\psi^{\gamma}(p_{\gamma})|}{p^{2}_{\gamma}-z} +
\nonumber \\
\int dP \frac{|\psi^{10}_{\alpha}(P)\rangle \langle \psi^{0}(P)|}
{P^{2}-z}
+\sum_{k=1}^{2}\int dP \frac{|u_{\alpha}^{k}(P)\rangle \langle w^{k}_{\beta}(P)|}
{P^{2}-z}.
\label{Rsr}
\end{eqnarray}
Here $\psi^{i}_{\alpha}$, $\psi^{\gamma}_{\alpha}(p_{\gamma})$ and
$\psi^{10}_{\alpha}(P)$ are the Faddeev components of eigenfunctions of
three-body Hamiltonian:
$$
\psi^{i}_{\alpha}=-R_{0}(E_i)V_{\alpha}\psi^{i},
$$
$$
\psi^{\gamma}_{\alpha}(p_{\gamma})=
-R_{0}(\varepsilon_{\gamma}+p_{\gamma}^{2}+i0)V_{\alpha}\psi^{\gamma}(p_{\gamma}),
$$
$$
\psi^{10}_{\alpha}(P)=\delta_{\alpha 1} \phi^{0}(P)
-R_{0}(P^{2}+i0)V_{\alpha}\psi^{0}(P),
$$
where $\phi^{0}(P)$ is an eigenfunction of the free Hamiltonian:
$$
H_{0}\phi^{0}(P)=P^{2}\phi^{0}(P).
$$
A new feature in (\ref{Rsr}) is the appearance of the last term, related to
the spurious solutions of the Faddeev equations and their adjoints. The explicit
formulas for the spurious eigenfunctions $u^{k}_{\alpha}(P)$ of ${\bf H}$ are
of the form
$$
u^{k}_{\alpha}(P)=\sigma^{k}_{\alpha}\phi^{0}(P),
$$
where $\sigma_{\alpha}^{k}$, $k=1,2$, are the components of two noncollinear
vectors from ${\bf R}^{3}$ lying in the plane $\sum_{\alpha} \sigma_{\alpha} =0$.
The spurious eigenfunctions $w^{k}_{\beta}(P)$ of ${\bf H}^{*}$ can be
expressed by the formula
$$
w^{k}_{\beta}(P)= \theta^{k}_{\beta}\phi^{0}(P)-
\sum_{\alpha} [{\cal P}^{*}_{p}]_{\beta \alpha}
\theta_{\alpha}^{k}\phi^{0}(P),
$$
where
$$
[{\cal P}^{*}_{p}]_{\beta \alpha}=
\sum_{i}|\psi^{i}\rangle \langle \psi^{i}_{\alpha}|
+
\sum_{\gamma}\int dp^{'}_{\gamma}|\psi^{\gamma}(p^{'}_{\gamma})\rangle \langle
\psi^{\gamma}_{\alpha}(p^{'}_{\gamma})|
+
\int dP^{'} |\psi^{0}(P^{'})\rangle \langle \psi^{01}_{\alpha}(P^{'})|.
$$
Here the vectors $\theta^{k}\in {\bf R}^{3}$ are defined by the following
biorthogonality conditions
$$
\sum_{\alpha}\theta_{\alpha}^{i}\sigma_{\alpha}^{j}=\delta_{ij},\ \ \ i,j=0,1,2,
$$
with $\sigma^{0}_{\alpha}=\delta_{\alpha 1}$ and $\theta_{\alpha}^{0}=1$.
For the components of the resolvent $R^{*}_{\alpha \beta}(z)$ one can obtain a
formula similar to (\ref{Rsr}):
$$
R^{*}_{\alpha \beta}(z)= \sum_{E_{i}}\frac{|\psi^{i}\rangle \langle
\psi^{i}_{\beta}|}
{E_{i}-z} +
\sum_{\gamma}\int dp_{\gamma}\frac{|\psi^{\gamma}(p_{\gamma})\rangle \langle
\psi^{\gamma}_{\beta}(p_{\gamma})|}{p^{2}_{\gamma}-z} +
\int dP \frac{|\psi^{0}(P)\rangle \langle \psi^{10}_{\beta}(P)|}
{P^{2}-z} +
$$
\begin{equation}
+\sum_{k=1}^{2}\int dP \frac{|w_{\alpha}^{k}(P)\rangle \langle u^{k}_{\beta}(P)|}
{P^{2}-z}.
\label{R*sr}
\end{equation}
It follows from (\ref{Rsr}) and (\ref{R*sr}) that the operators ${\bf H}$ and
${\bf H}^{*}$ have the following systems of eigenfunctions:\\
$\{ $ $\Phi^{i}$, $\Phi^{\gamma}(p_{\gamma})$, $\Phi^{10}(P)$ and $U^{k}(P)$ $\} $
$$
{\bf H}\Phi^{i}=E_{i}\Phi^{i},
$$
$$
{\bf H}\Phi^{\gamma}(p_{\gamma})=(\varepsilon_{\gamma}+p_{\gamma}^{2})\Phi^{\gamma}(p_{\gamma}),
$$
$$
{\bf H}\Phi^{10}(P)=P^{2}\Phi^{10}(P),
$$
$$
{\bf H} U^{k}(P)=P^{2}U^{k}(P) , \ \ k=1,2;
$$
$\{$ $\Psi^{i}$, $\Psi^{\gamma}(p_{\gamma})$, $\Psi^{10}(P)$ and $W^{k}(P)$ $\} $
$$
{\bf H}^{*}\Psi^{i}=E_{i}\Psi^{i},
$$
$$
{\bf H}^{*}\Psi^{\gamma}(p_{\gamma})=(\varepsilon_{\gamma}+p_{\gamma}^{2})\Psi^{\gamma}(p_{\gamma}),
$$
$$
{\bf H}^{*}\Psi^{10}(P)=P^{2}\Psi^{10}(P),
$$
$$
{\bf H}^{*} W^{k}(P)=P^{2}W^{k}(P) , \ \ k=1,2,
$$
with components of physical eigenfunctions:
$$
\phi^{i}_{\alpha}=-R_{0}(E_i)V_{\alpha}\psi^{i},
$$
$$
\phi^{\gamma}_{\alpha}(p_{\gamma})=
-R_{0}(\varepsilon_{\gamma}+p_{\gamma}^{2}+i0)V_{\alpha}\psi^{\gamma}(p_{\gamma}),
$$
$$
\phi^{10}_{\alpha}(P)=\delta_{\alpha 1} \phi^{0}(P)
-R_{0}(P^{2}+i0)V_{\alpha}\psi^{0}(P),
$$
for ${\bf H}$, and with the components of the physical eigenfunctions:
$$
\psi^{i}_{\alpha}=\psi^{i},
$$
$$
\psi^{\gamma}_{\alpha}(p_{\gamma})=
\psi^{\gamma}(p_{\gamma}),
$$
$$
\psi^{10}_{\alpha}(P)=\psi^{0}(P)
$$
for ${\bf H}^{*}$.
Physical eigenfunctions span the physical subspace of ${\cal H}$. This
subspace can be defined as
$$
{\cal H}_{p} = {\cal P}_{p}{\cal H},
$$
where the projection ${\cal P}_{p}$ is defined by formula
$$
{\cal P}_{p}=
\sum_{i}|\Phi^{i}\rangle \langle \Psi^{i}|
+
\sum_{\gamma}\int dp_{\gamma}|\Phi^{\gamma}(p_{\gamma})\rangle \langle
\Psi^{\gamma}(p_{\gamma})|
+
\int dP |\Phi^{10}(P)\rangle \langle \Psi^{10}(P)|.
$$
Spurious solutions span the spurious subspace of ${\cal H}$:
$$
{\cal H}_{s}= {\cal P}_{s}{\cal H},
$$
where
$$
{\cal P}_{s} = \sum_{k=1}^{2} \int dP |U^{k}(P)\rangle \langle W^{k}(P)|.
$$
It follows from the construction and the completeness of the eigenfunctions of
the three-body Hamiltonian that the physical and spurious subspaces together
span ${\cal H}$:
$$
{\cal H}= {\cal H}_{p}+{\cal H}_{s}.
$$
The same is valid for physical and spurious subspaces of operator
${\bf H}^{*}$:
$$
{\cal H}= {\cal H}_{p}^{*}+{\cal H}_{s}^{*},
$$
where the subspaces ${\cal H}_{p}^{*}$ and ${\cal H}_{s}^{*}$ are defined
as
$$
{\cal H}_{p}^{*}= {\cal P}_{p}^{*}{\cal H}, \ \ \
{\cal H}_{s}^{*}= {\cal P}_{s}^{*}{\cal H}.
$$
Here the operators ${\cal P}_{p}^{*}$ and ${\cal P}_{s}^{*}$ are Hilbert
space adjoints for ${\cal P}_{p}$ and ${\cal P}_{s}$.
\noindent
The results described above can be summarized as the following \\
{\bf Theorem}: {\it Faddeev operator} {\bf H}
$$
{\bf H}=
\left(
\begin{array}{ccc}
H_0 & 0 & 0 \\
0 &H_{0} & 0 \\
0 &0 &H_0 \\
\end{array}
\right) +
\left(
\begin{array}{ccc}
V_1 & 0 & 0 \\
0 &V_2 & 0 \\
0 &0 &V_3 \\
\end{array}
\right)
\left(
\begin{array}{ccc}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1 \\
\end{array}
\right)
$$
{\it and its adjoint} ${\bf H}^{*}$
$$
{\bf H}^{*}=
\left(
\begin{array}{ccc}
H_0 & 0 & 0 \\
0 &H_{0} & 0 \\
0 &0 &H_0 \\
\end{array}
\right) +
\left(
\begin{array}{ccc}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1 \\
\end{array}
\right)
\left(
\begin{array}{ccc}
V_1 & 0 & 0 \\
0 &V_2 & 0 \\
0 &0 &V_3 \\
\end{array}
\right)
$$
{\it have coinciding spectra of real eigenvalues}
$$
\sigma({\bf H})=\sigma({\bf H}^{*})=\sigma(H)\cup \sigma(H_{0}),
$$
{\it where the physical part of the spectrum} $\sigma(H)$ {\it is the
spectrum of the three-body
Hamiltonian} $H=H_{0}+\sum_{\alpha}V_{\alpha}$ {\it and the spurious part}
$\sigma(H_{0})$
{\it is the spectrum of the free Hamiltonian} $H_{0}$.
{\it The sets of physical and spurious eigenfunctions are complete and
biorthogonal in the sense:}
$$
{\cal P}_{p}+{\cal P}_{s}={\cal P}^{*}_{p}+{\cal P}^{*}_{s}=I,
$$
$$
{\cal P}^{2}_{p(s)}={\cal P}_{p(s)}, \ \ {{\cal P}^{*}}^{2}_{p(s)}= {{\cal
P}^{*}}_{p(s)}, \ \
{\cal P}_{p}{\cal P}^{*}_{s}=0, \ \ {\cal P}_{s}{\cal P}^{*}_{p}=0.
$$
\section{Extension on CCA equations}
It has been shown that the matrix operator generated by the Faddeev equations in
differential form has a spurious spectrum in addition to the physical one. The
existence of this spectrum is closely related to the invariant spurious
subspace formed by components whose sum is equal to zero. The theorem
formulated in the preceding section can be extended to any matrix operator
corresponding to few-body equations for the components of the wave function
obtained in the framework of the so-called coupled channel array (CCA) method
\cite{Levin}, as follows. The CCA equations can be written in matrix form
as
\begin{equation}
{\bf H}\Phi = E \Phi,
\label{CCA}
\end{equation}
where ${\bf H}$ is an $n\times n$ matrix operator acting in the Hilbert
space ${\cal H}$ of vector-functions $\Phi$ with components $\phi_{1},
\phi_{2},\ldots, \phi_{n}$, each belonging to the few-body Hilbert space $h$.
The equivalence of Eq. (\ref{CCA}) to the Schr\"{o}dinger equation
$H\psi=(H_{0}+\sum_{\beta}V_{\beta})\psi=E\psi$, obtained by requiring
$\sum_{\alpha}\phi_{\alpha}=\psi$, can be reformulated as the following
intertwining property for the operators ${\bf H}$ and $H$
\begin{equation}
{\cal S}{\bf H} = H {\cal S}.
\label{SHHS}
\end{equation}
Here ${\cal S}$ is the summation operator
$$
{\cal S}\Phi = \sum_{\alpha}\phi_{\alpha}
$$
acting from ${\cal H}$ to $h$. Due to (\ref{SHHS}) the subspace ${\cal
H}_{s}$ formed by spurious vectors such that ${\cal S}\Phi =0$ is
invariant with respect to ${\bf H}$, and as a consequence the operator
${\bf H}$ has a spurious spectrum $\sigma_{s}$. Clearly, the
concrete form of $\sigma_{s}$ and of the corresponding eigenfunctions depends
on the particular form of the matrix operator ${\bf H}$ and is the
subject of a separate investigation.
The physical part $\sigma_{p}$ of the spectrum of ${\bf H}$ can be found
from the adjoint variant of (\ref{SHHS})
\begin{equation}
{\bf H}^{*}{\cal S}^{*}= {\cal S}^{*}H,
\label{SHHS*}
\end{equation}
where adjoint ${\cal S}^{*}$ acts from $h$ to ${\cal H}$ according to the
formula
$$
[{\cal S}^{*}\phi]_{\alpha}= \phi.
$$
It follows from Eq. (\ref{SHHS*}) that the range ${\cal H}^{*}_{p}$ of
the operator ${\cal S}^{*}$, consisting of vector-functions with identical
components,
is invariant with respect to ${\bf H}^{*}$, and the restriction of ${\bf
H}^{*}$ to ${\cal H}^{*}_{p}$ reduces to the few-body Hamiltonian $H$.
Therefore $\sigma_{p}=\sigma(H)$ and, similarly to the case of the Faddeev
operator, the same formula
for the spectra of the operators ${\bf H}$ and ${\bf H}^{*}$ is valid
$$
\sigma({\bf H})=\sigma({\bf H}^{*})= \sigma(H)\cup \sigma_{s},
$$
where $\sigma(H)$ is the spectrum of few-body Hamiltonian
$H=H_{0}+\sum_{\alpha}V_{\alpha}$.
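A minimal numerical sketch of the two intertwining relations (our illustration, with scalars in place of operators, so that ${\cal S}$ becomes a row of ones and ${\cal S}^{*}$ a column of ones):

```python
import numpy as np

# Our scalar sketch of the intertwining relations: S is the summation
# operator (a row of ones), S* its adjoint (a column of ones), and the
# few-body Hamiltonian collapses to the number H = H0 + sum(V).
n = 3
H0, V = 1.0, np.array([0.5, 0.3, 0.2])
bold_H = H0 * np.eye(n) + np.diag(V) @ np.ones((n, n))
H = H0 + V.sum()

S = np.ones((1, n))      # S Phi = sum_alpha phi_alpha
S_adj = S.T              # [S* phi]_alpha = phi

assert np.allclose(S @ bold_H, H * S)            # S bold-H = H S
assert np.allclose(bold_H.T @ S_adj, H * S_adj)  # bold-H* S* = S* H

# Any Phi with S Phi = 0 stays in the spurious subspace under bold-H:
Phi = np.array([1.0, -2.0, 1.0])
assert np.isclose((bold_H @ Phi).sum(), 0.0)
```

The last assertion makes explicit that the kernel of ${\cal S}$ is invariant under ${\bf H}$, which is the origin of the spurious spectrum.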
\section*{Acknowledgement}
This work was partially supported by Russian Foundation for Basic Research
grant No. 98-02-18190. The author is grateful to the Organizing Committee of
the 16th European Conference on Few-Body Problems in Physics for financial
support of his participation in the Conference.
\section{Introduction}
The correlation of the mean multiplicity with some trigger is an
important problem of high energy heavy ion physics. For example,
the $J/\psi$ suppression can be explained, at least partially
\cite{CKKG,ACF}, by $J/\psi$
interactions with co-moving hadrons, and for numerical
calculations we should know the multiplicity of comovers specifically
in events with $J/\psi$ production.
In the present paper we give some results for the dependences of
the number of interacting nucleons and the multiplicity of produced
secondaries on the impact parameter. These results are based practically
only on geometry and do not depend on the model of interaction. In the
case of minimum bias interactions the dispersion of the distribution over
the number of interacting nucleons (which is similar to the
distributions over the transverse energy, multiplicity of secondaries,
etc.) is very large. This allows, in principle, a significant
dependence of some characteristic of the interaction, say, the mean
multiplicity of the secondaries, on the trigger used. On the other hand,
in the case of central collisions the discussed dispersion is small, which
should result in a weak dependence on any trigger.
We consider the high energy nucleus-nucleus collision as a superposition
of independent nucleon-nucleon interactions. Thus our results can also be
considered as a test in the search for quark-gluon plasma formation.
In the case of any collective interactions, including the case of
quark-gluon plasma formation, we see no reason for the discussed
ratios to hold. We present an estimate of the possible violation, which
is based on quark-gluon string fusion calculations.
\section{Distributions on the number of interacting nucleons
for different impact parameters}
Let us consider the events with secondary hadron production in
minimum bias collisions of nuclei A and B. In this case the average number
of inelastically interacting nucleons of nucleus A is equal \cite{BBC}
to
\begin{equation}
<N_A>_{m.b.} = \frac{A \sigma^{prod}_{NB}}{\sigma^{prod}_{AB}} \;.
\end{equation}
If both nuclei A and B are heavy enough, the production cross sections
of nucleon-nucleus and nucleus-nucleus collisions can be written as
\begin{equation}
\sigma^{prod}_{NB} = \pi R_B^2 \;,
\end{equation}
and
\begin{equation}
\sigma^{prod}_{AB} = \pi (R_A + R_B)^2 \;.
\end{equation}
It is evident that in the case of equal nuclei, A = B,
\begin{equation}
<N_A>_{m.b.} = A/4 \;.
\end{equation}
So in the case of minimum bias events the average number of
interacting nucleons should be four times smaller than in the
case of central collisions, where $<N_A>_c \approx A$.
For the calculation of the distribution over the number of
inelastically interacting nucleons of nucleus A we will use the
rigid target approximation \cite{Alk,VT,PR}, which gives the
probability of $N_A$ nucleons interacting as \cite{GCSh,BSh}
\begin{equation}
V(N_A) = \frac{1}{\sigma^{prod}_{AB}} \frac{A!}{(A-N_A)! N_A!}
\int d^2 b [I(b)]^{A-N_A} [1-I(b)]^{N_A} \;,
\end{equation}
where
\begin{equation}
I(b) = \frac{1}{A} \int d^2 b_1 T_A(b_1-b)
\exp \left[ - \sigma^{inel}_{NN}T_B(b_1)\right] \;,
\end{equation}
\begin{equation}
T_A(b) = A \int dz \rho (b,z) \;.
\end{equation}
Eq. (5) is written for minimum bias events. In the case of events
in some interval of impact parameter $b$ values, the integration
in Eq. (5) should be performed over this interval,
$b_{min} < b < b_{max}$. In particular, in the case of central
collisions the integration should be performed with the condition
$b \leq b_0$, and $b_0 \ll R_A$.
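To make the rigid target formulas concrete, the following sketch (our illustration; the Woods-Saxon parameters and $\sigma_{NN}^{inel}$ are assumed values, not the paper's fit) evaluates the survival probability $I(b)$ of Eq. (6) at $b=0$ for $Pb-Pb$, where the two-dimensional integral becomes radially symmetric; `scipy` is assumed available:

```python
import numpy as np
from scipy.integrate import quad

# Sketch of Eqs. (5)-(7) at zero impact parameter for Pb-Pb.
# All parameters below are illustrative assumptions, not fitted values.
A = 208
R = 1.12 * A ** (1 / 3)   # fm, half-density radius (assumed)
d = 0.54                  # fm, surface diffuseness (assumed)
sigma_nn = 3.2            # fm^2 (~32 mb), inelastic NN cross section
zmax = R + 10 * d         # effective cutoff for radial integrals

def rho(r):               # Woods-Saxon profile, normalized numerically
    return 1.0 / (1.0 + np.exp((r - R) / d))

norm = quad(lambda r: 4 * np.pi * r**2 * rho(r), 0, zmax)[0]

def T(b):                 # thickness function, Eq. (7): T_A(b) = A * int dz rho
    return 2 * A / norm * quad(lambda z: rho(np.hypot(b, z)), 0, zmax)[0]

def integrand(b1):        # Eq. (6) at b = 0, radially symmetric
    t = T(b1)
    return b1 * t * np.exp(-sigma_nn * t)

I0 = (2 * np.pi / A) * quad(integrand, 0, zmax)[0]
N_in = A * (1 - I0)       # mean number of interacting nucleons at b = 0
print(f"I(0) = {I0:.3f},  <N_in>(b=0) = {N_in:.1f} of {A}")
# A few percent of nucleons survive, the same ballpark as the ~6%
# discussed in the text for sqrt(s_NN) = 18 GeV.
```

At $\sigma_{NN}^{inel}=0$ the normalization gives $I(0)=1$, which is a convenient consistency check of the implementation.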
The calculated average values of the number of inelastically
interacting nucleons of the projectile nucleus, $<N_{in}>$, are presented
in Fig. 1 as functions of the impact parameter $b$ for the cases of
$Pb-Pb$ collisions at three different energies (we define $\sqrt{s}$ =
$\sqrt{s_{NN}}$ as the c.m. energy for one nucleon-nucleon pair), and for
$S-U$ collisions at $\sqrt{s_{NN}}$ = 20 GeV. One can see a very
weak dependence of these distributions on the initial energy,
which appears in our approach only through the energy dependence of
$\sigma_{NN}^{inel}$.
In the case of collisions of equal heavy ions ($Pb-Pb$ in our case)
at zero impact parameter, about 6\% of the nucleons of each nucleus do
not interact inelastically at energy $\sqrt{s_{NN}}$ = 18 GeV. More
precisely, we obtain on average 11.8 non-interacting nucleons at this
energy, which is in agreement with the value of $13 \pm 2$ nucleons
\cite{Alb} based on the VENUS 4.12 \cite{Kla} model prediction. The
number of non-interacting nucleons decreases to about 3\% at
$\sqrt{s_{NN}}$ = 5.5 TeV. This is connected with the fact that the
nucleons at the periphery of one nucleus, which overlap with the
region of small nuclear matter density at the periphery of the other
nucleus, have a large probability of penetrating without inelastic
interaction. It is clear that this probability decreases with increasing
$\sigma_{NN}^{inel}$, which results in the presented energy dependence.
The value of $<\!N_{in}\!>$ decreases with increasing impact
parameter because even at small $b \neq 0$ some regions of the colliding
ions do not overlap.
In the case of collisions of different ions, say $S-U$, at small impact
parameters all nucleons of the light nucleus pass through regions of
relatively high nuclear matter density of the heavy nucleus, so
practically all these nucleons interact inelastically. For the case of
$S-U$ interactions at $\sqrt{s_{NN}}$ = 20 GeV this is valid for
$b < 2\div 3$ fm.
It is interesting to consider the distributions over the number of
inelastically interacting nucleons at different impact parameters.
The calculated probabilities to find the given numbers of
inelastically interacting nucleons for the case of minimum bias $Pb-Pb$
events are presented in Fig. 2a. The average value, $<N_{in}>$ = 50.4,
is in reasonable agreement with Eq. (4). The disagreement, of the order
of 3\%, can be connected with the different values of the effective
nuclear radii in Eqs. (2) and (3). The dispersion of the distribution
over $N_{in}$ is very large.
The results of the same calculations for different regions of impact
parameters are presented in Fig. 2b, where we compare the cases of
central ($b < 1$ fm), peripheral (12 fm $< b <$ 13 fm) and intermediate
(6 fm $< b <$ 7 fm) collisions. One can see that the dispersions of all
these distributions are many times smaller in comparison with the
minimum bias case, Fig. 2a.
In the cases of central and peripheral interactions, the
distributions over $N_{in}$ are significantly narrower than in the
intermediate case. The reason is that in the case of a central collision
the number of nucleons at the periphery of one nucleus, whose
probabilities to interact or not are of the same order, is small enough.
In the case of a very peripheral collision the total number of nucleons
which can interact is small. However, in the intermediate case a
comparatively large number of nucleons of one nucleus pass through the
peripheral region of the other nucleus with small nuclear matter density,
and each of these nucleons may or may not interact.
\section{Ratio of secondary hadron multiplicities in the central
and minimum bias heavy ion collisions}
Let us consider now the multiplicity of the produced secondaries in the
central region. First of all, it should be proportional to the number of
interacting nucleons of the projectile nucleus. It should also depend on
the average number, $<\! \nu_{NB} \!>$, of inelastic interactions of
every projectile nucleon with the target nucleus. At asymptotically
high energies the mean multiplicity of secondaries produced in
nucleon-nucleus collision should be proportional to $<\! \nu\! >$
\cite{Sh1,CSTT}. As was shown in \cite{Sh}, the average number of
interactions in the case of central nucleon-nucleus collisions,
$<\! \nu \!>_c$, is approximately 1.5 times larger than in the case of
minimum bias nucleon-nucleus collisions, $<\! \nu \!>_{m.b.}$. It means
that the mean multiplicity of any secondaries in the central heavy ion
collisions (with A = B), $<\! n\!>_c$ should be approximately 6 times
larger than in the case of minimum bias collisions of the same nuclei,
$<\! n \!>_{m.b.}$, i.e. $<\! n \!>_c \approx 6 <\! n \!>_{m.b.}$. Of course,
this estimate is valid only for secondaries in the central region of the
inclusive spectra.
There exist several corrections to this result. At existing
fixed target energies the multiplicity of secondaries is proportional
not to $<\! \nu \!>$ but to $\frac{1 + <\nu>}{2}$ \cite{CSTT,CCHT}.
For heavy nuclei the values of $<\! \nu \!>_{m.b.}$ are about
$3 \div 4$. It means that the ratio of $<\nu_{NB}>_c$ to $<\nu_{NB}>_{m.b.}$
equal to 1.5 results in an enhancement factor of about 1.4 for the
multiplicity of secondaries. A more important correction comes from the
fact that in the case of a central collision of two nuclei with the same
atomic weights, only a part of the projectile nucleons can interact with the
central region of the target nucleus. This decreases the discussed
enhancement factor to, say, 1.2. As presented in the previous
section, even in central collisions (with zero impact parameter) of equal
heavy nuclei, several percent of the projectile nucleons do not interact
with the target because they move through the diffusive region of
the target nucleus with very small nuclear matter density.
As a result we arrive at the estimate
\begin{equation}
<\! n \!>_c \sim 4.5 <\! n \!>_{m.b.} \;.
\end {equation}
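The arithmetic behind this estimate can be traced step by step; the value of $<\!\nu\!>_{m.b.}$ below is an illustrative number from the quoted range $3\div4$:

```python
# Step-by-step arithmetic behind Eq. (8); nu_mb is an illustrative value
# taken from the quoted range <nu>_mb ~ 3-4 for heavy nuclei.
geometry = 4.0             # <N_A>_c / <N_A>_mb = A / (A/4), from Eq. (4)
nu_mb = 3.5                # assumed mean number of NN interactions, m.b.
nu_c = 1.5 * nu_mb         # central-to-minimum-bias ratio from [Sh]

# At fixed-target energies multiplicity scales as (1 + nu)/2, not nu:
mult_ratio = (1 + nu_c) / (1 + nu_mb)   # ~1.4 instead of 1.5
naive = geometry * mult_ratio           # ~5.6 before geometric correction
# Only part of the projectile nucleons hits the central region of the
# target, reducing ~1.4 to ~1.2:
estimate = geometry * 1.2               # ~4.8, i.e. <n>_c ~ 4.5 <n>_mb
print(round(mult_ratio, 2), round(naive, 2), round(estimate, 1))
```

The final factor lands near the quoted $4.5$, with the residual spread reflecting the roughness of the intermediate corrections.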
In the case of quark-gluon plasma formation or some other collective
effects we see no reason for such predictions to hold. For example,
the calculation of $<n>_c$ and $<n>_{m.b.}$ accounting for the string
fusion effect \cite{USC} violates Eq. (8) at the level of 40\% for the
case of $Au-Au$ collisions at RHIC energies.
Moreover, in the conventional approach considered here, we obtain the
prediction of Eq. (8) for any sort of secondaries including pions,
kaons, $J/\psi$, Drell-Yan pairs, direct photons, etc. Let us imagine
that the quark-gluon plasma formation is possible only at comparatively
small impact parameters (i.e. in the central interactions). In this case
Eq. (8) can be strongly violated, say, for direct photons and, possibly,
for light mass Drell-Yan pairs, due to the additional contribution to
their multiplicity in central events via the thermal mechanism. At the
same time, Eq. (8) can remain valid, say, for pions, if most of
them are produced at the late stage of the process, after decay of the
plasma state. So a violation of Eq. (8) for the particles which can
be emitted from the plasma state should be considered as a signal of
quark-gluon plasma formation. Of course, the effects of final state
interactions, etc. should be accounted for in such a test.
It was shown in Ref. \cite{DPS} that the main contribution to the
dispersion of the multiplicity distribution in the case of heavy ion
collisions comes from the dispersion in the number of nucleon-nucleon
interactions. The latter number is strongly correlated
with the value of the impact parameter.
For the normalized dispersion $D/<\! n \!>$, where
$D^2 = <\! n^2 \!> - <\! n \!>^2$
we have \cite{DPS}
\begin{equation}
\frac{D^2}{<\! n \!>^2} =
\frac{<\nu_{AB}^2> - <\nu_{AB}>^2}{<\nu_{AB}>^2}
+ \frac{1}{<\nu_{AB}>} \frac{d^2}{\overline{n}^2} \;,
\end {equation}
where $<\nu_{AB}> = <\! N_A \!> \cdot <\nu_{NB}>$ is the average number
of nucleon-nucleon interactions in nucleus-nucleus collision,
$\overline{n}$ and $d$ are the average multiplicity and the dispersion
in one nucleon-nucleon collision.
In the case of heavy ion collisions $<\nu_{AB}> \sim 10^2 - 10^3$,
so the second term in the right hand side of Eq. (9) becomes negligible
\cite{DPS}, and the first term, which is the relative dispersion in the
number of nucleon-nucleon interactions, dominates. In the case of a
minimum bias A-B interaction the latter dispersion is comparatively large
due to the large dispersion of the distribution over $N_A$, see Fig. 2a.
So in the case of some trigger (say, $J/\psi$ production) without fixing
the impact parameter, the multiplicity of secondaries can change
significantly in comparison with its average value. In the case of some
narrow region of impact parameters the dispersion of the distribution over
$N_A$ is many times smaller, as one can see in Fig. 2b, especially in
the case of central collisions. The dispersion in the number of
inelastic interactions of one projectile nucleon with the target
nucleus, $\nu_{NB}$, should be the same or slightly smaller in
comparison with the minimum bias case. So the dispersion in the
multiplicity of secondaries cannot be large. It means that no trigger
can change significantly the average multiplicity of secondaries in
central heavy ion collisions, even if this trigger strongly
influences the multiplicity in the nucleon-nucleon interaction.
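A quick illustration (with placeholder numbers of the quoted orders of magnitude) of why the single-collision term in Eq. (9) is negligible:

```python
# Illustrative evaluation of Eq. (9): for heavy ions <nu_AB> ~ 10^2-10^3,
# so the single-collision term is tiny next to the term coming from the
# dispersion in the number of NN interactions. All numbers are assumed.
nu_AB_mean = 500.0     # assumed <nu_AB> = <N_A> <nu_NB>
rel_disp_nu = 0.5      # assumed relative dispersion of nu_AB, minimum bias
d_over_n = 1.0         # d / nbar ~ 1 for a single NN collision

term_collisions = rel_disp_nu ** 2           # first term of Eq. (9)
term_single = d_over_n ** 2 / nu_AB_mean     # second term of Eq. (9)
D_over_n = (term_collisions + term_single) ** 0.5
print(f"D/<n> = {D_over_n:.3f}; single-collision term adds "
      f"{term_single / term_collisions:.1%}")
```

With these inputs the second term contributes at the sub-percent level, which is the statement made in the text.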
\section{Conclusions}
We have calculated the distributions over the number of interacting nucleons
in heavy ion collisions as functions of the impact parameter. The
dispersions of these distributions are very small for central and
very peripheral interactions and significantly larger for
intermediate values of the impact parameter.
We also estimated the ratio of the mean multiplicities of secondaries in
minimum bias and central collisions, which can be used in the search for
quark-gluon plasma formation. We have shown that in the case of
central collisions no trigger can change the average multiplicity
significantly (say, by more than 10-15\%). This fact can be used
experimentally to distinguish collective effects on $J/\psi$ production,
like quark-gluon plasma, from more conventional mechanisms.
In conclusion we express our gratitude to A.Capella and A.Kaidalov
for useful discussions. We thank the Direcci\'on General de
Pol\'{\i}tica Cient\'{\i}fica and the CICYT of Spain for financial
support. The paper was also supported in part by grant NATO OUTR.LG
971390.
\newpage
\begin{center}
{\bf Figure captions}\\
\end{center}
Fig. 1. Average numbers of inelastically interacting nucleons in
$Pb-Pb$ and $S-U$ collisions at different energies as functions of the
impact parameter.
Fig. 2. Distributions over the numbers of inelastically interacting
nucleons in $Pb-Pb$ collisions at $\sqrt{s_{NN}} = 18$ GeV for minimum
bias interactions (a) and for different regions of impact parameter (b).
\newpage
\section{Introduction}
Optical pumping is an established
technique in atomic and molecular
physics to selectively populate or depopulate specific states or
superpositions \cite{Kastler50,Cohen-Tannoudji75}.
It is based on the absorption of photons of a
specific mode and subsequent spontaneous emission into many modes.
The dissipative nature of the latter part
makes it possible to transform mixed into pure atomic states.
From this results the importance of optical pumping
for state preparation in
systems with a thermal distribution of population and for
laser cooling \cite{cooling}.
The maximum achievable rate of pumping is
determined by the escape time of the emitted photons, which in
optically thin media is given by the free-space radiative
lifetime. When the medium becomes optically thick, however, i.e. when
the absorption length becomes smaller than the smallest sample dimension,
the escape time of photons can be substantially reduced.
This phenomenon, known as radiation trapping \cite{Holstein}, is due to
reabsorption and multiple scattering of spontaneously emitted photons
and can drastically reduce the rate of optical pumping in
dense media. These limitations could be of major importance in
many different fields, for instance near-resonance
linear and nonlinear optics in dense media \cite{HI,NLO} or
the realisation of Bose condensation by velocity selective
coherent population trapping (VSCPT) \cite{VSCPT-problem}.
To describe the reabsorption and multiple scattering of
photons we here utilize a recently developed approach to
radiative interactions in dense atomic media \cite{Rad}.
In this approach a nonlinear and nonlocal single-atom density
matrix equation is derived which generalizes
the linear theory of radiation trapping
\cite{Holstein} to the nonperturbative regime.
As a model system a 3-level $\Lambda$ configuration
driven by a strong broad-band field is considered and the limits of
(i) large inhomogeneous and (ii) purely radiative broadening are studied.
Let us consider the $\Lambda$-type
system shown in Fig.~1. A strong driving field with (complex) Rabi-frequency
$\Omega(t)$ couples the lower state $|c\rangle$ to the excited state
$|a\rangle$, which
spontaneously decays into $|c\rangle$ and
$|b\rangle$. Since $|b\rangle$ is not coupled by the driving field,
this results
in optical pumping from $|c\rangle$ to $|b\rangle$.
We also take into account a possible finite lifetime of the
target state described
by a population exchange between the lower states at rate
$\gamma_0$.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=6 true cm
\epsffile{pump_system.eps}
\caption{Optical pumping in a $\Lambda$ system.}
\end{center}
\end{figure}
It was shown in \cite{Rad} that the effect of the incoherent background
radiation can be described by additional (nonlinear and nonlocal)
pump and relaxation rates and level shifts in the
single-atom density matrix equation.
If we assume orthogonal dipole moments or sufficiently different
frequencies of the two optical transitions, the level shifts
are negligible. Also if the driving field is strong, the
incoherent photons do not affect the pump transition
$a \leftrightarrow c$.
Thus we are left with a pump and decay rate $\Gamma(t)$ on the
$a \leftrightarrow b$ transition and the effective
single-atom equations of motion
read in a rotating frame:
\begin{eqnarray}
{\dot\rho}_{aa} &=& -(\gamma+\gamma^\prime+\Gamma)\rho_{aa} +
\Gamma\rho_{bb} +i(\Omega^*\rho_{ac}-c.c),\label{rho_aa}\\
{\dot\rho}_{cc} &=& \gamma^\prime\rho_{aa} +
\gamma_0\rho_{bb}-\gamma_0\rho_{cc}
-i(\Omega^* \rho_{ac}-c.c),\label{rho_cc}\\
{\dot\rho}_{ac} &=& -(i\Delta_{ac} +\Gamma_{ac})\rho_{ac} +i\Omega(\rho_{aa}
-\rho_{cc}).\label{rho_ac}
\end{eqnarray}
$\Delta_{ac}$ is the detuning of the drive field from resonance
and $\Gamma_{ac}$ is the respective coherence
decay rate. It should be noted, that $\Gamma$ is a function of
the density matrix elements of all other atoms, and hence the
Eqs. (\ref{rho_aa}-\ref{rho_ac}) are
nonlinear and nonlocal.
We are here interested in genuine optical pumping and therefore
consider a broad-band pump \cite{remark1}, i.e. $\Omega(t)$ is
assumed to have a vanishing mean value and Gaussian $\delta$-like
correlations
$\bigl\langle \Omega^*(t)\Omega(t^\prime)\bigr\rangle =
R\, \delta(t-t^\prime).$
Formally integrating Eq.(\ref{rho_ac}), substituting the result back into
Eqs.(\ref{rho_aa}) and (\ref{rho_cc}), and averaging over the Gaussian
distribution of the pump field leads to the rate equations
\begin{eqnarray}
{\dot\rho}_{aa} &=& -(\gamma+\gamma^\prime+\Gamma)\rho_{aa} +
\Gamma\rho_{bb} -R (\rho_{aa}-\rho_{cc}),\label{rate_aa}\\
{\dot\rho}_{cc} &=& \gamma^\prime\rho_{aa} +
\gamma_0\rho_{bb}-\gamma_0\rho_{cc} +R (\rho_{aa}-\rho_{cc}).
\label{rate_cc}
\end{eqnarray}
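As an illustration only, the rate equations above can be integrated with the collective rate $\Gamma$ frozen to a constant (in the full theory $\Gamma$ depends self-consistently on the populations of all atoms; treating it as a parameter is our simplification). `scipy` is assumed available:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the rate equations with Gamma frozen to a constant; in the
# paper Gamma is nonlinear and nonlocal, so this is only illustrative.
# Units of gamma; parameter values chosen to mimic the optically thin case.
gamma, gamma_p, gamma_0, R, Gamma = 1.0, 1.0, 0.0, 10.0, 0.0

def rhs(t, y):
    raa, rcc = y
    rbb = 1.0 - raa - rcc  # trace conservation fixes rho_bb
    draa = -(gamma + gamma_p + Gamma) * raa + Gamma * rbb - R * (raa - rcc)
    drcc = gamma_p * raa + gamma_0 * rbb - gamma_0 * rcc + R * (raa - rcc)
    return [draa, drcc]

# Start with the lower states equally populated, as in Fig. 2a
sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.5], rtol=1e-8)
raa, rcc = sol.y[:, -1]
print(f"rho_bb(t = 20/gamma) = {1 - raa - rcc:.3f}")  # pumped toward 1
```

With a stable target state ($\gamma_0=0$) and no radiation trapping ($\Gamma=0$), essentially all population ends up in $|b\rangle$ at the optically thin pump rate of order $\gamma/2$.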
\section{Collective decay rate}
We now have to determine the collective rate
$\Gamma$. $\Gamma$ is
proportional to the spectrum of the incoherent field at the position
$\vec r_0$ and the resonance frequency $\omega$ of
the atom under consideration \cite{Rad}
\begin{eqnarray}
\Gamma(\omega,t) =\frac{\wp^2}{\hbar^2}
{\widetilde D}(\vec r_0,\omega;t)
= \frac{\wp^2}{\hbar^2}\,
\int_{-\infty}^\infty\!\!\!d\tau\, \langle\langle {\hat E}^-(\vec r_0,t)
{\hat E}^+(\vec r_0,t+\tau)\rangle\rangle\, e^{i\omega\tau}.
\end{eqnarray}
Here ${\hat E}^\pm$ are the positive and negative frequency parts of the
field operators, $\wp$ is the dipole matrix element of the
atomic transition, and $\langle\langle AB\rangle\rangle \equiv
\langle AB\rangle-\langle A\rangle\langle B\rangle$.
${\widetilde D}(\omega)$ can be obtained by summing
the spontaneous emission contributions of all atoms propagated through the
medium \cite{Rad}
\begin{equation}
D(1,1) = \int\!\!\!\int d3\, d4\,
D^{\rm ret}(1,3)\,\Bigl(D^{\rm ret\,}(1,4)\Bigr)^*\,
\Pi^{\, \rm s}(3,4).\label{GF_eq1}
\end{equation}
Here $D^{\rm ret}(1,2)$
is the retarded propagator of the electric field inside the medium, which
obeys a Dyson-equation in self-consistent Hartree approximation:
\begin{equation}
D^{\rm ret}(1,2) = D_{0}^{\rm ret}(1,2) -
\int\!\!\!\int d3\,d4\,
D_{0}^{\rm ret}(1,3)\, \Pi^{\rm ret}(3,4)\,
D^{\rm ret}(4,2).
\label{GF_eq2}
\end{equation}
In Eqs.(\ref{GF_eq1}) and (\ref{GF_eq2}) the numbers $1,2\dots$ stand for
$\{\vec r_1,t_1\},\{\vec r_2,t_2\}\dots$, and the integrations
extend over time from $-\infty$
to $+\infty$ and over the whole sample volume.
$D_0^{\rm ret}$ is the free-space retarded propagator of the
electric field.
For simplicity we here have disregarded polarisation.
We also have introduced the atomic source correlation
\begin{equation}
\Pi^{\, \rm s}(\vec r_1,t_1;\vec r_2,t_2) =
\frac{\wp^2}{\hbar^2}\sum_j
\bigl\langle\bigl\langle \sigma_j^\dagger(t_1) \sigma_j
(t_2)\bigr\rangle\bigr\rangle\, \delta(\vec r_1-\vec r_j)\,
\delta(\vec r_2-\vec r_j)\label{source}
\end{equation}
and the atomic response function
\begin{equation}
\Pi^{\rm ret}(\vec r_1,t_1;\vec r_2,t_2) =
\frac{\wp^2}{\hbar^2}\,
\Theta(t_1-t_2)\sum_j
\bigl\langle \bigl[\sigma_{j}^\dagger(t_1), \sigma_{j}
(t_2)\bigr]\bigr\rangle\, \delta(\vec r_1-\vec r_j)\,
\delta(\vec r_2-\vec r_j),\label{response}
\end{equation}
where $\sigma_j=|b\rangle_{jj}\langle a|$ is the spin-flip operator
of the $j$th atom and $\Theta$ is the Heaviside step function.
In terms of the $\sigma$'s the dipole operator of the $j$th atom
reads $d_j=\wp(\sigma_j + \sigma_j^\dagger)$.
The names reflect the physical meaning of the quantities (\ref{source},
\ref{response}).
The Fourier-transform of $\Pi^{\, \rm s}$ is proportional to
the spontaneous emission spectrum of the atoms and that of $\Pi^{\rm ret}$
gives the susceptibility of the medium.
Eqs.(\ref{GF_eq1}) and (\ref{GF_eq2}) represent a nonperturbative summation
of the spontaneous radiation contributions of all atoms propagated through the
medium. This assumes Gaussian statistics, which is, however, a good
approximation for the background radiation.
The Dyson-equation (\ref{GF_eq2}) was solved in \cite{Rad}
with some approximations in a macroscopic (continuum) limit
where $\Pi(\vec r_1,t_1;\vec r_2,t_2) =
\int d^3\vec r\,
P(\vec r,t_1,t_2)\, \delta(\vec r_1-\vec r)\, \delta(\vec r_2 -\vec r)$.
This yielded for the collective decay rate
\begin{equation}
\Gamma(\omega;t) =
\frac{\wp^2 \omega^4 }{(6 \pi)^2 \epsilon_0^2 c^4}
\int_V\! d^3\vec r \, \frac{e^{2 q_0^{\prime\prime}(\vec r,\omega;t) r}}{r^2}
\, {\widetilde P}^{\rm \, s} (\vec r,\omega;t),
\label{G_sol}
\end{equation}
where $r=|\vec r_0-\vec r|$ is the distance between the source and the
probe atom. The probability that a photon reaches the
probe atom is determined by the absorption coefficient
\begin{equation}
q_0^{\prime\prime}(\vec r,\omega,t) =
\frac{\hbar \omega}{3\epsilon_0 c} \, {\rm Re}\, \left[{\widetilde P}^{\rm ret}
(\vec r,\omega;t)\right].
\end{equation}
One can easily calculate the atomic source and response functions
for the $\Lambda$-system of Fig.~1.
\begin{eqnarray}
{\widetilde P}^{\rm ret}(\vec r_j,\omega,t)&=& \frac{\wp^2}{\hbar^2}
N\, \overline{\, \frac{\rho_{aa}^j(t) -\rho_{bb}^j(t)}{\Gamma_{ab}
+i(\omega-\omega_{ab}^j)}\, },\label{Pi_ret_TLA}\\
&&\nonumber\\
{\widetilde P}^{\rm\, s}(\vec r_j,\omega,t)&=& \frac{2\wp^2}{\hbar^2}
N\overline{\, \frac{\rho_{aa}^j(t)\Gamma_{ab}}{(\Gamma_{ab})^2
+(\omega-\omega_{ab}^j)^2}\, },\label{Pi_s_TLA}
\end{eqnarray}
where $N$ is the density of atoms, $\omega_{ab}^j$ is the resonance frequency
of the $j$th atom, $\Gamma_{ab}$ the coherence decay rate of the
corresponding transition
and the overbar denotes averaging over a
possible inhomogeneous distribution of frequencies.
At this point we shall distinguish two limiting cases. We first consider
the limit of large Doppler-broadening
and secondly the case of purely radiative broadening.
\section{Inhomogeneously broadened system}
The approach of \cite{Rad} is
based on the Markov approximation of a spectrally broad
incoherent radiation.
This approximation is justified for example in an inhomogeneously
broadened system. We therefore discuss first the case of large
Doppler-broadening. If we are interested
in the population
dynamics on a time scale slow compared to velocity changing
collisions, we may set $\rho_{\mu\mu}^j(t)=\overline{\, \rho_{\mu\mu}^j(t)\, }$
\equiv \rho_{\mu\mu}(\vec r_j,t)$ and thus have the same population dynamics
in all velocity classes. Since $\Gamma$ depends on the
populations of all atoms, Eqs.(\ref{rate_aa}) and (\ref{rate_cc})
are nonlocal.
In the case of a constant density of atoms and
a homogeneous pump field, $\Gamma$ and hence all density matrix elements
will be approximately homogeneous.
We therefore make a simplifying approximation and
disregard the space dependence. The volume integral
is then carried out by placing the probe atom in the center
of the sample.
This yields for a Gaussian Doppler-distribution of
width $\Delta_D\gg \gamma$
\begin{equation}
\frac{\Gamma(\omega,t)}{\gamma}=
\frac{\rho_{aa}(t)}{\rho_{bb}(t)-\rho_{aa}(t)}
\left[ 1-\exp\left(-H(t) e^{-\Delta^2/2\Delta_D^2}\right)\right],
\label{Gamma_omega}
\end{equation}
where $\Delta=\omega-\omega_{ab}^0$ is the detuning from
the atomic resonance at rest, and
$ H(t)=K \, [\rho_{bb}(t)-\rho_{aa}(t)].$
$K=g\, N \lambda^2 d_{\rm eff}$ with $g=\gamma/\sqrt{2 \pi}\, \Delta_D$
characterizes the number of atoms within one relevant velocity class
in a volume given by the wavelength squared
and the effective escape distance $d_{\rm eff}$. In deriving
(\ref{Gamma_omega}) we
have used the relation between the free-space radiative
decay rate $\gamma$ and the dipole moment $\wp$:
$\wp^2 = 3\pi\hbar\epsilon_0 c^3 \gamma/\omega^3$
\cite{Louisell}.
$d_{\rm eff}$ corresponds for a long cylindrical slab to the
cylinder radius; for a thin disk to its thickness
and for a sphere to its radius.
Averaging over the inhomogeneous velocity distribution of the atoms
eventually yields
\begin{eqnarray}
\Gamma(t)=\overline{\, \Gamma(\omega,t)\, }
&=&
\int_{-\infty}^\infty \!\!\!d\omega\, \frac{1}{\sqrt{2\pi}\Delta_D}
e^{-\Delta^2/2\Delta_D^2} \, \Gamma(\omega,t)\nonumber\\
&=& \gamma\, \frac{\rho_{aa}(t)}{\rho_{bb}(t)-\rho_{aa}(t)}
\frac{1}{\sqrt{\pi}}\, \int_{-\infty}^\infty\!\!\! dy\, e^{-y^2}
\left[ 1-\exp\left(-H(t) e^{-y^2}\right)\right].\label{G_TLA_approx}
\end{eqnarray}
In Fig.~2a we have shown the population in the target state $|b\rangle$
as function of time starting from equal populations of
levels $|c\rangle$ and $|b\rangle$ at $t=0$.
We here have assumed that the target
state is stable, i.e. $\gamma_0=0$. One recognizes that optical
pumping is considerably slowed down already for
values of $K$ on the order of 10, which usually corresponds to much less
than one atom per $\lambda^3$. The slow-down of pumping
is further illustrated in Fig.~2b,
where the effective pump rate, defined as
\begin{equation}
\Gamma_{\rm p}\equiv
- {\frac{d}{dt}}{\rm ln}[\rho_{aa}+\rho_{cc}]
\end{equation}
is plotted normalized to the value in an optically
thin medium ($\Gamma_{\rm p}^0=\gamma/2$).
One can see that the optical pump rate approaches a constant
asymptotic value, which for $K\gg 1$ and large pump rates $R$ is given by
\begin{equation}
\Gamma_{\rm p}^{\rm as}=
\frac{\gamma}{2 K \bigl(\pi\, {\rm ln} K\bigr)^{1/2}}\ll\frac {\gamma}{2}.
\end{equation}
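The velocity-averaged rate can be checked numerically; below, $F(H)$ is our shorthand for the bracketed average $\pi^{-1/2}\int dy\, e^{-y^2}\left[1-\exp\left(-H e^{-y^2}\right)\right]$, which interpolates between the optically thin limit $F\approx H/\sqrt{2}$ and saturation $F\to 1$:

```python
import numpy as np
from scipy.integrate import quad

# Numerical sketch of the velocity-averaged bracket in the expression
# for Gamma(t); F(H) is our shorthand, not the paper's notation.
def F(H):
    f = lambda y: np.exp(-y**2) * (1.0 - np.exp(-H * np.exp(-y**2)))
    return quad(f, -np.inf, np.inf)[0] / np.sqrt(np.pi)

for H in (1e-3, 1.0, 10.0, 1e3):
    print(f"H = {H:g}: F = {F(H):.4f}")

# Optically thin limit: expanding 1 - exp(-x) ~ x gives F ~ H / sqrt(2)
assert np.isclose(F(1e-3), 1e-3 / np.sqrt(2), rtol=1e-2)
# Saturation is only logarithmically slow in H, hence the slow pumping
assert F(10.0) < F(1e3) < 1.0
```

The logarithmically slow approach of $F(H)$ to unity is what produces the $({\rm ln}\, K)^{-1/2}$ factor in the asymptotic pump rate above.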
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=14.3 true cm
\epsffile{pump1a.eps}
\caption{(a) Time evolution of
population of level $|b\rangle$ for $\gamma_0=0$,
$R/\gamma=10$ and $\gamma^\prime/\gamma=1$ for different density
parameters $K=g\, N\lambda^2 d_{\rm eff}$,
$g=\gamma/\sqrt{2\pi}\Delta_D$.
(b) Corresponding effective rate of optical pumping.}
\end{center}
\end{figure}
Since we have assumed in the plots of Fig.~2
an infinitely long-lived target state ($\gamma_0=0$),
all population eventually ends up in $|b\rangle$.
However, if $\gamma_0$ is nonzero and in particular if it becomes comparable to
the asymptotic rate $\Gamma_p^{\rm as}$, the steady-state populations of all
states equalize. In this case optical pumping becomes less and less efficient
and eventually impossible.
This is illustrated in Fig.~3, where the stationary population
in state $|b\rangle$ is shown as a function of the density parameter $K$
for different values of $\gamma_0$.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=7 true cm
\epsffile{pump3.eps}
\caption{Stationary population in level $|b\rangle$ for
$R/\gamma=10$, $\gamma^\prime/\gamma=1$ and different values of $\gamma_0$
as function of density parameter $K$.}
\end{center}
\end{figure}
\section{Radiatively broadened system}
We now discuss the case of a radiatively broadened system.
In analogy to the case of inhomogeneous
broadening, we find for the spectral distribution
\begin{equation}
\frac{\Gamma(\omega,t)}{\gamma}=
\frac{\rho_{aa}(t)}{\rho_{bb}(t)-\rho_{aa}(t)}
\left[1-\exp\left(-H(t)
\frac{\gamma_{ab}\Gamma_{ab}}
{\Gamma_{ab}^2+\Delta^2}\right)\right],\label{G_rad}
\end{equation}
where $\Delta=\omega-\omega_{ab}$, $\Gamma_{ab}=\gamma_{ab}+\Gamma$,
and $\gamma_{ab}=(\gamma+\gamma^\prime+R+\gamma_0)/2$, and
$ H(t)={\widetilde K} \, [\rho_{bb}(t)-\rho_{aa}(t)].$
Here ${\widetilde K}={\widetilde g}\, N\lambda^2 d_{\rm eff}$ with
${\widetilde g}=\gamma/(2\pi \gamma_{ab})$.
As opposed to the corresponding relation in the inhomogeneous case,
Eq.(\ref{G_rad}) determines the collective decay rate only implicitly,
and $\Gamma$ needs to be calculated self-consistently.
For small atomic densities or $\rho_{aa}\approx\rho_{bb}$ the exponential
function in Eq.(\ref{G_rad}) can be expanded into a power series.
The first nonvanishing term found from this has the same
spectral shape as the single-atom response function.
In such a case the Markov approximation used in \cite{Rad} is
no longer valid and the approach is quantitatively incorrect.
We shall nevertheless use it and discuss the
range of validity afterwards.
We find that in the case of radiative
broadening the rate of optical pumping decreases exponentially with the
density parameter as opposed to $[N\lambda^2 d_{\rm eff}]^{-1}$
in the inhomogeneous case.
For sufficiently large pump rates $R$ and stable target state ($\gamma_0=0$)
the asymptotic rate of optical
pumping is here
\begin{equation}
\Gamma_{\rm p}^{\rm as}=\frac{\gamma}{2}\exp\bigl\{-{\widetilde K}\bigr\}.
\end{equation}
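The contrast with the inhomogeneous case can be made concrete by evaluating both suppression factors at the same density parameter (an illustrative sketch; setting $K=\widetilde K$ is done purely for the comparison):

```python
import math

def suppression_inhomogeneous(K):
    """Gamma_p^as / (gamma/2) for inhomogeneous broadening, K >> 1."""
    return 1.0 / (K * math.sqrt(math.pi * math.log(K)))

def suppression_radiative(K):
    """Gamma_p^as / (gamma/2) for radiative broadening."""
    return math.exp(-K)

for K in (5, 10, 20):
    print(K, suppression_inhomogeneous(K), suppression_radiative(K))
```

The exponential suppression wins decisively: already at a density parameter of 10 the radiatively broadened pump rate is almost three orders of magnitude below the inhomogeneous one.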
Physically this is due to the fact that here the incoherent photons are
in resonance with all atoms, which drastically increases the
scattering probability. As a consequence much smaller decay rates
$\gamma_0$ out of the target state are sufficient to make optical
pumping impossible. This is illustrated in Fig.~4, where we have plotted
the stationary population in state $|b\rangle$ as function of the
density parameter $K_0=N \lambda^2 d_{\rm eff} /2\pi$
for different values of $\gamma_0$.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=7 true cm
\epsffile{pump4.eps}
\caption{Same as Fig.3 for radiatively broadened system;
$ K_0=
N \lambda^2 d_{\rm eff}/2\pi$,
$R/\gamma=10$, $\gamma^\prime/\gamma=1$}
\end{center}
\end{figure}
In order to check the validity of the Markov approximation, we have
shown in Fig.~5 the stationary normalized spectral distribution
$\Gamma(\omega)/\Gamma$ for $K_0=1$, $10$ and $100$ and $\gamma_0/
\gamma=10^{-4}$. Also plotted is the atomic absorption spectrum for $K_0=1$
(solid line).
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=6.5 true cm
\epsffile{pump_sp.eps}
\caption{Spectral distribution of incoherent background radiation
for $R/\gamma=10$, $\gamma^\prime=\gamma$, $\gamma_0/\gamma=10^{-4}$,
and $K_0=1$ (dotted), $K_0=10$ (dashed) and $K_0=100$ (dashed-dotted).
Also shown is the normalized absorption spectrum for $K_0=1$.
}
\end{center}
\end{figure}
\noindent One recognizes that the spectrum of the background radiation has only a
slightly larger width than the atomic response for $K_0=1$. In this
case the Markov approximation is not valid.
The situation however improves
when the density is increased. Thus Fig.~4 has only qualitative character
for lower densities.
\section{Summary}
We have shown that resonant optical pumping in a dense atomic medium
is substantially different from optical pumping in dilute systems.
When the absorption length of spontaneously emitted photons
becomes less than the minimum escape distance,
these photons are trapped inside the medium and cause repumping
of population.
This leads to a considerable slow-down of the transfer rate and
can make optical pumping impossible if the target state
of the pump process has a finite lifetime. The effect
is much less pronounced in inhomogeneously broadened systems due to
the reduction of the spectral density of background photons.
These results may have some important consequences.
It is practically impossible to use resonant optical pumping in media with
$N\lambda^3\sim 1$. This sets strong limits on the possibility of preparing
pure states or coherent superpositions in systems with an
initial thermal occupation of states, such as the hyperfine ground levels
of alkali atoms at room temperature. Even though the above analysis did not
take into account quantum properties of the atoms and considers
only resonant pumping, the results indicate that
it may be very difficult to achieve Bose--Einstein condensation via VSCPT in optical
lattices \cite{lattices}.
Also the present results show that electromagnetically induced
transparency (EIT) \cite{EIT} in dense media cannot be understood
as the result of optical pumping into a dark state.
Essential for EIT in dense media is an entirely coherent evolution
\cite{Lu} via stimulated adiabatic Raman passage \cite{STIRAP}.
Some of these aspects will be discussed in more detail elsewhere.
\section*{Acknowledgement}
The author would like to thank C.M. Bowden and
S.E. Harris for stimulating discussions.
\section{Introduction}
Spatial analysis of ROSAT HRI observations is often plagued by poor
aspect solutions, precluding the attainment of the potential
resolution of about 5$^{\prime\prime}$. In many cases (but not all), the major
contributions to the degradation in the effective Point Response
Function (PRF) come from aspect errors associated either with the
ROSAT wobble or with the reacquisition of the guide stars.
To avoid the possibility of blocking sources by the window support
structures (Position Sensitive Proportional Counter) or to minimize
the chance that the pores near the center of the microchannel plate
would become burned out from excessive use (High Resolution Imager),
the satellite normally operates with a constant dither for pointed
observations. The period of the dither is 402s and the phase is tied
to the spacecraft clock. Any given point on the sky will track back
and forth on the detector, tracing out a line of length $\approx$~3
arcmin with position angle of 135$^{\circ}$ in raw detector
coordinates (for the HRI). Imperfections in the star tracker (see
section~\ref{sec:MM}) can produce an erroneous image if the aspect
solution is a function of the wobble track on the CCD of the star
tracker.
This work is similar to an analysis by Morse (1994) except that we do
not rely on a direct correlation between spatial detector coordinates
and phase of the wobble. Moreover, our method addresses the
reacquisition problem which produces the so-called cases of
``displaced OBIs''. An ``OBI'' is an observation interval, normally
lasting for 1 ks to 2 ks (i.e. a portion of an orbit of the
satellite). A new acquisition of the guide stars occurs at the
beginning of each OBI and we have found that different aspect
solutions often result. Occasionally a multi-OBI observation consists
of two discrete aspect solutions. A recent example (see
section~\ref{sec:120B}) showed one OBI for which the source was
10$^{\prime\prime}$ north of its position in the other 17 OBIs. Note
that this sort of error is quite distinct from the wobble error.
Throughout this discussion, we use the term ``PRF'' in the dynamic
sense: it is the point response function realized in any given
situation: i.e. that which includes whatever aspect errors are
present. We start with an observation for which the PRF is much worse
than it should be. We seek to improve the PRF by isolating the
offending contributions and correcting them if possible or rejecting
them if necessary.
\section{Model and Method}\label{sec:MM}
The ``model'' for the wobble error assumes that the star tracker's CCD has
some pixels with different gain than others. As the wobble moves the
de-focused star image across the CCD, the centroiding of the stellar
image gets the wrong value because it is based on the relative response
from several pixels. If the roll angle is stable, it is likely that the
error is repeated during each cycle of the wobble since the star's path
is over the same pixels (to a first approximation if the aspect `jitter'
is small compared to the pixel size of $\approx$~1 arcmin). What is not
addressed is the error in roll angle induced by erroneous star
positions. If this error is significant, the centroiding technique with
one strong source will fix only that source and its immediate environs.
The correction method assigns a 'wobble phase' to each event; then
divides each OBI (or other suitably defined time interval) into a number
of wobble phase bins. The centroid of the reference source is measured
for each phase bin. The data are then recombined after applying x and y
offsets in order to ensure that the reference source is aligned for each
phase bin. What is required is that there are enough counts in the
reference source to obtain a reliable centroid. Variations of this
method for sources weaker than $\approx$~0.1 counts/s involve using all
OBIs together before dividing into phase bins. This is a valid approach
so long as the nominal roll angle is stable (i.e. within a few tenths
of a degree) for all OBIs, and so long as major shifts in the aspect
solutions of different OBIs are not present.
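The sequence described above (assign a phase, bin, centroid, shift, recombine) can be sketched as follows. This is an illustrative outline only, assuming events are given as plain arrays of arrival times and detector x/y positions already filtered to the reference source; the real scripts operate on qpoe files:

```python
import numpy as np

WOBBLE_PERIOD = 402.0  # s; the dither period, tied to the spacecraft clock

def dewobble(t, x, y, n_bins=10):
    """Assign each event a wobble phase, divide into phase bins,
    centroid the reference source per bin, and shift each bin so all
    centroids align with that of bin 0.  Assumes (t, x, y) contain
    only reference-source events; returns corrected positions."""
    phase = (t % WOBBLE_PERIOD) / WOBBLE_PERIOD
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    x_out = np.asarray(x, dtype=float).copy()
    y_out = np.asarray(y, dtype=float).copy()
    ref = bins == 0
    x0, y0 = x_out[ref].mean(), y_out[ref].mean()
    for b in range(1, n_bins):
        sel = bins == b
        if sel.any():
            x_out[sel] += x0 - x_out[sel].mean()
            y_out[sel] += y0 - y_out[sel].mean()
    return x_out, y_out
```

In practice the per-bin offsets derived from the reference source would then be applied to every event in the field, not only to the reference source itself.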
\section{Diagnostics}
Our normal procedure for evaluation is to measure the FWHM (both the
major and minor axes) of the observed response on a map smoothed with
a 3$^{\prime\prime}$ Gaussian. For the best data, we find the
resulting FWHM is close to 5.7$^{\prime\prime}$. While there are many
measures of source smearing, we prefer this approach over measuring
radial profiles because there is no uncertainty relating to the
position of the source center; we are normally dealing with elliptical
rather than circular distributions; and visual inspection of the two
dimensional image serves as a check on severe abnormalities.
It has been our experience that when we are able to reduce the
FWHM of the PRF, the wings of the PRF are also reduced.
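A minimal version of this diagnostic might look as follows (a crude sketch, assuming `scipy` is available and the 0.5$^{\prime\prime}$ pixel scale used for the maps in this paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

PIXEL = 0.5  # arcsec per pixel, matching the maps shown in this paper

def fwhm_arcsec(image, smooth_fwhm=3.0):
    """Smooth a counts image with a Gaussian of the given FWHM (arcsec)
    and measure the full width at half maximum along the row and column
    through the peak.  Sketch only: no sub-pixel interpolation, and the
    widths are taken along x/y rather than the true major/minor axes."""
    sigma_pix = smooth_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / PIXEL
    sm = gaussian_filter(np.asarray(image, dtype=float), sigma_pix)
    iy, ix = np.unravel_index(np.argmax(sm), sm.shape)
    half = sm[iy, ix] / 2.0
    w_x = PIXEL * np.count_nonzero(sm[iy, :] >= half)
    w_y = PIXEL * np.count_nonzero(sm[:, ix] >= half)
    return max(w_x, w_y), min(w_x, w_y)
```

For an ideal point source this returns roughly the smoothing FWHM itself, so values well above $\approx$~7$^{\prime\prime}$ flag aspect smearing, as discussed below.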
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{hz43.eps}}
\caption{The FWHM of an HZ43 observation (rh142545)
measured for multiple dewobble runs with increasing
numbers of phase bins.}
\label{fig:hz43}
\end{figure}
\subsection{Wobble Errors}
If the effective PRF is evaluated for each OBI separately, the wobble
problem is manifest by a degraded PRF in one or more OBIs. Most OBIs
contain only the initial acquisition of the guide stars, so when the
PRF of a particular OBI is smeared, it is likely to be caused by the
wobble error and the solution is to perform the phased `de-wobbling'.
\subsection{Misplaced OBI}
For those cases where each OBI has a relatively good PRF but the
positions of each centroid have significant dispersion, the error
cannot be attributed to the wobble. We use the term `misplaced OBI'
to describe the situation in which a different aspect solution is
found when the guide stars are reacquired. In the worst case,
multiple aspect solutions can produce an image in which every source
in the field has a companion displaced by anywhere from 10 to 30
arcsec or more. When the separation is less than 10 arcsec, the
source can appear to have a tear drop shape (see
section~\ref{sec:120A}) or an egg shape. However, depending on the
number of different aspect solutions, almost any arbitrary distortion
to the (circularly symmetric) ideal PRF is possible. The fix for
these cases is simply to find the centroid for each OBI, and shift
them before co-adding (e.g., see Morse et al. 1995).
\section{IRAF/PROS Implementation}
The ROSAT Science Data Center (RSDC) at SAO has developed scripts to
assist users in evaluating individual OBIs and performing the
operations required for de-wobbling and alignment. The scripts are
available via anonymous ftp from sao-ftp.harvard.edu, in the
directory pub/rosat/dewob.
An initial analysis needs to be performed to determine the stable roll
angle intervals, to check for any misalignment of OBIs and to examine
the guide star combinations. These factors together with the source
intensity are important in deciding what can be done and the best
method to use.
\subsection{OBI by OBI Method}\label{sec:ObyO}
If the observation contains a strong source ($\ge$~0.1 counts/s) near
the field center (i.e. close enough to the center that the mirror
blurring is not important), then the preferred method is to dewobble
each OBI. The data are thus divided into n~$\times$~p qpoe files (n =
number of OBIs; p = number of phase bins). The position of the
centroid of the reference source is determined and each file is
shifted in x and y so as to align the centroids from all OBIs and all
phase bins. The data are then co-added or stacked to realize the
final image (qpoe file).
\subsection{Stable Roll Angle Intervals}
For sources weaker than 0.1 counts/s, it is normally the case that there
are not enough counts for centroiding when 10 phase bins are used.
If it is determined that there are no noticeable shifts between OBIs,
then it is possible to use many OBIs together so long as the roll
angle does not change by a degree or more.
\subsection{Method for Visual Inspection}
On rare occasions, it may be useful to examine each phase bin visually
to evaluate the segments in order to decide if some should be deleted
before restacking for the final result. We have found it useful to do
this via contour diagrams of the source. This approach can be labor
intensive if there are a large number of OBIs and phase bins but
scripts we provide do most of the manipulations.
\section{MIDAS/EXSAS Implementation}
The X-ray group at the Astrophysical Institute Potsdam (AIP) has
developed some MIDAS/EXSAS routines to correct for the ROSAT wobble
effect. The routines can be obtained by anonymous ftp from ftp.aip.de
at directory pub/users/rra/wobble. The correction procedure works
interactively in five main steps:
\begin{itemize}
\item{Choosing of a constant roll angle interval}
\item{Folding the data over the 402 sec wobble period}
\item{Creation of images using 5 or 10 phase intervals}
\item{Determining the centroid for the phase resolved images}
\item{Shifting the photon X/Y positions in the events table}
\end{itemize}
We have tested the wobble correction procedures for 21 stars and 24
galaxies of the ROSAT Bright Survey using archival HRI data. The
procedures work successfully down to an HRI source count rate of about
0.1 counts/s. For lower count rates the determination of
the centroid position failed because too few photons are available in
the phase-binned images.
is of course dependent on the X-ray brightness of the source.
\section{Limitations}
We briefly describe the effects which limit the general use of the
method. In so doing, we also indicate the process one can use in
deciding if there is a problem, and estimating the chances of
substantial improvement.
\subsection{Presence of Aspect Smearing}
The FWHM of all sources in the field should
be~$\ge$~7$^{\prime\prime}$~(after smoothing with a 3$^{\prime\prime}$
Gaussian). If any source is smaller than this value, it is likely
that aspect problems are minimal and little is to be gained by
applying the dewobbling method.
If there is only a single source in the field, without {\it a~priori}
knowledge or further analysis it is difficult to determine whether a
distribution significantly larger than the ideal PRF is caused by
source structure or aspect smearing. The best approach in this case
is to examine the image for each OBI separately to see if some or all
are smaller than the total image (i.e. OBI aspect solutions are
different).
\subsection{Wobble Phase}
It is important that the phase of the wobble is maintained. This is
ensured if there is no `reset' of the spacecraft clock during an
observation. If an observation's start and end times bracket a
reset, it will be necessary to divide the data into two
segments with a time filter before proceeding to the main analysis.
Dates of clock resets (Table 1) are provided by MPE:
http://www.ROSAT.mpe-garching.mpg.de/$\sim$prp/timcor.html.
\begin{tabular*}{45mm}{cl}
\multicolumn{2}{c}{Table 1} \\
\multicolumn{2}{c}{ROSAT Clock Resets}\\
\hline\\
Year &Day \\
\hline
90 & 151.87975 (launch)\\
91 & 25.386331\\
92 & 42.353305\\
93 & 18.705978\\
94 & 19.631352\\
95 & 18.169322\\
96 & 28.489871\\
97 & 16.069990\\
98 & 19.445738\\
\end{tabular*}
\\
\\
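Because the wobble phase is tied to the spacecraft clock, any phase assignment must restart its origin at a clock reset. A minimal sketch (assuming the reset epochs of Table 1, given as year and day-of-year, have already been converted to the same time scale as the event times):

```python
WOBBLE_PERIOD = 402.0  # s

def wobble_phase(t, resets=()):
    """Wobble phase in [0, 1) for an event time t (seconds), restarting
    the phase origin at each spacecraft clock reset.  `resets` is a
    sorted sequence of reset times on the same time scale as t.
    Illustrative sketch only."""
    origin = 0.0
    for r in resets:
        if r <= t:
            origin = r
        else:
            break
    return ((t - origin) % WOBBLE_PERIOD) / WOBBLE_PERIOD
```

Splitting the data at each reset, as recommended above, is equivalent to letting each segment carry its own phase origin.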
\begin{figure}[h]
\begin{minipage}{8.5cm}
\rotatebox{-90}{
\resizebox{7.0cm}{!}{\includegraphics{120A_raw_3.ps}}}
\caption{The original data for 3C 120 (segment A, rh702080n00),
smoothed with a Gaussian of FWHM = 3$^{\prime\prime}$. The peak
value on the map is 70.9 counts per 0.5$^{\prime\prime}$ pixel.
Contour levels are 1, 10, 20, 30, ... 90\% of the peak value, with
the 50\% contour, doubled. The nominal roll angle is -167$^{\circ}$
and the wobble direction is at PA = 122$^{\circ}$. The FWHM of this
smoothed image is
11.6$^{\prime\prime}~\times$~7.4$^{\prime\prime}$.}
\label{fig:120A}
\rotatebox{-90}{
\resizebox{7.0cm}{!}{\includegraphics{120A_dew_3.ps}}}
\caption{The results after dewobbling 3C 120A, smoothed with a
Gaussian of FWHM = 3$^{\prime\prime}$. The peak value on the map is
now 104.8 counts per 0.5$^{\prime\prime}$ pixel. Contour levels are
1, 10, 20, 30, ... 90\% of the peak value, with the 50\% contour,
doubled. The FWHM of this smoothed image
is 8.1$^{\prime\prime}~\times$~6.7$^{\prime\prime}$.}
\label{fig:120Ade}
\end{minipage}
\end{figure}
\subsection{Characteristics of the Reference Source}
In most cases, the reference source (i.e. the source used for
centroiding) will be the same as the target source, but this is not
required. Ideally, the reference source should be unresolved in the
absence of aspect errors and it should not be embedded in high
brightness diffuse emission (e.g. the core of M87 does not work
because of the bright emission from the Virgo Cluster gas). Both of
these considerations are important for the operation of the
centroiding algorithm, but neither is an absolute imperative. For
accurate centroiding, the reference source needs to stand
well above any extended component.
Obviously the prime concern is that there be enough counts in a phase
bin to successfully measure the centroid. The last item is usually
the determining factor, and as a rule of thumb, it is possible to use
10 phase bins on a source of 0.1 counts/s. We have tested a strong
source to see the effect of increasing the number of phase bins. In
Fig.~\ref{fig:hz43}, we show the results of several runs on an
observation of HZ 43 (12 counts/s). This figure demonstrates that ten
phase bins is a reasonable choice, but that there is little to be
gained by using more than 20 phase bins.
\section{Examples}
\subsection{3C 120}
3C 120 is a nearby radio galaxy (z=0.033) with a prominent radio jet
leaving the core at PA $\approx 270^{\circ}$. The ROSAT HRI
observation was obtained in two segments, each of which had aspect
problems. Since the average source count rate is 0.8 count/s, the
X-ray emission is known to be highly variable (and therefore most of
its flux must be unresolved), and each segment consisted of many OBIs,
we used these observations for testing the dewobbling scripts.
\subsubsection{Segment A: Two aspect solutions, both found multiple
times}\label{sec:120A}
The smoothed data (Figure~\ref{fig:120A}) indicated that in addition
to the X-ray core, a second component was present, perhaps associated
with the bright radio knot 4$^{\prime\prime}$ west of the core. When
analyzing these two components for variability, it was demonstrated
that most of the emission was unresolved, but that the aspect solution
had at least two different solutions, and that the change from one to
the other usually coincided with OBI boundaries. The guide star
configuration table showed that a reacquisition coincided with the
change of solution.
The 24 OBIs comprising the 36.5 ksec exposure were obtained between
96Aug16 and 96Sep12. Because 3C 120 is close to the ecliptic, the
roll angle hardly changed, and our first attempts at dewobbling
divided the data into 2 'stable roll angle intervals'. This effort
made no noticeable improvement.
We then used the method described in section \ref{sec:ObyO}. The
results are shown in Figure~\ref{fig:120Ade}. It can be seen that a
marked improvement has occurred, but some of the E-W smearing remains.
\begin{figure}[h]
\begin{minipage}{8.5cm}
\rotatebox{-90}{
\resizebox{7.0cm}{!}{\includegraphics{120B_raw_3.ps}}}
\caption{The original data of 3C 120 (segment B, rh702080a01),
smoothed with a Gaussian of FWHM = 3$^{\prime\prime}$. The peak
value on the map is 45.8 counts per 0.5$^{\prime\prime}$ pixel. The
contour levels are the same percentage values as those of
Fig.~\ref{fig:120A}. The roll angle is 8$^{\circ}$ and the wobble
PA is 127$^{\circ}$. FWHM for this image is
8.0$^{\prime\prime}~\times$~6.7$^{\prime\prime}$.} \label{fig:120B}
\rotatebox{-90}{
\resizebox{7.0cm}{!}{\includegraphics{120B_dew_3.ps}}} \caption{The
results of 3C 120 (segment B) after dewobbling. The contour levels
are the same percentage values as those of Fig.~\ref{fig:120B}, but
the peak is now 55.4. The FWHM is
7.2$^{\prime\prime}~\times$~6.5$^{\prime\prime}$.} \label{fig:120Bde}
\end{minipage}
\end{figure}
\subsubsection{Segment B: A single displaced OBI}\label{sec:120B}
The second segment of the 3C 120 observation was obtained in 1997 March.
In this case, only one OBI out of 17 was displaced. It was positioned
10$^{\prime\prime}$ to the north of the other positions, producing a
low level extension (see Fig.~\ref{fig:120B}). After dewobbling, that
feature is gone, the half power size is reduced, and the peak value is
larger (Fig.~\ref{fig:120Bde}).
\subsection{M81}
M81 is dominated by an unresolved nuclear source. The count rate is
0.31 count/s. The observation has 14 OBIs for a total exposure of
19.9 ks. Figure~\ref{fig:m81A} shows the data from SASS processing.
After running the `OBI by OBI' method, the source is more circularly
symmetric, has a higher peak value, and a smaller FWHM (Fig.~\ref{fig:m81B}).
\begin{figure}[h]
\begin{minipage}{8.5cm}
\resizebox{\hsize}{!}{\includegraphics{m81_orig.ps}} \caption{The
original M81 data (rh600739), smoothed with a Gaussian of FWHM =
3$^{\prime\prime}$. The peak value on the map is 15.3 counts per
0.5$^{\prime\prime}$ pixel. The contour levels are 1, 10, 20, 30,
40, 50 (the 50\% contour, doubled), 60, 70, 80, and 90 percent of
the peak value. The nominal roll angle is 135$^{\circ}$ and the
wobble direction is 0$^{\circ}$. The
FWHM of this smoothed image is
10.4$^{\prime\prime}~\times$~7.5$^{\prime\prime}$.}
\label{fig:m81A}
\resizebox{\hsize}{!}{\includegraphics{m81_dewob.ps}}
\caption{The results after dewobbling of M81 smoothed with a
Gaussian of FWHM = 3$^{\prime\prime}$. The peak value on the map is
22.5 counts per 0.5$^{\prime\prime}$ pixel. The contour levels are
1, 10, 20, 30, 40, 50 (the 50\% contour, doubled),
60, 70, 80, and 90 percent of the peak value. Ten phase bins have been
used. The
FWHM of this smoothed image is
7.2$^{\prime\prime}~\times$~6.5$^{\prime\prime}$.}
\label{fig:m81B}
\end{minipage}
\end{figure}
\subsection{NGC 5548}
This source was observed from 25 June to 11 July 1995 for a livetime of
53 ks with 33 OBIs. The average count rate was 0.75 counts/s and
the original data had a FWHM =
8.2$^{\prime\prime}\times$6.8$^{\prime\prime}$. Most of the OBIs
appeared to have a normal PRF
but a few displayed high distortion. After applying
the OBI by OBI method, the resulting FWHM was 6.3$^{\prime\prime}$ in
both directions and the peak value on the smoothed map increased from
138 to 183 counts per 0.5$^{\prime\prime}$ pixel.
\subsection{RZ Eri}
The observation of this star was reduced in MIDAS/EXSAS. The source
has a count rate of 0.12 count/s. The reduction selected only a group
of the OBIs which comprised a 'stable roll angle interval'; almost
half the data were rejected. The original smoothed image had a FWHM =
8.4$^{\prime\prime}\times$6.6$^{\prime\prime}$. After dewobbling, the
resulting FWHM was 6.9$^{\prime\prime}\times$5.8$^{\prime\prime}$.
\section{Summary}
We have developed a method of improving the spatial quality of ROSAT
HRI data which suffer from two sorts of aspect problems. This
approach requires the presence of a source near the field
center which has a count rate of $\approx$ 0.1 counts/s or greater.
Although the method does not fix all bad aspect problems, it produces
marked improvements in many cases.
\section{Acknowledgments}
We thank M. Hardcastle (Bristol) for testing early versions of the
software and for suggesting useful improvements. J. Morse contributed
helpful comments on the manuscript. The 3C 120 data were kindly
provided by DEH, A. Sadun, M. Vestergaard, and J. Hjorth (a paper is
in preparation). The other data were taken from the ROSAT archives.
The work at SAO was supported by NASA contract NAS5-30934.
\section{Introduction}
An understanding of the properties and physics of the medium
where supernova remnants (SNR's) expand is essential in order to
develop consistent scenarios for their evolution and physical
structure. In turn, the sequence of events leading to a supernova
remnant and its demise define to a very large extent the structure,
physical and chemical state and evolution of the interstellar medium.
The Cygnus Loop is particularly well suited to study these
questions. Being in an advanced evolutionary stage, different
regions of the remnant are found interacting with different components
of the interstellar medium. The remnant is close enough to study
in great detail these interactions. In some circumstances the
structure of the surrounding medium is deduced from observations
of the SNR in different spectral domains (e.g. Graham {\it et al.} 1991;
Hester, Raymond \& Blair 1994; Decourchelle {\it et al.} 1997). When
dealing with atomic or molecular gas this information has been
gathered directly (e.g. DeNoyer 1975; Scoville {\it et al.} 1977).
Direct observations of ionized gas in the surrounding medium are
considerably more complicated, since the emission lines arising from
these regions are bound to be faint and prone to background confusion.
In this respect the Cygnus Loop offers a substantial advantage, since
it is placed at a large galactic latitude ({\it b} = -8.6$^\circ$).
Observations of faint emission lines in diffuse media have been
carried out with a scanning Fabry-P\'erot spectrophotometer. This
technique has been successfully applied when searching for emission from
Fe$^{+9}$ and Fe$^{+13}$ in SNR's (Ballet {\it et al.} 1989; Sauvageot
{\it et al.} 1990; Sauvageot \& Decourchelle 1995),
or exploring line emission in the warm
component of the interstellar medium (e.g. Reynolds 1983, 1985). In this
paper we searched for line emission from warm ionized gas
in the direction of the Cygnus Loop using such an instrument,
ESOP (Dubreuil {\it et al.} 1995). In comparison with
a grating instrument, ESOP can sample a large solid angle with a
substantial luminosity advantage, at the cost of a reduced spectral
range. It has no spatial resolution and is less efficient than
direct imaging observations, but the line of interest can be
isolated from other spectral features and the underlying continuum.
This is a precious advantage when the line is faint.
A description of the instrumental setup and the process followed in
data reduction is presented in $\S$2. Results are described in detail
in $\S$3, and a discussion on the viability of several possible
ionizing sources is given in $\S$4. Finally, conclusions and research
perspectives are summarized in $\S$5.
\section{Observations and data reduction}
ESOP was mounted on the 1.5 meter telescope of the Observatorio
Astron\'omico Nacional at San Pedro M\'artir, B.C., M\'exico. Data
was gathered in four runs: July 1994, October 1994, July 1995 and
August 1996.
The observations reported in this paper are for the [OII] lines
at 3726 and 3729 \AA, [OIII] at 5007 \AA~and HeI 5876 \AA.
We used narrow band ($\sim$ 15 \AA~FWHM) interference filters centered at
these lines when T = 20.5 $^\circ$C. Each observation consists of 50 to 100
scans of a hundred steps each. The integration time at each step is 170 ms.
Thus, the typical total integration time for each data point is around
20 minutes.
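The quoted total is easy to verify (a quick arithmetic check: 100 steps per scan at 170 ms each):

```python
def total_integration_minutes(n_scans, n_steps=100, dwell_s=0.170):
    """Total integration time per pointing, in minutes."""
    return n_scans * n_steps * dwell_s / 60.0

# 50 and 100 scans bracket roughly 14 to 28 minutes on source,
# consistent with "around 20 minutes" per data point.
lo, hi = total_integration_minutes(50), total_integration_minutes(100)
```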
The free spectral range (FSR) was 11.3 \AA~during the first [OII]
observations and 14.5 \AA~in the following ones, in both cases
centered at 3727 \AA~(with the [O II] filter's FWHM = 15 \AA).
The FSR was not completely scanned by the Fabry-P\'erot in the
first [OII] observations. All [OIII] observations were conducted with
a FSR of 10.8 \AA~centered at 5007 \AA~(with the [O III] filter's
FWHM = 15 \AA). Because our filters are well centered on the oxygen
lines and the FSR is greater than half the filters' bandwidth,
these are not contaminated. Sky lines
lying far from the center of the filter may have a contribution
from another order.
Since temperature variations shift the filter's transmission
curve and humidity affects the air capacitance in the FP gap, the
instrument works in a dry nitrogen atmosphere and is under temperature
control (at T $\simeq~20.5~^\circ$C, with $\delta$T $\simeq ~ 0.2~^\circ$C).
White and comparison lamps were regularly measured
in order to follow any bandpass drift in the filter or any variation in the
FP gap. All observations were carried out with a 150" circular diaphragm.
Data reduction and analysis have been amply described in
Ballet {\it et al.} (1989) and Sauvageot and Decourchelle (1995). It takes
into account that the geocoronal contribution -- lines and continuum --
is unstable. No attempt is made to subtract a blank sky directly. Instead,
the observation is modelled with a combination of sky and source lines
and continuum. $\chi^2$ tests are performed to estimate the statistical
error of measurements.
Sky continuum and lines must be monitored carefully in this type of
observations. Typical spectra at the medium contiguous to the Cygnus
Loop and randomly selected positions are presented in Figures 1
and 2 ([OII] and [OIII]).
Though absolute wavelength calibrations are wrong, relative ones (meaning
dispersion) are correct. This is not consequential since this paper is
only concerned with photometric results and there can hardly be
any confusion regarding the identification of the line given its
relative prominence (for instance, the bright feature at 5010.94 \AA~
in Figure 2b is obviously the [OIII]5007 line). For further clarity,
arrows located on the upper part of Figures 1a, 1b, 2a and 2b indicate
which is the line that is being studied.
As can be seen, the O$^+$ lines are easily recognized and discriminated.
Regarding the [OIII] filter, there can hardly be a confusion on the
identification and measurement of [OIII]5007 since sky lines were found
to be very weak in this spectral region.
Since HeI 5876 was not detected outside the bright
optical filaments of the Cygnus Loop, a discussion on the sky
lines admitted by this filter is not necessary.
Our observations were flux calibrated observing standard stars 29 Pisc and
58 Aql several times during the night. For the [OIII] calibration we had
to take into account the contamination from the other order to the
standard star continuum. At its worst, we estimate that
the absolute values for the specific intensity are accurate at the 25$\%$
level, comparable to our statistical error bars.
\section{Results}
Regions within the Cygnus Loop and the medium surrounding it were observed
in three directions (northeast, east and southwest), along lines
approximately perpendicular to the shock front. Our observational
pointings are shown in Figure 3. The shock front is defined
as the outer X-ray boundary of the supernova remnant (Ku {\it et al.} 1984;
Ballet $\&$ Rothenflug 1989). Our results for the O$^+$ and O$^{+2}$
lines are compiled in Tables 1, 2 and 3 (NE, E and SW traces).
The information contained in these
tables is as follows: code name and coordinates for each position, angular
distance ($\delta$, in arcseconds) between the observed position and the
shock front (negative values for regions inside the SNR), angular distance
from the center of the SNR in terms of its radius, 1 + $\delta / \theta$,
where $\theta$ = 5850" is the angular radius of the Cygnus Loop
(mean value of the semi-axes, Green 1988), the specific intensity of
[OII]3729 \AA~and [OIII]5007 \AA~(henceforth [OII]3729 and
[OIII]5007) in 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$, and the ratio of these lines (henceforth
3729/5007). Except for pointings in the optical filaments of the Cygnus
Loop (such as NE0, NE1, E0, E1 and SW0), the ratio [OII]3726/3729 is
always at the low density limit ($\simeq$ 0.67, N$_e \leq$ 50 cm$^{-3}$),
so there is no need to report the flux of the other [OII] line.
Upper and lower bounds for the specific intensity are for a 90$\%$ confidence
level in the fitting procedure. Uncertainties on the 3729/5007 line ratio
were determined from these bounds.
Both lines were detected at every position, some of them
very far from the shock front: for instance, region E26 is 3894" distant
from it, $\simeq$ 15 pc at the distance of the Cygnus Loop (770 pc).
Except for the remnant's optical filaments, the HeI line was not detected.
The NE and E traces of [OII]3729 and [OIII]5007 as a function
of 1 + $\delta / \theta$ are plotted in Figure 4. As can be seen, both
traces are nearly identical, and the specific intensities well away
from the shock front are roughly constant (particularly [OII]3729).
Notice that the adiabatic shock transition is revealed as
[OII]3729 and [OIII]5007 brighten up smoothly as the X-ray perimeter
of the SNR is crossed (at 1 + $\delta / \theta$ = 1).
This brightening is caused by adiabatic compression of
the plasma and enhanced line emissivities, which overcompensate for
the decreasing concentration of both ions as these become immersed
in a higher temperature medium. The SW trace is not
plotted, but our measurements (see Table 3) also reveal the shock transition.
The adiabatic transition is apparently complete at 1 + $\delta / \theta
\simeq$ 0.95 or $\simeq$ 1 pc inside the shock front. This is the
most compelling evidence that this medium is not a projected HII region or
parcel from the general background, but is in the immediate vicinity of
the Cygnus Loop. Our most distant pointing (region E26) is $\simeq$
36 pc away from the remnant's center, which implies that the
surrounding medium is ionized up to a distance of at least $\simeq$ 50 pc.
An additional and less conspicuous feature is that the
specific intensity of both lines seems to rise slightly even before
the shock front is crossed (from 1 + $\delta / \theta~\simeq$ 1.1, or
$\simeq$ 2 pc away from the shock). This seems to indicate that the
plasma immediately beyond the edge of the Cygnus Loop is being affected
by the SNR before it encounters the blast wave, though this turn-up
may also be due to irregularities in the boundary of the remnant.
As mentioned above, [OII]3729 maintains an approximately constant level
beyond $\simeq$ 0.1 - 0.2 times the radius of the SNR. This level is
nearly identical in the NE and E directions (for which we have sufficient
data points). It is worth noticing that [OII]3729 is practically the
same in regions separated by as much as $\simeq$ 5800" (NE22 and E26),
some 22 pc at the distance to the Cygnus Loop. Thus, the [OII]3729 data
indicate that the medium beyond the shock front of the Cygnus Loop
is very extended, and quite possibly surrounds the eastern face of the
remnant. Furthermore, the data for the SW trace, albeit limited,
suggests that this medium exists all around the Cygnus Loop.
[OIII]5007 does not display such a regular behaviour. Along the NE trace
it dims slightly at positions NE8, NE9 and NE15, but it brightens up
again further away (NE18 and NE22). Along the eastern trace [OIII]5007
behaves more regularly, weakening very noticeably in the two most distant
pointings (E18 and E26). Thus, the [OIII]5007 observations imply that
physical conditions in the surrounding medium are not homogeneous. If
the temperature is uniform, there are variations in the oxygen degree of
ionization: in the low density limit, N(O$^+$)/N(O$^{+2}$) $\simeq$
2.7, 1.8 or 1.5 in most positions, and up to 4.8, 3.3 or 2.9 at E18
and E26 (for T$_e$ = 6000, 8000 and 10000 K). Alternatively,
if the degree of ionization is constant, the temperature would have to
be twice as large where [OIII]5007 is faintest. It seems difficult to
maintain such high temperatures.
In order to assess whether there is a difference between the medium
surrounding the Cygnus Loop and the general background, observations
were carried out in randomly selected directions located
at $\vert b \vert \geq 5^\circ$, as the Cygnus Loop is. Specific
intensities as low as $\sim 0.07 \times$ 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$~were measured, which
gives an idea of the instrumental sensitivity. As expected, line brightness
is not uniform in the galactic background. [OII]3729 was observed and
detected 13 times over the four observing runs.
Specific intensities comparable to some of those observed in
regions contiguous to the Cygnus Loop (0.89 and 0.64 $\times$
10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$) were found in only two directions. Elsewhere they were
markedly smaller, and in the mean [OII]3729 = 0.47$\pm 0.17 \times$ 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$.
Out of 9 measurements, [OIII] was detected in 7 positions. [OIII]5007 =
0.68 $\times$ 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$~in the brightest sky background region,
similar to what we found in some regions around the SNR, but in all
other directions the line was much fainter. In the mean,
[OIII]5007 = 0.25$\pm 0.24 \times$ 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$. In most sites we
measured either one or the other line due to the complications involved
in changing the instrumental setup. Both lines were measured only in two
directions, where we found 3729/5007 = 3.13 and 7.80. This implies that
N(O$^+$)/N(O$^{+2}) \simeq$ 6.4 and 16.8 (for T$_e$ = 8000 K),
a rather low level of ionization. HeI 5876 was not detected, confirming
the faintness of this line in the general background (Reynolds
\& Tufte 1995).
As far as we know, this is the first time that O$^+$ emission
from the diffuse interstellar medium has been reported.
Reynolds (1985) observed [OIII]5007 in three directions: the
specific intensity was less than 0.5 $\times$ 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$~in
one of them, 2 and 1.8 $\times$ 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$~in the other two. In
these two, which are amongst the brightest sky background regions
as defined by the H$\alpha$ intensity (Reynolds 1983), [OIII]5007 is
substantially brighter than anything we find beyond the Cygnus Loop.
We measured [OIII]5007 at the second brightest
region ($\it l$ = 96.0$^\circ$, $\it b$ = 0.0$^\circ$)
and obtained 0.68 \menmas 0.15 0.20 $\times$ 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$, 2.6 times less
than Reynolds' value. The discrepancy is probably related to the
vast difference in aperture sizes (49' $\it vs.$ 2.5'), so that
a region with particularly intense emission was included in
Reynolds' diaphragm but not in ours.
Thus, there is a measurable background contribution to the
intensity of the oxygen lines but, as can be seen from
Tables 1 to 3, emission is generally larger in the medium surrounding
the Cygnus Loop: [OII]3729 is at least 1.5 times brighter
around the Cygnus Loop than in the general background, whereas [OIII]5007
is between 2 and 5 times more intense. Thus, the data supports the
conclusion that [OII]3729 and [OIII]5007 in the medium just beyond the
Cygnus Loop are usually brighter than in the general galactic
background at least up to a distance of $\simeq$ 15 pc from
the shock front (about 0.6 times the radius of the remnant). There is
also marginal but suggestive evidence indicating that the degree of
ionization is higher in the medium around the Cygnus Loop.
The absence of HeI 5876 emission merits discussion given the detection
of [OIII]5007, since producing O$^{+2}$ requires more energetic photons
than ionizing helium (35.1 {\it vs.} 24.6 eV). We note that there is
a precedent:
Reynolds (1985) found intense [OIII]5007 emission at
$\it l$ = 194.0$^\circ$ $\it b$ = 0.0$^\circ$, but Reynolds \&
Tufte (1995) searched for HeI 5876 at this location with negative
results, implying that He$^+$/He $\leq$ 0.3. In general, I(5876)/I(5007) =
[$\epsilon$(5876)/$\epsilon$(5007)] $\times$ He$^+$/O$^{+2}$, where
$\epsilon$(5876) and $\epsilon$(5007) are the emissivities for
HeI 5876 and [OIII]5007. For cosmic abundances, and in the low density
limit, I(5876)/I(5007) = (0.08, 0.02, 0.008) (He$^+$/He)(O/O$^{+2}$)
for T = 6000, 8000 and 10000 K. The fraction of doubly ionized
oxygen is determined from the previously calculated
O$^+$/O$^{+2}$ ratio, assuming that there are no higher ionization
stages and that O$^0$/O$^{+2}$= 2. And since I(5007) $\simeq~0.9 \times$
10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$~in the medium surrounding the Cygnus Loop, it follows that
I(5876) $\simeq$ (0.42, 0.09, 0.03) (He$^+$/He) $\times$ 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$.
Finally, the non-detection of HeI 5876 implies that I(5876)
$\leq$ 0.07 $\times$ 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$ (our detection limit). Consequently,
HeI 5876 emission will be under our detection threshold if
(He$^+$/He) $\leq$ (0.17, 0.78 or 1) for the aforementioned temperatures.
Additionally, since He$^+$/He must be larger than O$^{+2}$/O,
we conclude that the ambient temperature is larger than $\sim$ 6000 K.
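The thresholds quoted above can be verified with a short script. This is a sketch using only the coefficients and ionic ratios given in the text, and taking the detection limit as the faintest intensity actually measured, $0.07 \times$ 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$:

```python
# Sketch of the HeI 5876 detection-limit argument; all inputs from the text.
# I(5876)/I(5007) = c(T) * (He+/He) * (O/O+2) in the low density limit.
coeff = {6000: 0.08, 8000: 0.02, 10000: 0.008}   # c(T), cosmic abundances
o_plus = {6000: 2.7, 8000: 1.8, 10000: 1.5}      # N(O+)/N(O+2)
I5007 = 0.9     # 1e-7 erg cm^-2 s^-1 sr^-1, around the Cygnus Loop
limit = 0.07    # detection limit, same units (faintest intensity measured)

results = {}
for T, c in coeff.items():
    o_total = 1.0 + o_plus[T] + 2.0      # O/O+2, assuming O0/O+2 = 2
    I5876 = c * o_total * I5007          # predicted I(5876) for He+/He = 1
    results[T] = (I5876, min(limit / I5876, 1.0))

for T, (I, he) in sorted(results.items()):
    print(f"T = {T} K: I(5876) ~ {I:.2f} (He+/He); hidden if He+/He <= {he:.2f}")
```

The recovered He$^+$/He upper limits (0.17, $\simeq$0.8 and 1) match the values quoted in the text to within rounding.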
\section{Discussion}
The medium revealed by our observations has the general
properties of the partially ionized warm component of the interstellar
medium, which is so favourable for the observability of supernova remnants
(Kafatos {\it et al.} 1980). But it is different insofar as it is
generally brighter, and a couple of observations on sky background regions
also suggest that the degree of ionization, given by N(O$^+$)/N(O$^{+2}$),
is larger around the Cygnus Loop. On the other hand we did not
detect HeI 5876 emission, and in this respect there is no difference
between the medium just beyond the Cygnus Loop and the general galactic
background (Reynolds and Tufte 1995). But there are
several reasons to expect somewhat different properties in the
medium surrounding the Cygnus Loop: the remnant has been included in the
so called Cygnus superbubble, along with several OB associations,
the SN progenitor might have been a source of ionizing energy, the SN
itself produced a large amount of UV photons and, finally, ionizing
radiation is also generated by the expanding shock wave. We will
discuss these possible sources in the following paragraphs.
The Cygnus Loop is at the southern edge of the extremely rich and complex
region known as the Cygnus superbubble, which has been extensively
described and analysed by Bochkarev \& Sitnik (1985). The superbubble
has seven OB associations containing
48 O type stars and nearly 70 B type stars. With the exception of
Cyg OB4 and Cyg OB7, all of them are at a distance of 1.2 kpc or
more. Cyg OB7 is at approximately the same distance as the Cygnus Loop,
but is located some 20$^\circ$ away from it (about 280 pc).
Cyg OB4 is also relatively near, but though it has been classified
as an OB association, it does not contain any O or B star. Thus, it is
doubtful that these OB associations can account for the ionization
of the medium surrounding the Cygnus Loop. We also contemplated
the possibility that this medium is an extended low surface brightness
HII region produced by an early type star in the vicinity of the SNR.
A visual inspection of POSS plates and a thorough search
of O and B stars catalogs (Cruz-Gonz\'alez {\it et al.} 1974; Garmany,
Conti \& Chiosi 1982) renders no support to this possibility. The SAO
catalog was also explored with negative results.
The ionized medium around the Cygnus Loop may also be the relic HII
region of the progenitor star. The progenitor should have produced
P$_{UV}~\sim~5 \times 10^{48}$ N$_H ^2$ UV photons per second to
create a 50 pc Str\"omgren sphere if the medium is fully ionized,
as the presence of O$^{+2}$ seems to imply (but notice that
Graham {\it et al.} (1991) found shocked H$_2$ in the Cygnus Loop). The
mean particle density in the medium where the remnant evolved at least
until recently is $\sim$ 0.1 - 0.2 cm$^{-3}$ (Ku {\it et al.} 1984;
Levenson {\it et al.} 1997), which signifies that the required spectral
type of the progenitor star must have been earlier than or equal to B0.
Furthermore, the 3729/5007 ratio ($\simeq$ 1 with no
reddening correction, about 1.5 for a 1 magnitude visual extinction)
implies that the effective temperature of the ionizing star is close to
35000 K (Stasinska 1978), corresponding to a spectral type slightly
later than O8. Under this hypothesis, the mass of the progenitor
would have been between 20 and 25 \msol.
Such a star spends most of its lifetime as a blue giant and only
$\sim 1 \%$ of its existence ($\sim 10^5$ yr) as a red supergiant
(Brunish \& Truran 1982). This is substantially less than the
recombination time ($\sim~10^5/N_H$ yr). Thus, an O8 or O9 progenitor
surrounded by a pervasive low density medium can account for our
observations.
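The photon budget and recombination time used in this argument follow from the standard Str\"omgren relations. The following minimal sketch assumes a case-B recombination coefficient $\alpha \simeq 2.6 \times 10^{-13}$ cm$^3$ s$^{-1}$ (an assumed value appropriate near 10$^4$ K, not quoted in the text):

```python
import math

ALPHA_B = 2.6e-13   # case-B recombination coefficient, cm^3 s^-1 (assumed)
PC = 3.086e18       # cm per parsec
YR = 3.156e7        # s per year

def stromgren_rate(R_pc, n):
    """Ionizing photon rate needed to keep a sphere of radius R_pc ionized."""
    R = R_pc * PC
    return (4.0 / 3.0) * math.pi * R**3 * n**2 * ALPHA_B

P = stromgren_rate(50.0, 1.0)        # ~5e48 N_H^2 photons/s, as quoted
t_rec = 1.0 / (ALPHA_B * 1.0) / YR   # recombination time for N_H = 1 cm^-3
print(f"P_UV ~ {P:.1e} N_H^2 s^-1, t_rec ~ {t_rec:.1e}/N_H yr")
```

This reproduces the quoted P$_{UV} \sim 5 \times 10^{48}$ N$_H^2$ s$^{-1}$ scaling to within $\sim$20$\%$, and the recombination time $\sim 10^5/$N$_H$ yr.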
On the other hand, based on the X-ray morphology of the SNR,
Levenson {\it et al.} (1997) favor a scenario with a progenitor
of spectral type later than B0. According to them, the Cygnus Loop
evolved within the $\sim$ 20 pc homogeneous low-density HII region created
by this progenitor, and is now bursting into the relatively dense and inhomogeneous medium surrounding it, in the manner described by McKee
{\it et al.} (1984). This would explain the existence of abundant local
inhomogeneities on the external surface of a nearly circular remnant.
Notice that an earlier type progenitor would create a larger homogeneous
cavity.
The UV radiation produced by the SN explosion can also be an important
ionizing source, as was palpably revealed when narrow emission
lines appeared in the UV spectra of SN1987A $\sim$ 70 days after
the event (Fransson {\it et al.} 1989). At least some 10$^{44}$ erg
of ionizing energy was required to produce these lines (Fransson
{\it et al.} 1989), substantially less than the $10^{46}-10^{47}$ erg that
hydrodynamical models had predicted for the ionizing burst of
SN1987A (Shigeyama, Nomoto \& Hashimoto 1988; Woosley 1988). But values as
large as $10^{48}-10^{49}$ erg in ionizing energy have been mentioned in
the literature (Chevalier 1977; Chevalier 1990). If the mean photon energy
is 20 eV, the largest radiation ``pulse'' ionizes the surrounding medium
up to a distance of 14 N$_H ^{-1/3}$ pc at the most, where N$_H$
is the mean hydrogen density. The density would have to
be smaller than 0.02 cm$^{-3}$ in order to produce a 50 pc bubble of
ionized gas. Observations and models for the evolution of the Cygnus
Loop lead to substantially larger mean densities in the surrounding medium
(e.g. Ku {\it et al.} 1984).
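The numbers in this estimate can be reproduced with a few lines. This sketch assumes one ionizing photon per hydrogen atom and the largest quoted pulse, $10^{49}$ erg at a mean photon energy of 20 eV:

```python
import math

EV = 1.602e-12   # erg per eV
PC = 3.086e18    # cm per parsec

E_ion = 1e49      # erg, largest ionizing ``pulse'' quoted in the literature
E_photon = 20.0   # eV, mean photon energy (assumed, as in the text)
n_photons = E_ion / (E_photon * EV)

def pulse_radius_pc(n_h):
    """Radius ionized by one photon per H atom: (4/3) pi R^3 N_H = n_photons."""
    R = (3.0 * n_photons / (4.0 * math.pi * n_h)) ** (1.0 / 3.0)
    return R / PC

R1 = pulse_radius_pc(1.0)                              # ~14 pc for N_H = 1
n_max = n_photons / ((4.0 / 3.0) * math.pi * (50.0 * PC) ** 3)
print(f"R ~ {R1:.1f} N_H^(-1/3) pc; N_H < {n_max:.3f} cm^-3 for a 50 pc bubble")
```

This recovers both the 14 N$_H^{-1/3}$ pc radius and the 0.02 cm$^{-3}$ density ceiling quoted above.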
Ultraviolet photons are constantly supplied by the expanding SNR,
since the shock-heated particles produce ionizing radiation as they
move downstream. This has been discussed in the numerous radiative shock
wave models developed over the last 25 years (e.g. Cox 1972;
Dopita 1977; Raymond 1979; Cox \& Raymond 1985; Shull \& McKee 1979;
Binette, Dopita \& Tuohy 1985; Sutherland,
Bicknell \& Dopita 1993). The effect of this ionizing
radiation on the upstream gas has been considered in the context of AGN
(Daltabuit \& Cox 1972) or, more recently, of the emission line filaments
in Centaurus A (Sutherland {\it et al.} 1993). A review of the many issues
raised by this question was written by Dopita (1995). But to the best of
our knowledge, little attention has been directed to the effect of the
photoionizing flux of SNR's on the galactic interstellar medium.
At this point we are specifically interested in determining the size
of the bubble of ionized gas that can result from the UV flux produced
by the shock heated particles, in order to establish if this energy
source is sufficient to create the large sphere of ionized gas that is
implied by our observations. An estimate of this quantity can
be obtained following a very simple line of argument.
The number of upstream-moving photons produced each second by a SNR
expanding into a medium with density $N_0$ is given by
\begin{equation}
P_{UV} = 4 \pi R_0^2 N_0 V_0 \phi_{UV}
\end{equation}
where $\phi_{UV}$ is the number of upstream-moving UV photons produced
per shocked particle, and $R_0$ and $V_0$ are the remnant's radius and
expansion velocity. Setting this quantity equal to the total number of
recombinations per second in the ionized region, we obtain
\begin{equation}
(R_i/R_0)^3 = 74.8 V_7 \phi_{UV}/(N_0 R_{pc}) + 1
\end{equation}
where $R_i$ is the radius of the ionized volume measured from
the remnant's center, $V_7$ is the shock velocity in 100 km s$^{-1}$
and $R_{pc}$ is the radius of the SNR in parsec. The latter can
be determined assuming that the evolution of the Cygnus Loop is
described by Sedov's (1959) solution. In the case of
the Cygnus Loop this assumption can be objectionable,
but is probably adequate given the scope of this discussion. In this
case,
\begin{equation}
R_{pc} = 19.4 (E_{50}/N_0)^{1/3} V_7^{-2/3}
\end{equation}
where $E_{50}$ is the kinetic energy deposited in the SNR in
10$^{50}$ erg. Equations (2) and (3) lead to,
\begin{equation}
(R_i/R_0)^3 = 3.86 V_7 ^{5/3} \phi_{UV}/(E_{50} N_0^2)^{1/3} + 1
\end{equation}
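As a consistency check on the algebra, equation (4) should reproduce equation (2) once equation (3) is substituted for $R_{pc}$; a quick numerical sketch with arbitrary test parameters:

```python
# Numerical check: substituting equation (3) into (2) should give (4).
def rhs_eq2(V7, phi, N0, Rpc):
    return 74.8 * V7 * phi / (N0 * Rpc) + 1.0

def Rpc_eq3(V7, E50, N0):
    return 19.4 * (E50 / N0) ** (1.0 / 3.0) * V7 ** (-2.0 / 3.0)

def rhs_eq4(V7, phi, E50, N0):
    return 3.86 * V7 ** (5.0 / 3.0) * phi / (E50 * N0**2) ** (1.0 / 3.0) + 1.0

for V7, phi, E50, N0 in [(1.0, 0.45, 1.0, 1.0), (2.0, 3.69, 3.0, 0.2)]:
    a = rhs_eq2(V7, phi, N0, Rpc_eq3(V7, E50, N0))
    b = rhs_eq4(V7, phi, E50, N0)
    print(a, b)   # agree to ~0.1% (74.8/19.4 = 3.856 vs the rounded 3.86)
```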
For a given metallicity the number of ionizing photons per particle
only depends on the shock velocity. Shull $\&$ McKee (1979) present their
results in a more
amenable fashion than Binette {\it et al.} (1985) and Dopita (1995), and
the following analytical approximation for $\phi_{UV}$ can be derived
from their work,
\begin{equation}
\phi_{UV} \simeq 1.08 (V_7^2 - 0.58)
\end{equation}
Their calculations stop at $V_7$ = 1.3, but it is probably correct
to extend this approximation to larger velocities (the functional
dependence should not change, see Dopita 1995). For an
equilibrium cooling function the shock can be radiative up to
$V_7 ~ \simeq$ 1.5 - 2. But the plasma behind the shock front will
be underionized with respect to collisional ionization equilibrium,
and in this condition cooling is more efficient (Sutherland \&
Dopita 1993). Thus, the shock will become radiative at somewhat higher
velocities.
The size of the region that can be ionized by photons produced by the
shock heated particles, R$_i$, can now be determined from equations (3),
(4) and (5). Results as a function of shock velocity are presented in
Table 4 for various combinations of ($E_{50}$,$N_0$): (1,1), (3,0.2)
and (1,0.2). The second set of parameters is representative of the
Cygnus Loop (Ku {\it et al.} 1984). As
can be seen, a 50 pc ionized bubble can be produced even in the most
conservative case. The size of the region of ionized gas is surprisingly
large when the standard parameters for the Cygnus Loop are considered.
Furthermore, since the evolutionary timescale of a SNR is much
shorter than the recombination time, it follows that the ionizing
radiation supplied by the remnant as it continues evolving would
further increase the size of the ionized region. Consequently it
appears that, at least from the point of view of the energy budget,
radiative shock waves can produce a very extended environment of ionized
matter around them. A stricter analysis is no doubt necessary, but it seems
improbable that it will lead to a qualitatively different conclusion
on the number of ionizing photons produced by the expanding SNR,
and consequently on the size of the region that will be influenced by them.
But while there seems to be little doubt that SNR's can ionize large
regions of the interstellar medium, it remains to be shown that these
objects actually do so in the course of their evolution. This
is an essential point in relation to this work.
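Equations (3)-(5) can be combined into a short routine that reproduces estimates of the kind listed in Table 4. This is a sketch; the parameter sets ($E_{50}$, $N_0$) follow the text:

```python
def phi_uv(V7):
    """Equation (5): upstream-moving ionizing photons per shocked particle."""
    return 1.08 * (V7**2 - 0.58)

def R_i_pc(V7, E50, N0):
    """Radius of the ionized region, in pc, from equations (3)-(5)."""
    Rpc = 19.4 * (E50 / N0) ** (1.0 / 3.0) * V7 ** (-2.0 / 3.0)   # eq. (3)
    ratio3 = (3.86 * V7 ** (5.0 / 3.0) * phi_uv(V7)
              / (E50 * N0**2) ** (1.0 / 3.0) + 1.0)               # eq. (4)
    return Rpc * ratio3 ** (1.0 / 3.0)

# (E50, N0) = (1,1) conservative, (3,0.2) representative of the Cygnus Loop,
# and (1,0.2); shock velocities in units of 100 km/s.
for E50, N0 in [(1.0, 1.0), (3.0, 0.2), (1.0, 0.2)]:
    print(E50, N0, [round(R_i_pc(v, E50, N0), 1) for v in (1.0, 1.5, 2.0)])
```

With the representative Cygnus Loop parameters, $R_i$ exceeds 50 pc well within the radiative velocity range, in line with the conclusion drawn from Table 4.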
It is worth pointing out that, in comparison to other ionizing sources,
the ionizing energy produced by
SNR's can be of considerable importance. Integrating equation (1) with
the aforementioned hypotheses, it is easy to see that a SNR
will produce 1.2 $\times 10^{60} E_{50}$ UV photons as it slows
down from 250 to 80 km s$^{-1}$, $\sim 30 \%$ of its initial kinetic
energy if the mean photon energy is 15 eV. On the other hand, any
main sequence B0 or O type star will produce some 10$^{63}$ UV photons
during its lifetime. Considering that SN's are some 20 times more
abundant than O type stars, this implies that, during their radiative
phase, SNR's will generate about a tenth of the UV flux produced by all B0
and O type stars. This is not a small number. Furthermore, SNR's will
be a major source of UV radiation in stellar systems lacking massive stars.
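The $\sim 30\%$ figure is easy to verify; a sketch taking the photon count per $E_{50}$ and the 15 eV mean photon energy from the text:

```python
EV = 1.602e-12   # erg per eV

n_uv = 1.2e60    # UV photons per E50, from integrating equation (1)
E_photon = 15.0  # eV, assumed mean photon energy
E_kin = 1e50     # erg, initial kinetic energy for E50 = 1

fraction = n_uv * E_photon * EV / E_kin
print(f"UV energy fraction of initial kinetic energy ~ {fraction:.2f}")
```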
\section{Conclusions}
Evidence was presented for the existence of an extended ionized medium
surrounding the eastern face of the Cygnus Loop, and quite possibly
the entire remnant. The shock transition is revealed by the
slow rise in the specific intensity of [OII]3729 and [OIII]5007
as the X-ray perimeter of the SNR is crossed. This is indisputable
proof that this medium is in the immediate vicinity of the Cygnus Loop.
Our most distant pointing (region E26) is $\simeq$ 36 pc away from the
remnant's center, which implies that there is ionized gas at least
up to a distance of $\simeq$ 50 pc. It would be interesting to observe
more distant regions, preferably in the O$^{+2}$ line, in order to see
whether there is an outer boundary or the medium merges smoothly with
the general background. The medium around the Cygnus
Loop is somewhat different from the general galactic background: [OII]3729
and [OIII]5007 are usually brighter, and there are indications
that the degree of ionization, given by N(O$^+$)/N(O$^{+2}$), is also
larger around the SNR. On the other hand it is similar insofar as
HeI 5876 emission is also conspicuously absent.
We explored several possible sources which may produce the ionizing
energy required to account for the existence of this medium. Viable
external sources, such as an isolated early type star or an OB
association, could not be found. We also concluded that the ionizing
radiation produced by the SN explosion was probably insufficient.
An early type (between O8 and O9, but closer to the former) progenitor
embedded in a low density medium can account for the required energy
budget, but a later type progenitor has been suggested
by Levenson {\it et al.} (1997). Finally, we
showed that the UV radiation produced by the shock heated particles
{\it can} generate a large bubble of ionized gas, but detailed modelling
is required in order to see if it {\it will} do so during the SNR's lifetime.
From the observational point of view, it is advisable to inspect other
emission lines in order to explore the spectral properties of the medium
surrounding the Cygnus Loop, and decide if it is indeed distinct
from the warm component of the interstellar medium.
Unfortunately, our instrumental resolution is insufficient to discriminate
geocoronal and galactic H$\alpha$ emission, the key line in Reynolds' research
on the properties of this component of the interstellar medium. But
other spectral lines, such as [NII]6584 \AA~and [SII]6717,6731 \AA,
are open to inspection since they are less affected by geocoronal emission.
Needless to say, similar observations of the medium surrounding other
SNR's should furnish valuable information. Targets located away from the
galactic plane are preferable, since background confusion is avoided.
Further research along
these lines may be helpful regarding the still open question on the origin
of the warm partially ionized component of the interstellar medium, given
the relatively large flux of UV photons produced by radiative shock waves.
As we pointed out, the photoionizing flux produced by SNR's will be
particularly important in systems lacking massive stars.
{\bf Acknowledgments}
The excellent support received from C. Blondel, P. Mulet and the technical
staff at San Pedro M\'artir observatory is gratefully acknowledged.
We thank the anonymous referee for the comments and suggestions
that led to great improvements on this paper, and in particular for
pointing out the effect of contamination from the other order to
the standard star continuum.
\begin{center} References \end{center}
\begin{description}
\item Ballet, J., Caplan, J., Rothenflug, R., Dubreuil, D. \& Soutoul, A.
1989, \aa 211 217
\item Ballet, J. \& Rothenflug, R. 1989, \aa 218 277
\item Binette, L., Dopita, M.A. \& Tuohy, I.R. 1985, \apj 297 476
\item Bochkarev, N.G. \& Sitnik, T.G. 1985, \apss 108 237
\item Brunish, W.M. \& Truran, J.W. 1982, \apjsupp 49 447
\item Chevalier, R.A. 1977, \annrev 15 175
\item Chevalier, R.A. 1990, in Supernovae, A\&A Library, ed. A.G. Petschek
(Springer-Verlag, New York), 91
\item Cox, D.P. 1972, \apj 178 143
\item Cox, D.P. \& Raymond, J.C. 1985, \apj 298 651
\item Cruz-Gonz\'alez, C., Recillas-Cruz, E., Costero, R., Peimbert, M. \&
Torres-Peimbert, S. 1974, \revmex 1 211
\item Daltabuit, E. \& Cox, D.P. 1972, \apj 173 L173
\item Decourchelle, A., Sauvageot, J.L., Ballet, J. \& Aschenbach, B. 1997, \aa 326
811
\item DeNoyer, L.K. 1975, \apj 196 479
\item Dopita, M.A. 1977, \apjsupp 33 437
\item Dopita, M.A. 1995, in The Analysis of Emission Lines, STScI Symp. Series
8, ed. R.E. Williams \& M. Livio (Cambridge, New York), 65
\item Dubreuil, D., Sauvageot, J.L., Blondel, C., Dhenain, G., Mestreau, P. \&
Mullet, P. 1995, Exp. Astron. 6, 257
\item Fransson, C., Cassatella, A., Gilmozzi, R., Kirshner, R.P., Panagia, N.,
Sonneborn, G. \& Wamsteker, W. 1989, \apj 336 429
\item Garmany, C.D., Conti, P.S., Chiosi, C. 1982, \apj 263 777
\item Georgelin, Y.M., Lortet-Zuckerman, M.C. \& Monnet, G. 1975, \aa 42 273
\item Graham, J.R., Wright, G.S., Hester, J.J. \& Longmore, A.J. 1991, \aj 101 175
\item Green, D. A. 1988, \apss 148 3
\item Hester, J.J., Raymond, J.C. \& Blair, W.P. 1994, \apj 420 721
\item Kafatos, M., Sofia, S., Bruhweiler, F. \& Gull, T. 1980, \apj 242 294
\item Ku, W.H.M., Kahn, S.M., Pisarski, R. \& Long, K.S. 1984, \apj 278 615
\item Levenson, N.A., Graham, J.R., Aschenbach, W.P., Blair, W.P.,
Brinkmann, W., Busser, J.U., Egger, R., Fesen, R.A., Hester, J.J.,
Kahn, S.M., Klein, R.M., McKee, C.F., Petre, R., Pisarski, R.,
Raymond, J.C. \& Snowden, S.L. 1997, \apj 484 304
\item McKee, C.F., Van Buren, D. \& Lazareff, B. 1984, \apj 278 L115
\item Raymond, J.C. 1979, \apjsupp 35 419
\item Reynolds, R.J. 1983, \apj 268 698
\item Reynolds, R.J. 1985, \apj 298 L27
\item Reynolds, R.J. \& Tufte, S.L. 1995, \apj 439 L17
\item Sauvageot, J.L. \& Decourchelle, A. 1995, \aa 296 201
\item Sauvageot, J.L., Ballet, J., Dubreuil, D., Rothenflug, R., Soutoul, A.
\& Caplan, J. 1990, \aa 232 203
\item Scoville, N.Z., Irvine, W.M., Wannier, P.G. \& Predmore, C.R. 1977,
\apj 216 320
\item Sedov, L.I. 1959, Similarity and dimensional methods in mechanics,
Academic Press, New York.
\item Shigeyama, T., Nomoto, K. \& Hashimoto, M. 1988, \aa 196 141
\item Shull, J.M. \& McKee, C.F. 1979, \apj 227 131
\item Stasinska, G. 1978, \aasupp 32 429
\item Sutherland, R.S. \& Dopita, M.A. 1993, \apjsupp 88 253
\item Sutherland, R.S., Bicknell, G.V. \& Dopita, M.A. 1993, \apj 414 510
\item Woosley, S.E. 1988, \apj 324 466
\end{description}
\newpage
{\noindent {\bf Figure Captions}}
\begin{description}
\item Figure 1. [OII] lines: (a) sky, (b) region NE14.
\item Figure 2. [OIII] lines: (a) sky, (b) region NE18.
\item Figure 3. Observational pointings: (a) NE, (b) E and (c) SW.
North is up, east is to the left.
\item Figure 4. [OII] and [OIII] NE and E traces.
\end{description}
\vbox{
\halign {\strut ~#~ \hfil & \hfil ~#~ \hfil & \hfil ~#~ \hfil & \hfil ~#~
\hfil & \hfil ~#~ \hfil & \hfil ~#~ \hfil & \hfil ~#~ \hfil & \hfil # \cr
\multispan 8 \strut {\bf Table 1. Northeastern trace~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~} \hfil \cr
\noalign {\medskip \hrule \medskip}
~ \hfil & \hfil RA (1950) DEC \hfil & \hfil $\delta(")$ \hfil & \hfil 1+ $\delta/\theta$
\hfil & \hfil I([OII]3729) \hfil & \hfil I([OIII]5007) \hfil & \hfil 3729/5007 \cr
\noalign {\medskip \hrule \medskip}
NE0 \hfil & \hfil 20 53 55.8 +31 35 05 \hfil & \hfil -884 \hfil & \hfil 0.85 \hfil & \hfil ~
\hfil & \hfil 30.1 \menmas 6.1 6.1 \hfil & \hfil ~ \cr
NE1 \hfil & \hfil 20 54 05.0 +31 37 35 \hfil & \hfil -663 \hfil & \hfil 0.89 \hfil & \hfil 79.0 \menmas 0.0 0.0 \hfil & \hfil 67.8 \menmas 9.3 9.3 \hfil & \hfil 1.17 \menmas 0.14 0.18 \cr
NE2 \hfil & \hfil 20 54 20.0 +31 39 20 \hfil & \hfil -390 \hfil & \hfil 0.93 \hfil & \hfil 2.91 \menmas 0.34 0.23 \hfil & \hfil 2.89 \menmas 0.60 0.21 \hfil & \hfil 1.01 \menmas 0.18 0.37 \cr
NE3 \hfil & \hfil 20 54 19.0 +31 41 05 \hfil & \hfil -325 \hfil & \hfil 0.94 \hfil & \hfil 1.71 \menmas 0.23 0.19 \hfil & \hfil 1.83 \menmas 0.31 0.36 \hfil & \hfil 0.93 \menmas 0.26 0.32 \cr
NE4 \hfil & \hfil 20 54 26.0 +31 42 50 \hfil & \hfil -162 \hfil & \hfil 0.97 \hfil & \hfil 1.13 \menmas 0.22 0.16 \hfil & \hfil 1.38 \menmas 0.33 0.33 \hfil & \hfil 0.82 \menmas 0.30 0.41 \cr
NE5 \hfil & \hfil 20 54 33.0 +31 44 35 \hfil & \hfil 0 \hfil & \hfil 1 \hfil & \hfil 1.02 \menmas 0.23 0.18
\hfil & \hfil 0.85 \menmas 0.24 0.24 \hfil & \hfil 1.20 \menmas 0.47 0.76 \cr
NE6 \hfil & \hfil 20 54 40.0 +31 46 20 \hfil & \hfil 162 \hfil & \hfil 1.03 \hfil & \hfil 0.90 \menmas 0.10 0.20 \hfil & \hfil 0.99 \menmas 0.35 0.48 \hfil & \hfil 0.91 \menmas 0.37 0.82 \cr
NE7 \hfil & \hfil 20 54 47.0 +31 48 05 \hfil & \hfil 325 \hfil & \hfil 1.06 \hfil & \hfil 0.80 \menmas 0.21 0.15
\hfil & \hfil 0.96 \menmas 0.25 0.24 \hfil & \hfil 0.83 \menmas 0.34 0.50 \cr
NE8 \hfil & \hfil 20 54 54.0 +31 49 50 \hfil & \hfil 487 \hfil & \hfil 1.08 \hfil & \hfil 0.67 \menmas 0.19 0.13 \hfil & \hfil 0.71 \menmas 0.19 0.15 \hfil & \hfil 0.94 \menmas 0.38 0.58 \cr
NE9 \hfil & \hfil 20 55 01.0 +31 51 35 \hfil & \hfil 650 \hfil & \hfil 1.11 \hfil & \hfil 0.78 \menmas 0.23 0.29
\hfil & \hfil 0.73 \menmas 0.21 0.26 \hfil & \hfil 1.07 \menmas 0.42 1.02 \cr
NE12 \hfil & \hfil 20 55 22.0 +31 54 50 \hfil & \hfil 1137 \hfil & \hfil 1.19 \hfil & \hfil 0.65 \menmas 0.21 0.10
\hfil & \hfil ~ \hfil & \hfil ~ \cr
NE14 \hfil & \hfil 20 55 31.0 +32 00 00 \hfil & \hfil 1485 \hfil & \hfil 1.25 \hfil & \hfil 0.70 \menmas 0.21 0.10
\hfil & \hfil ~ \hfil & \hfil ~ \cr
NE15 \hfil & \hfil 20 55 43.0 +32 02 05 \hfil & \hfil 1702 \hfil & \hfil 1.29 \hfil & \hfil 0.77 \menmas 0.25 0.20 \hfil & \hfil 0.65 \menmas 0.16 0.19 \hfil & \hfil 1.18 \menmas 0.56 0.88 \cr
NE18 \hfil & \hfil 20 56 04.0 +32 07 20 \hfil & \hfil 2189 \hfil & \hfil 1.37 \hfil & \hfil 0.83 \menmas 0.18 0.13 \hfil & \hfil 0.85 \menmas 0.31 0.31 \hfil & \hfil 0.98 \menmas 0.42 0.81 \cr
NE22 \hfil & \hfil 20 56 23.0 +32 17 02 \hfil & \hfil 2754 \hfil & \hfil 1.47 \hfil & \hfil 0.65 \menmas 0.15 0.20
\hfil & \hfil 0.95 \menmas 0.24 0.30 \hfil & \hfil 0.69 \menmas 0.37 0.33 \cr
\noalign {\medskip \hrule \medskip}
}}
\begin{description}
\item Intensity in 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$
\end{description}
\vbox{
\halign {\strut ~#~ \hfil & \hfil ~#~ \hfil & \hfil ~#~ \hfil & \hfil ~#~
\hfil & \hfil ~#~ \hfil & \hfil ~#~ \hfil & \hfil ~#~ \hfil & \hfil # \cr
\multispan 8 \strut {\bf Table 2. Eastern trace~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~} \hfil \cr
\noalign {\medskip \hrule \medskip}
~ \hfil & \hfil RA (1950) DEC \hfil & \hfil $\delta(")$ \hfil & \hfil 1+ $\delta/\theta$
\hfil & \hfil I([OII]3729) \hfil & \hfil I([OIII]5007) \hfil & \hfil 3729/5007 \cr
\noalign {\medskip \hrule \medskip}
E0 \hfil & \hfil 20 55 13.8 +30 54 30 \hfil & \hfil -705 \hfil & \hfil 0.88
\hfil & \hfil 45.20 \menmas 1.70 1.30 \hfil & \hfil 198.0 \menmas 15.8 15.8 \hfil & \hfil 0.23
\menmas 0.03 0.02 \cr
E1 \hfil & \hfil 20 55 24.0 +30 54 30 \hfil & \hfil -531 \hfil & \hfil 0.91
\hfil & \hfil 18.20 \menmas 1.10 0.40 \hfil & \hfil 111.3 \menmas 11.8 11.8 \hfil & \hfil 0.16
\menmas 0.02 0.02 \cr
E2 \hfil & \hfil 20 55 34.0 +30 54 30 \hfil & \hfil -354 \hfil & \hfil 0.94
\hfil & \hfil 2.98 \menmas 0.27 0.36 \hfil & \hfil 4.86 \menmas 0.25 0.24 \hfil & \hfil 0.62
\menmas 0.09 0.10 \cr
E3 \hfil & \hfil 20 55 44.0 +30 54 30 \hfil & \hfil -177 \hfil & \hfil 0.97
\hfil & \hfil 1.45 \menmas 0.20 0.03 \hfil & \hfil 2.04 \menmas 0.70 0.34 \hfil & \hfil 0.71
\menmas 0.18 0.39 \cr
E4 \hfil & \hfil 20 55 54.0 +30 54 30 \hfil & \hfil 0 \hfil & \hfil 1.00
\hfil & \hfil 1.54 \menmas 0.35 0.10 \hfil & \hfil 1.43 \menmas 0.25 0.29 \hfil & \hfil 1.08
\menmas 0.38 0.31 \cr
E5 \hfil & \hfil 20 56 04.0 +30 54 30 \hfil & \hfil 177 \hfil & \hfil 1.03
\hfil & \hfil 1.56 \menmas 0.47 0.14 \hfil & \hfil 1.61 \menmas 0.35 0.43 \hfil & \hfil 0.96
\menmas 0.42 0.38 \cr
E6 \hfil & \hfil 20 56 14.0 +30 54 30 \hfil & \hfil 354 \hfil & \hfil 1.06
\hfil & \hfil 1.15 \menmas 0.29 0.22 \hfil & \hfil 2.14 \menmas 0.24 0.34 \hfil & \hfil 0.54
\menmas 0.19 0.18 \cr
E8 \hfil & \hfil 20 56 34.0 +30 54 30 \hfil & \hfil 708 \hfil & \hfil 1.12
\hfil & \hfil 0.91 \menmas 0.19 0.16 \hfil & \hfil 1.09 \menmas 0.33 0.26 \hfil & \hfil 0.84
\menmas 0.30 0.56 \cr
E10 \hfil & \hfil 20 56 54.0 +30 54 30 \hfil & \hfil 1062 \hfil & \hfil 1.18
\hfil & \hfil 0.78 \menmas 0.24 0.15 \hfil & \hfil 1.04 \menmas 0.39 0.18 \hfil & \hfil 0.75
\menmas 0.30 0.68 \cr
E14 \hfil & \hfil 20 57 34.0 +30 54 30 \hfil & \hfil 1770 \hfil & \hfil 1.30
\hfil & \hfil 0.77 \menmas 0.14 0.11 \hfil & \hfil 0.93 \menmas 0.21 0.23 \hfil & \hfil 0.83
\menmas 0.29 0.40 \cr
E18 \hfil & \hfil 20 58 14.0 +30 54 30 \hfil & \hfil 2478 \hfil & \hfil 1.42
\hfil & \hfil 0.81 \menmas 0.19 0.10 \hfil & \hfil 0.45 \menmas 0.16 0.18 \hfil & \hfil 1.80
\menmas 0.81 1.37 \cr
E26 \hfil & \hfil 20 59 34.0 +30 54 30 \hfil & \hfil 3894 \hfil & \hfil 1.67
\hfil & \hfil 0.61 \menmas 0.08 0.16 \hfil & \hfil 0.46 \menmas 0.15 0.20 \hfil & \hfil 1.31
\menmas 0.29 1.15 \cr
\noalign {\medskip \hrule \medskip}
}}
\begin{description}
\item Intensity in $10^{-7}\,\mathrm{erg\,cm^{-2}\,s^{-1}\,sr^{-1}}$
\end{description}
\vbox{
\halign {\strut ~#~ \hfil & \hfil ~#~ \hfil & \hfil ~#~ \hfil & \hfil ~#~
\hfil & \hfil ~#~ \hfil & \hfil ~#~ \hfil & \hfil ~#~ \hfil & \hfil # \cr
\multispan 8 \strut {\bf Table 3. Southwestern trace} \hfil \cr
\noalign {\medskip \hrule \medskip}
~ \hfil & \hfil RA (1950) DEC \hfil & \hfil $\delta(")$ \hfil & \hfil 1+ $\delta/\theta$
\hfil & \hfil I([OII]3729) \hfil & \hfil I([OIII]5007) \hfil & \hfil 3729/5007 \cr
\noalign {\medskip \hrule \medskip}
SW0 \hfil & \hfil 20 44 55.3 +30 00 36 \hfil & \hfil -631 \hfil & \hfil 0.89
\hfil & \hfil 7.02 \menmas 0.52 0.48 \hfil & \hfil 15.1 \menmas 0.8 0.9 \hfil & \hfil 0.46
\menmas 0.06 0.06 \cr
SW1 \hfil & \hfil 20 44 35.3 +29 57 36 \hfil & \hfil -314 \hfil & \hfil 0.95
\hfil & \hfil \hfil & \hfil 48.5 \hfil & \hfil \cr
SW1.5 \hfil & \hfil 20 44 25.3 +29 56 08 \hfil & \hfil -158 \hfil & \hfil 0.97
\hfil & \hfil 1.71 \menmas 0.20 0.16 \hfil & \hfil 1.66 \menmas 0.64 0.35 \hfil & \hfil 1.03
\menmas 0.28 0.79 \cr
SW2 \hfil & \hfil 20 44 15.3 +29 54 36 \hfil & \hfil 0 \hfil & \hfil 1.00
\hfil & \hfil 1.14 \menmas 0.17 0.22 \hfil & \hfil 0.78 \menmas 0.30 0.54 \hfil & \hfil 1.47
\menmas 0.74 1.38 \cr
SW4 \hfil & \hfil 20 43 35.3 +29 48 36 \hfil & \hfil 633 \hfil & \hfil 1.11
\hfil & \hfil 0.85 \menmas 0.18 0.23 \hfil & \hfil 0.89 \menmas 0.34 0.16 \hfil & \hfil 0.96
\menmas 0.32 1.00 \cr
\noalign {\medskip \hrule \medskip}
}}
\begin{description}
\item Intensity in $10^{-7}\,\mathrm{erg\,cm^{-2}\,s^{-1}\,sr^{-1}}$
\end{description}
\vbox{
\halign {\strut ~#~ \hfil & \hfil ~#~ \hfil & \hfil ~#~ \hfil & \hfil
~#~ \hfil & \hfil ~#~ \hfil & \hfil ~#~ \hfil & \hfil
~#~ \hfil & \hfil ~#~ \hfil & \hfil ~#~ \hfil & \hfil # \cr
\multispan 9 \strut {\bf Table 4. Photoionizing shock} \hfil \cr
\noalign {\medskip \hrule \medskip}
\hfil & \hfil $E_{50}=1$, \hfil & \hfil $N_0=1$
\hfil & \hfil $E_{50}=3$, \hfil & \hfil $N_0=0.2$
\hfil & \hfil $E_{50}=1$, \hfil & \hfil $N_0=0.2$ \cr
$V_7$ \hfil & \hfil R$_{pc}$ \hfil & \hfil R$_i$ (pc)
\hfil & \hfil R$_{pc}$ \hfil & \hfil R$_i$ (pc)
\hfil & \hfil R$_{pc}$ \hfil & \hfil R$_i$ (pc) \cr
\noalign {\medskip \hrule \medskip}
1.0 \hfil & \hfil 19 \hfil & \hfil 27
\hfil & \hfil 48 \hfil & \hfil 79
\hfil & \hfil 33 \hfil & \hfil 61 \cr
1.5 \hfil & \hfil 15 \hfil & \hfil 36
\hfil & \hfil 37 \hfil & \hfil 112
\hfil & \hfil 25 \hfil & \hfil 87 \cr
2.0 \hfil & \hfil 12 \hfil & \hfil 44
\hfil & \hfil 30 \hfil & \hfil 136
\hfil & \hfil 21 \hfil & \hfil 107 \cr
2.5 \hfil & \hfil 11 \hfil & \hfil 50
\hfil & \hfil 26 \hfil & \hfil 157
\hfil & \hfil 18 \hfil & \hfil 123 \cr
\noalign {\medskip \hrule \medskip}
}}
\begin{figure}
\psfig{file=fig1a.ps,height=11cm,width=17cm}
Figure 1a.\\
\psfig{file=fig1b.ps,height=11cm,width=17cm}
Figure 1b.\\
\end{figure}
\begin{figure}
\psfig{file=fig2a.ps,height=11cm,width=17cm}
Figure 2a.\\
\psfig{file=fig2b.ps,height=11cm,width=17cm}
Figure 2b.\\
\end{figure}
\begin{figure}
\psfig{file=fig3a.ps}
Figure 3a.\\
\end{figure}
\begin{figure}
\psfig{file=fig3b.ps}
Figure 3b.\\
\end{figure}
\begin{figure}
\psfig{file=fig3c.ps}
Figure 3c.\\
\end{figure}
\begin{figure}
\plotone{fig4.ps}
Figure 4.\\
\end{figure}
\end{document}
\section{Introduction}
\label{intro}
In reactions leading to hadronic final states,
Bose-Einstein correlations (BEC) between identical bosons are well known.
These correlations lead to an enhancement of
the number of identical bosons over that of
non-identical bosons when
the two particles are close to each other in phase space.
Experimentally this effect
was first observed for pions by Goldhaber et al.~\cite{goldhaber}.
For recent reviews see,
for example, reference~\cite{marcellini}.
In \mbox{$\mathrm{e}^+\mathrm{e}^-$}\ annihilations at a center-of-mass energy of 91 GeV, BEC\, have been observed for charged
pion pairs~\cite{opalbe,becmult,alephbe,delphibe}, for $\mathrm{K}^0_{\mathrm{S}}
\mathrm{K}^0_{\mathrm{S}}$ pairs~\cite{opalk0k0,opalk0k02,delphik0k0,alephk0k0}
and also for $\mathrm{K}^{\pm}\mathrm{K}^{\pm}$ pairs~\cite{delphikplus}.
\par
In the present paper we report on an investigation of BEC\, for charged pions in \mbox{$\mathrm{e}^+\mathrm{e}^-$}\
reactions at center-of-mass energies of 172 and 183 GeV, above the
threshold for W-pair production.
The analysis is motivated by the question of whether BEC\, for pions from different W bosons exist or not.
Theoretically this question is still not settled~\cite{lund,nobec}.
However, if such correlations do exist, this could bias significantly the
measurement of the W boson mass in fully hadronic W-pair events~\cite{lund,vato,
jadach,bib-lonnblad}. The DELPHI collaboration has published a measurement, at $\sqrt{s}=172$ GeV, of BEC\ between
pions originating from two different W bosons~\cite{bib-delphidiff},
in which, essentially, the BEC\ observed in \ensuremath{\WW\rightarrow\qq\lnu}\ events were subtracted
from those in \ensuremath{\WW\rightarrow\qq\qq}\ events.
The aim of the present analysis is to study BEC\ in
fully hadronic W-pair events (\ensuremath{\WW\rightarrow\qq\qq}), semileptonic W-pair events (\ensuremath{\WW\rightarrow\qq\lnu}), as well as
non-radiative \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events. After having established BEC\ in hadronic W decays,
BEC\, are investigated separately for three classes of pions:
those originating from the same W boson, those from different W bosons
and those from non-radiative \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events.
Note that in this analysis, tracks are not assigned to jets or W-bosons and no kinematic
fits are needed. \par
BEC\ between
identical bosons can be formally expressed in terms of the normalised
function
\begin{equation}
\label{eq_1}
C(Q) \ = \ \frac{\rho_2(p_1,p_2)}{\rho_1(p_1)\rho_1(p_2)} \ = \ \sigma
\frac{d^2\sigma}{dp_1dp_2}\bigg/\left\{\frac{d\sigma}{dp_1}
\frac{d\sigma}{dp_2}\right\} \ ,
\end{equation}
where $\sigma$ is the total boson production cross section,
$\rho_1(p_i)$ and $d\sigma/dp_i$ are the single-boson density in
momentum space and the inclusive cross section, respectively.
Similarly $\rho_2(p_1,p_2)$ and $d^2\sigma/dp_1dp_2$ are respectively
the density of the two-boson system and its inclusive cross section.
The product of the independent one-particle densities
$\rho_1(p_1)\rho_1(p_2)$ is referred to as the reference density
distribution, to which the measured two-particle distribution
is compared. The inclusive two-boson density $\rho_2(p_1,p_2)$ can be
written as:
\begin{equation}
\label{eq_rho2}
\rho_2(p_1,p_2) \ = \ \rho_1(p_1)\rho_1(p_2) + K_2(p_1,p_2) \ ,
\end{equation}
where $K_2(p_1,p_2)$ represents the two-body correlations. In the simple
case of two identical bosons the normalised density function $C(Q)$,
defined in Eq.~\ref{eq_1}, describes the two-body
correlations.
Thus one has
\begin{equation}
\label{eq_r2}
C(Q) \ = \ 1 + \tilde{K}_2(p_1,p_2) \ ,
\end{equation}
where $\tilde{K}_2(p_1,p_2) =
K_2(p_1,p_2)/[\rho_1(p_1)\rho_1(p_2)]$ is the normalised two-body
correlation term. Since BEC\ are present when the
bosons are close to one another in phase space, a natural choice is
to study them as a function of the Lorentz invariant variable $Q$ defined by
\[ Q^2 = -(p_1 - p_2)^2 = M^2_2 - 4\mu^2 \ ,\]
which approaches zero as the identical bosons move closer in phase
space. Here $p_i$ is the four-momentum vector of the $i$th particle,
$\mu$ is the boson mass (here $m_{\pi}$) and $M^2_2$ is the invariant mass squared of
the two-boson system.
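As a concrete illustration of this definition, a minimal Python sketch (all names hypothetical, not part of the analysis code) computes $Q$ from two pion four-momenta; for equal-mass particles it satisfies $Q^2 = M^2_2 - 4\mu^2$ by construction:

```python
import math

M_PI = 0.13957  # charged-pion mass in GeV/c^2

def four_momentum(px, py, pz, mass=M_PI):
    """Build an on-shell four-momentum (E, px, py, pz) in GeV."""
    e = math.sqrt(px * px + py * py + pz * pz + mass * mass)
    return (e, px, py, pz)

def q_invariant(p1, p2):
    """Q = sqrt(-(p1 - p2)^2) with metric (+,-,-,-); Q -> 0 as the
    two identical bosons move closer together in phase space."""
    de, dx, dy, dz = (a - b for a, b in zip(p1, p2))
    q2 = -(de * de - dx * dx - dy * dy - dz * dz)
    return math.sqrt(max(q2, 0.0))
```

Two pions with identical momenta give $Q = 0$; the value grows as the pair separates in phase space.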
Ideally the reference sample should contain all correlations
present in the sample
used to measure $\rho_2(p_1,p_2)$, other
than the BEC, such as those due to energy,
momentum and charge conservation, resonance decays and
global event properties.
In this analysis, the reference is chosen to be
a sample of unlike-charge pairs of pions from the same event.
Since the presence of the resonances $\omega$, $\mathrm{K^0_S}$, $\mathrm{\eta}$,
$\mathrm{\eta^{ \prime}}$, $\mathrm{\rho^0}$, $\mathrm{f_{0}}$ and $\mathrm{f_{2}}$
in the unlike-charge reference sample leads to kinematic
correlations which are not present in the like-charge sample,
the unlike-charge sample has to be corrected for this
effect using simulated events. \par
Assuming a spherically symmetric pion source with a Gaussian radial distribution, the
correlation function $C(Q)$ can be parametrised~\cite{goldhaber} by
\begin{equation}
C(Q) = N \, (1 + f_{\pi}(Q)\,\lambda\,
{\mathrm{e}} ^ {-Q^2 R^2})\,(1 + \delta
\, Q +\epsilon \, Q^2 ),
\label{eq-usedfun}
\end{equation}
where $R$ is the radius of the source and
$\lambda$
represents the strength of the correlation,
with $0 \leq \lambda \leq 1$.
A value of $\lambda=1$ corresponds
to a fully chaotic source, while $\lambda=0$
corresponds to a completely coherent source without any BEC.
The function $f_{\pi}(Q)$ is the probability that a selected track pair is
really a pair of pions, as a function of $Q$.
The additional empirical term
\mbox{$(1 + \delta\, Q
+ \epsilon \, Q^2 )$}
takes into account the behaviour of the correlation function
at high $Q$ values due to long-range particle correlations
(e.g. charge and energy conservation, phase-space constraints), and
$N$ is a normalisation factor. \par
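For fitting purposes, Eq.~\ref{eq-usedfun} can be transcribed directly; the sketch below (hypothetical names, with the pair purity $f_{\pi}$ taken constant for simplicity) evaluates the parametrisation at a given $Q$:

```python
import math

def c_parametrisation(q, n, lam, r, delta, eps, f_pi=1.0):
    """Goldhaber parametrisation
    C(Q) = N (1 + f_pi * lambda * exp(-Q^2 R^2)) (1 + delta Q + eps Q^2);
    q in GeV/c^2, r in (GeV/c^2)^{-1} (1 GeV^{-1} is about 0.197 fm)."""
    bose = 1.0 + f_pi * lam * math.exp(-q * q * r * r)
    long_range = 1.0 + delta * q + eps * q * q
    return n * bose * long_range
```

With $\lambda = 1$ and no long-range term, the function gives $C(0) = 2N$ and tends to $N$ for $Q \gg 1/R$; in practice $N$, $\lambda$, $R$, $\delta$ and $\epsilon$ would be determined by a fit to the measured $C(Q)$ bins.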
The structure of the paper is as follows.
Section 2 contains a brief overview of the OPAL detector, the event and
track selections as well as Monte Carlo models.
In section 3 the analysis of the data is described. BEC\
are investigated for \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}{}, \ensuremath{\WW\rightarrow\qq\lnu}{} and \ensuremath{\WW\rightarrow\qq\qq}{} events. After
establishing BEC\ in hadronic W-events the chaoticity parameter
for BEC\ between the decay products from the same W, \ensuremath{{\lambda^{\mathrm{ same}}}}{}, and
from different W bosons, \ensuremath{{\lambda^{\mathrm {diff}}}}{}, are determined.
Finally, section 4 summarises the results obtained.
\section{Experimental Details}
\subsection{The OPAL detector}
\label{sec-det}
A detailed description of the OPAL detector has been presented
elsewhere~\cite{opaldet} and therefore only the features
relevant to this analysis are summarised here. Charged particle
trajectories are reconstructed using the cylindrical central tracking
detectors which consist of a silicon microvertex detector, a high-precision
gas vertex detector, a large-volume gas jet chamber and thin
$z$-chambers \footnote{The OPAL right-handed coordinate system is defined such
that the origin is at the geometric centre of the jet chamber, $z$ is
parallel to, and has positive sense along, the e$^-$ beam direction, $r$
is the coordinate normal to $z$, $\theta$ is the polar angle with respect
to +$z$ and $\phi$ is the azimuthal angle around $z$.}.
The entire central detector is contained within a solenoid
that provides an axial magnetic field of 0.435~T\@.
The silicon microvertex detector consists of two layers of
silicon strip detectors, providing at least one hit per charged track in the
angular region
$|\cos\theta|<0.93$. It is surrounded by the vertex drift
chamber, followed by the jet chamber, about 400~cm in
length and 185~cm in radius, that provides up to 159 space points per
track and also measures the ionisation energy loss of charged particles,
\ensuremath{\mathrm{d}E/\mathrm{d}x}. With at least 130 charge samples along a track, a resolution
of $3.8 \%$ is achieved for the \ensuremath{\mathrm{d}E/\mathrm{d}x}\ of minimum ionising pions
in jets~\cite{jetchamber, bib-dEdx}.
The $z$-chambers, which considerably improve the measurement of
charged tracks in $\theta$, follow the jet chamber at large radius.
The combination of these chambers leads to a momentum
resolution of $\sigma_{p}/p^{2}=1.25 \times 10^{-3}$ (GeV/$c$)$^{-1}$.
Track finding is nearly 100\%
efficient within the angular region $|\cos \theta |<0.92$.
The mass resolution for $\mathrm{K^{0}_{S}} \rightarrow \pi^{+} \pi^{-}$,
related to the resolution in the correlation variable $Q$,
is found to be $\sigma = 7.0\pm 0.1$ MeV/$c^{2}$~\cite{opalk0k0}.
\subsection{Data selection}
\label{sec-selection}
This study is carried out using data at \mbox{$\mathrm{e}^+\mathrm{e}^-$}\ center-of-mass
energies of 172 GeV and 183 GeV with integrated luminosities of approximately
10~pb$^{-1}$ and 57~pb$^{-1}$, respectively.
Three mutually exclusive event samples are selected: a)
{\em the fully hadronic event sample}, \ensuremath{\WW\rightarrow\qq\qq}, where both W bosons decay
hadronically;
b) {\em the semileptonic event sample}, \ensuremath{\WW\rightarrow\qq\lnu},
where one W decays hadronically and the other decays leptonically;
and c)
hadronic non-W events, \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}, referred to here as
{\em the non-radiative \ensuremath{(\Zz/\gamma)^{*}\,\,} event sample}.
Throughout this paper, a reference to W$^{+}$ or its decay products
implicitly includes the charge conjugate states.
\subsubsection{{\bf Selection of the fully hadronic event sample \boldmath{\ensuremath{\WW\rightarrow\qq\qq}}}}
The selection of fully hadronic \ensuremath{\WW\rightarrow\qq\qq}\ events is performed in two
stages using a preselection based on cuts followed by a likelihood--based
selection procedure. Fully hadronic decays, \ensuremath{\WW\rightarrow\qq\qq}\ are
characterised by four or more energetic hadronic jets and little missing energy.
A preselection using kinematic variables
removes background predominantly from radiative \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events.
Events satisfying the preselection criteria are subjected to a likelihood
selection, which discriminates between signal and the remaining
four-jet-like QCD background. \par
At 172 GeV, several variables based on the characteristic four-jet-like
nature, momentum balance and jet angular structure, are used to distinguish \ensuremath{\WW\rightarrow\qq\qq}\
events from the remaining background and to construct the
likelihood.
The details of the selection at 172 GeV are described in appendix B of~\cite{wmass172}.
The signal and background situation at 183 GeV is similar to the one
at 172 GeV. For this reason, no new selection strategy was developed and the event
selection at 183 GeV is just a reoptimised version of the selection at 172 GeV.
The details of the selection at 183 GeV are described in~\cite{ww183}. At
183 GeV, no cut was applied against \ensuremath{{\mathrm{Z}^0}} \ensuremath{{\mathrm{Z}^0}} events. \par
Overall, there is a background of $11.6\%$ from \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events
and a contribution of $2.1\%$ from
$\ee \rightarrow \ensuremath{\Zz \Zz \rightarrow\qq\qq}$ events.
No selection for \ensuremath{\WW\rightarrow\qq\qq}\ events is applied to events
selected as \ensuremath{\WW\rightarrow\qq\lnu}\ events. \par
\subsubsection{{\bf Selection of the semileptonic event sample \boldmath{\ensuremath{\WW\rightarrow\qq\lnu}}}}
\ensuremath{\WW\rightarrow\qq\enu}\ and \ensuremath{\WW\rightarrow\qq\mnu}\ events are characterised by two
well-separated hadronic jets, a high-momentum lepton and
missing momentum due to the unobserved neutrino.
In \ensuremath{\WW\rightarrow\qq\tnu}\ the $\tau$ lepton gives rise to a low-multiplicity jet
consisting of one or three tracks.
The tracks from the $\tau$ decay are not used in the BEC\ studies.
Cuts are applied to reduce the background from radiative \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events.
A likelihood is formed using kinematic variables and
characteristics of the lepton candidate to further suppress the background from \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events.
The details of the selection at 172 GeV are given in appendix A of~\cite{wmass172}. \par
The \ensuremath{\WW\rightarrow\qq\lnu}\ event selection for the 183~GeV data is a modified
version of the 172~GeV selection.
At 183~GeV, a looser set of preselection cuts is used since the
lepton energy spectrum is broader due to the increased boost
and the set of variables used in the likelihood selections is modified.
In the \ensuremath{\WW\rightarrow\qq\tnu}\ sample there is a significant background from
hadronic decays of single W events (\ensuremath{\epem \rightarrow \mathrm{W}\enu}) and
an additional likelihood selection is used to reduce this background.
This is only applied to
\ensuremath{\WW\rightarrow\qq\tnu}\ events where the tau is identified as decaying in the single prong hadronic
channel.
Finally, in order to reduce the \ensuremath{{\mathrm{Z}^0}} \ensuremath{{\mathrm{Z}^0}}\ contribution, events passing the
\ensuremath{\WW\rightarrow\qq\enu}\ likelihood selection are rejected if there is evidence of
a second energetic electron. A similar procedure is applied to the
\ensuremath{\WW\rightarrow\qq\mnu}\ selection.
The details of the selection at 183 GeV are given in~\cite{ww183}.
There is a background of $3.5\%$ from \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events,
$1.0\%$ from \ensuremath{\WW\rightarrow\qq\qq}\ events, $1.3\%$ from single W events
and $0.8\%$ from \ensuremath{\Zz \Zz \rightarrow\qq\ell \overline {\ell}}\ events. \par
\subsubsection{{\bf Selection of the non-radiative event sample \boldmath{\ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq} }}}
Here, an extension of the selection criteria defined
in~\cite{OPALPR197} is used, which starts by selecting hadronic events defined
as in~\cite{OPALPR035}.
To reject background from $\ensuremath{\mathrm{e}^+\mathrm{e}^-}\rightarrow\tautau$ and
$\gamma\gamma\rightarrow\ensuremath{\mathrm{q\overline{q}}}$ and to ensure that the events are
well contained in the OPAL detector one requires that
the event has at least seven charged tracks
with transverse momentum $p_t > 150$~MeV/$c$ and that
the polar angle of the thrust axis lies within the range $|\cos \theta_{T}|<0.9$.
To reject events with large initial-state radiation, one requires
$\sqrt{s}-\ensuremath{\sqrt{s^\prime}}<10$~GeV, where \ensuremath{\sqrt{s^\prime}}\ is the effective invariant mass of the hadronic system~\cite{PR183}.
For the suppression of the \ensuremath{\mathrm{W}^+\mathrm{W}^-}\ background one
requires that the events are
selected neither for the semileptonic nor for the fully hadronic \ensuremath{\mathrm{W}^+\mathrm{W}^-}\ samples described above.
The cut in the relative likelihood for vetoing \ensuremath{\WW\rightarrow\qq\qq}\ events is looser
than in the \ensuremath{\WW\rightarrow\qq\qq}\ event selection.
After selection, there is a residual background of $3.8\%$ from W-pair events
and a contribution of $0.3\%$ from \ensuremath{{\mathrm{Z}^0}} \ensuremath{{\mathrm{Z}^0}}\ events. \par
\subsubsection{Pion selection and event samples}
\label{pion-sel}
Note that the three event selections result
in completely independent event samples without any overlap.
After the event selection the following cuts are applied
to all tracks, for all three event samples.
A track is required to have a transverse momentum $p_{t} > 0.15$~GeV/$c$,
momentum $p < 10$~GeV/$c$
and a corresponding error of $\sigma_{p} < 0.1$~GeV/$c$.
Only tracks with polar angles $\theta$ satisfying $|\cos\theta | <
0.94$ are considered.
The probability for a track to be a pion is enhanced by requiring
that the pion probability $P_{\pi}$ from the \ensuremath{\mathrm{d}E/\mathrm{d}x}\ measurement satisfies
$P_{\pi} > 0.02$.
Pion-pairs from a $\mathrm{K^{0}_{S}}$ decay are rejected using the $\mathrm{K^{0}_{S}}$ finder
described in \cite{opalk0k02}. This algorithm rejects 31{\%}
of the unlike-charge pion pairs coming from a $\mathrm{K^{0}_{S}}$ decay.
Since less than $11\%$ of the rejected pairs do not originate from
a $\mathrm{K^{0}_{S}}$, this cut does not introduce a significant bias
in the $Q$-distribution.
Finally, events with fewer than five charged selected tracks are rejected.
The number of events retained, as well as the number of background events
evaluated from Monte Carlo simulation is given in
table~\ref{tab-events}, for all three event samples. \par
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
event sample & \multicolumn{2}{c|}{number of selected events} & \multicolumn{2}{c|}{expected background events}\\
\cline{2-5}
& 172 GeV & 183 GeV & 172 GeV & 183 GeV \\ \hline
\ensuremath{\WW\rightarrow\qq\qq} & 55 & 327 & $9.5\pm0.5$ & $43.6\pm2.4$ \\
\ensuremath{\WW\rightarrow\qq\lnu} & 45 & 326 & $2.1\pm0.5$ & $23.1\pm2.4$ \\
\ensuremath{(\Zz/\gamma)^{*}\,\,} & 214 & 1009 & $8.1\pm1.7$ & $43.2\pm4.9$ \\
\hline
\end{tabular}
\end{center}
\caption{Number of retained events and number of background events predicted for the
three event samples, separately for 172 GeV and 183 GeV.}
\label{tab-events}
\end{table}
\subsection{Monte Carlo models}
\label{sec-montecarlo}
A number of Monte Carlo models are used to model \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}, \ensuremath{\WW\rightarrow\qq\lnu}\ or \ensuremath{\WW\rightarrow\qq\qq}\ events.
For the \ensuremath{\WW\rightarrow\qq\qq}\ event sample
the simulated events are also used to determine the fraction of
track pairs coming from the same or different W bosons.
The Monte Carlo samples are generated at \mbox{$\mathrm{e}^+\mathrm{e}^-$}\ center-of-mass energies of 172 and 183 GeV in
proportion to the corresponding integrated luminosities.
The production of W-pairs is simulated using \mbox{K{\sc oralw}}~\cite{bib-koralw}.
Non-radiative decays \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ as well as the \ensuremath{{\mathrm{Z}^0}} \ensuremath{{\mathrm{Z}^0}}\ and \ensuremath{\mathrm{W}\enu}\ events are
simulated with \mbox{P{\sc ythia}} ~\cite{bib-pythia}. \mbox{K{\sc oralw}}\ uses the same string model as \mbox{P{\sc ythia}}\ for hadronisation.
For systematic error studies the event generator \mbox{H{\sc erwig}} ~\cite{bib-herwig},
which employs a cluster hadronisation model, is also used.
All the Monte Carlo samples discussed above are generated without BEC.
In addition, W-pair events are also simulated with BEC\ included~\cite{bib-lonnblad},
using \mbox{P{\sc ythia}}\ \footnote{The model parameters
controlling BEC\, in \mbox{P{\sc ythia}}\ are taken to be
MSTJ(51)=2,
MSTJ(54)=--1,
MSTJ(57)=1,
PARJ(92)=1.0,
PARJ(93)=0.4,
MSTJ(52)=9,
PARJ(94)=0.275, and
PARJ(95)=0.0, as suggested by the authors of \cite{bib-lonnblad}.}.
The algorithm introduces BEC\ via a shift of final-state momenta among
identical bosons.
For these events two samples are generated: in the first, BEC\, are simulated for all pions
in the event, both from the same and from different W bosons; in the second, BEC\, are
simulated only for pions originating from the same W boson.
\section{Analysis}
\label{sec-analysis}
Using the tracks that pass the selection of section~\ref{pion-sel},
the $Q$-distributions are determined for like-charge pairs as well as
for unlike-charge pairs.
The correlation function $C(Q)$ is then obtained as the ratio of these $Q$-distributions.
Coulomb interactions between charged particles affect like- and unlike-charge
pairs in opposite ways and modify the correlation function. We therefore
apply the following correction to the correlation function,
\begin{equation}
C_{{\rm corr}}(Q) =\chi (Q)\, C_{{\rm uncorr}}(Q),
\label{eq-coulcorr}
\end{equation}
where
\begin{equation}
\chi (Q)= \frac{\mathrm{e}^{2\pi \eta}\,-\,1}{1\,-\,\mathrm{e}^{-2\pi\eta}},
\end{equation}
and where $\eta = \alpha \, m_{\pi}/Q$ with $\alpha$ the fine-structure constant
and $m_\pi$ the mass of the charged pion~\cite{bib-coulomb}.
The Coulomb correction factor $\chi(Q)$ is about 17{\%} in the first $Q$ bin,
5{\%} in the second bin and 1{\%} in the tenth bin,
with a bin size of $0.08$ GeV/$c^{2}$ (see Fig.~\ref{res-cor} for the definition of the bins).
The Monte Carlo simulations do not contain Coulomb effects, so the Monte Carlo
distributions are not corrected by Eq.~\ref{eq-coulcorr}. \par
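The correction factor of Eq.~\ref{eq-coulcorr} is simple to evaluate numerically; the following sketch (hypothetical names) reproduces the magnitudes quoted above when evaluated at the bin centres:

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant
M_PI = 0.13957            # charged-pion mass in GeV/c^2

def coulomb_factor(q):
    """Gamow-type factor chi(Q) = (exp(2 pi eta) - 1) / (1 - exp(-2 pi eta))
    with eta = alpha * m_pi / Q, multiplying the uncorrected C(Q); q in GeV/c^2."""
    x = 2.0 * math.pi * ALPHA * M_PI / q
    return (math.exp(x) - 1.0) / (1.0 - math.exp(-x))
```

At the centre of the first $0.08$ GeV/$c^{2}$ bin ($Q = 0.04$ GeV/$c^{2}$) this gives about 1.17, in the second bin about 1.05, and below 1.01 for $Q$ above 1 GeV/$c^{2}$.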
Structure in the unlike-charge samples due to resonance production
is corrected using Monte Carlo.
For this, the $Q$-distribution is obtained for unlike-charge pair
combinations taken exclusively from the decay products of $\mathrm{K^0_S}$ mesons and
the resonances $\omega$, $\mathrm{\eta}$,
$\mathrm{\eta^{\prime}}$, $\mathrm{\rho^0}$, $\mathrm{f_{0}}$ and $\mathrm{f_{2}}$
as produced in the Monte Carlo.
The production of resonances has only been measured at \mbox{$\mathrm{e}^+\mathrm{e}^-$}\ center-of-mass energies around the \ensuremath{{\mathrm{Z}^0}}\ peak
and not at energies above the \ensuremath{{\mathrm{Z}^0}}\ peak.
\mbox{J{\sc etset}}\ \cite{bib-Jetset} describes the production of resonances around the \ensuremath{{\mathrm{Z}^0}}\ peak
quite well, although not perfectly in all cases~\cite{had-lafferty}.
To estimate the contribution of each resonance to the $Q$-distribution
at LEP 2 energies, the $Q$-distribution for each resonance is multiplied
by the ratio of the production rate measured at LEP~\cite{had-lafferty}
to the corresponding rate in \mbox{J{\sc etset}}.
The main contributions come from $\mathrm{K^0_S}$, $\omega$,
$\mathrm{\rho^0}$ and $\mathrm{\eta}$ mesons.
The $Q$-distribution for the
resonances, thus obtained, is then scaled to the number of selected events and
subtracted from the experimental unlike-charge reference $Q$-distribution.
These corrections are made for each event selection
separately.
They are typically $5-10\%$ for small $Q$, falling
rapidly for $Q> 0.8$ GeV/$c^{2}$.
The three unlike-charge distributions,
before the correction, and the expected
signal from resonance decays are shown in Fig.~\ref{res-cor}.
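Schematically, the resonance correction described above amounts to a bin-by-bin subtraction; a sketch with hypothetical histogram inputs:

```python
def resonance_subtracted_reference(unlike_data, mc_resonances, rate_ratios, scale):
    """Subtract the rescaled Monte Carlo resonance contributions, bin by bin,
    from the unlike-charge reference Q-distribution.

    unlike_data: bin contents of the measured unlike-charge Q-distribution;
    mc_resonances: {name: Q-histogram of pairs from that resonance in MC};
    rate_ratios: {name: measured production rate / JETSET rate};
    scale: normalisation of the MC histograms to the selected data events.
    """
    corrected = list(unlike_data)
    for name, hist in mc_resonances.items():
        weight = scale * rate_ratios.get(name, 1.0)
        for i, content in enumerate(hist):
            corrected[i] -= weight * content
    return corrected
```

The same helper would be applied separately for each of the three event selections.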
\begin{figure}
\begin{center}\mbox{\epsfxsize=16cm
\epsffile{pr_262_1.eps}}\end{center}
\caption{The points show the unlike-charge pion pair
distribution, before the correction for resonance production,
for the three different event selections;
a) for the fully hadronic, b) for the semileptonic, and c)
for the non-radiative event selection.
The filled histogram is the expected contribution from resonances,
where both tracks of a pion pair come from the same resonance.}
\label{res-cor}
\end{figure}
The resulting experimental correlations $C(Q)$ are shown, for the
three event samples separately, in Fig.~\ref{fig-data}.
The data in all three distributions exhibit a clear enhancement at low
$Q$, consistent with the presence of BEC .
\begin{figure}
\begin{center}\mbox{\epsfxsize=16cm
\epsffile{pr_262_2.eps}}\end{center}
\caption{
The correlation function for like-charge pairs relative to
unlike-charge pairs for three event selections;
a) \ensuremath{{C^{\mathrm {had}}(Q)}}{} for the fully hadronic, b) \ensuremath{{C^{\mathrm{semi}}(Q)}}{} for the semileptonic, and c)
\ensuremath{{C^{\mathrm {non-rad.}}(Q)}}{} for the non-radiative event selection.
The Coulomb-corrected data are shown as solid points together with
statistical errors.
The curves are the result of the simultaneous fit discussed in
Sect.~\ref{sim-fit}.}
\label{fig-data}
\end{figure}
\subsection{\bf Fit to establish BEC\ in W-pair events}
\label{sim-fit-nwa}
The measured distributions cannot be directly
compared with the parametrisation of Eq.~\ref{eq-usedfun}, since,
in general, each distribution has contributions from several
physical processes that may have different BEC.
To illustrate the situation, consider the hadronic W-pair events.
They have as their main contribution (see Table~\ref{tab-events})
the correlations from pions coming from hadronic W decays.
They contain, however, also contributions from background events, i.e. \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events.
Thus
\begin{equation}
\ensuremath{{C^{\mathrm {had}}(Q)}}{} = \frac{N^{\mathrm{WW}}_{\pm\pm} + N^{\mathrm Z^{*}}_{\pm\pm}}
{N^{\mathrm{WW}}_{+-} + N^{\mathrm Z^{*}}_{+-}},\label{eq-had-nwa}
\end{equation}
where $N^{\mathrm{WW}}_{\pm\pm}$ and $N^{\mathrm Z^{*}}_{\pm\pm}$
are the numbers of like-charge track pairs for the class of pions from
\ensuremath{\WW\rightarrow\qq\qq}\ events and for the class of pions from the background sample of \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events. The variables
$N^{\mathrm{WW}}_{+-}$ and $N^{\mathrm Z^{*}}_{+-}$ are defined analogously for unlike-charge pairs.
Eq.~\ref{eq-had-nwa} can be rewritten as
\begin{equation}
\ensuremath{{C^{\mathrm {had}}(Q)}}{} = \ensuremath{{P^{\mathrm {WW}}_{\mathrm{had}}(Q)}} \, \ensuremath{{C^{{\mathrm {\qq \qq}}}(Q)}} +
(1- \ensuremath{{P^{\mathrm {WW}}_{\mathrm{had}}(Q)}}{}) \, \ensuremath{{C^{{\mathrm {Z}}^{*}}_{\mathrm{had}}(Q)}}{} , \label{eq-4q-nwa}
\end{equation}
where \ensuremath{{C^{{\mathrm {\qq \qq}}}(Q)}}{} and \ensuremath{{C^{{\mathrm {Z}}^{*}}_{\mathrm{had}}(Q)}}{} are the BEC\, for the class of pions from
\ensuremath{\WW\rightarrow\qq\qq}\ events and for the class of pions from the sample of \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events,
the main background in the hadronic selection.
$\ensuremath{{P^{\mathrm {WW}}_{\mathrm{had}}(Q)}}{}= N^{\mathrm{WW}}_{+-} / (N^{\mathrm{WW}}_{+-} + N^{\mathrm Z^{*}}_{+-})$
is the fraction of unlike-charge pion pairs at a given $Q$ which originate from
a W-pair event in the hadronic event sample.
Here and in the following, the small number of \ensuremath{{\mathrm{Z}^0}} \ensuremath{{\mathrm{Z}^0}}
events are not counted as background but as signal,
since their properties with regard to BEC\ should be quite similar. \par
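The decomposition of Eq.~\ref{eq-4q-nwa}, and its algebraic inversion for the signal correlation, can be sketched per $Q$ bin as follows (hypothetical names; in the analysis the component correlations are instead determined in a simultaneous fit to all three samples):

```python
def mixed_correlation(purity, c_signal, c_background):
    """Observed correlation in one Q bin as a purity-weighted sum:
    C_obs = P * C_sig + (1 - P) * C_bkg."""
    return purity * c_signal + (1.0 - purity) * c_background

def unfolded_signal(purity, c_observed, c_background):
    """Invert the mixing relation for the signal correlation in one Q bin."""
    return (c_observed - (1.0 - purity) * c_background) / purity
```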
The experimentally determined correlations for the other two
event samples can be written as:
\begin{equation}
\ensuremath{{C^{\mathrm{semi}}(Q)}}{} = \ensuremath{{P^{\mathrm {W}}_{\mathrm{semi}}(Q)}} \, \ensuremath{{C^{{\mathrm {\qq}}}(Q)}} +
(1 - \ensuremath{{P^{\mathrm {W}}_{\mathrm{semi}}(Q)}}) \, \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}} , \label{eq-semi-nwa}
\end{equation}
for the \ensuremath{\WW\rightarrow\qq\lnu}\ event sample and
\begin{equation}
\ensuremath{{C^{\mathrm {non-rad.}}(Q)}}{} = \ensuremath{{P^{\mathrm {Z^{*}}}_{\mathrm {non-rad}}(Q)}} \, \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}} +
(1 - \ensuremath{{P^{\mathrm {Z^{*}}}_{\mathrm {non-rad}}(Q)}} ) \, \ensuremath{{C^{{\mathrm {\qq \qq}}}(Q)}} \label{eq-qcd-nwa}
\end{equation}
for the non-radiative \ensuremath{(\Zz/\gamma)^{*}\,\,} event sample.
The notation in these equations is analogous to that of
Eq.~\ref{eq-had-nwa}. \ensuremath{{C^{\mathrm{semi}}(Q)}}{} and \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}}{} are the BEC\, for the two
pion classes from \ensuremath{\WW\rightarrow\qq\lnu}\ and non-radiative \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events, respectively.
The definition of the relative fractions \ensuremath{{P^{\mathrm {WW}}_{\mathrm{had}}(Q)}}{}, \ensuremath{{P^{\mathrm {W}}_{\mathrm{semi}}(Q)}}{} and \ensuremath{{P^{\mathrm {Z^{*}}}_{\mathrm {non-rad}}(Q)}}{}
is given in Table \ref{tab-defs-nwa}.
They are taken from a Monte Carlo simulation which does not contain BEC\,
as discussed in section \ref{sec-montecarlo}.
These probabilities are global properties of the events and depend
little on whether BEC\, are assumed or not.
The small number of single-W events in the semileptonic event sample are treated as signal events. \par
The hadronic W-pair sample contains a sizeable number of \ensuremath{(\Zz/\gamma)^{*}\,\,}
background events. Due to the selection cuts suppressing
\ensuremath{(\Zz/\gamma)^{*}\,\,} events in the hadronic W-pair sample, the remaining \ensuremath{(\Zz/\gamma)^{*}\,\,} events have
different event shapes and multiplicities from those
in the main non-radiative \ensuremath{(\Zz/\gamma)^{*}\,\,} event sample. Since BEC depend on event shape
and multiplicity~\cite{becmult}, the correlation function for \ensuremath{(\Zz/\gamma)^{*}\,\,} events selected
as hadronic W-pairs, \ensuremath{{C^{{\mathrm {Z}}^{*}}_{\mathrm{had}}(Q)}}{}, is expected to be different from
that for the main non-radiative selection, \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}}{}. To take these
differences into account, the parameters $\lambda$ and $R$ in the
correlation functions \ensuremath{{C^{{\mathrm {Z}}^{*}}_{\mathrm{had}}(Q)}}{} and \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}}{} are not taken to
be equal but those in \ensuremath{{C^{{\mathrm {Z}}^{*}}_{\mathrm{had}}(Q)}}{} are adjusted according to the
different event topology.
In order to estimate this correction, the \ensuremath{\WW\rightarrow\qq\qq}\ selection described in section \ref{sec-selection},
which contains no direct center-of-mass energy dependent variables,
is applied to data taken at LEP. A simultaneous BEC\
fit is applied to both events selected as \ensuremath{\WW\rightarrow\qq\qq}\ events and events
which are not selected as \ensuremath{\WW\rightarrow\qq\qq}\ events. The differences obtained
in $\lambda$ and $R$ are used here~\footnote{ For the function \ensuremath{{C^{{\mathrm {Z}}^{*}}_{\mathrm{had}}(Q)}}{}
the absolute $\lambda$ value is reduced by 0.094 and the
absolute $R$ value is increased by 0.097 fm relative to the corresponding parameters of \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}}{}, with
\ensuremath{{\lambda^{\mathrm Z^{*}}}}{} kept as a free parameter in the main BEC\ fit.} to take
differences in the correlation function \ensuremath{{C^{{\mathrm {Z}}^{*}}_{\mathrm{had}}(Q)}}{} and \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}}{}
into account.
Due to the high purity of the semileptonic and non-radiative \ensuremath{(\Zz/\gamma)^{*}\,\,}\ selections,
no adjustment is applied to the correlation functions of events selected as background
in \ensuremath{{C^{\mathrm{semi}}(Q)}}{} and \ensuremath{{C^{\mathrm {non-rad.}}(Q)}}{}.
For \ensuremath{\WW\rightarrow\qq\qq}\ events selected as fully hadronic events and \ensuremath{\WW\rightarrow\qq\qq}\ events selected
as non-radiative \ensuremath{(\Zz/\gamma)^{*}\,\,} events the same BEC\ are assumed.
The effect of this assumption will be described with the systematic errors. \par
\begin{table}
\begin{center}
\begin{tabular}{|c|c|}
\hline
&\\
Probability definition & Probability that a $+-$ track pair \\
&\\
\hline
\hline
& \\
$\ensuremath{{P^{\mathrm {WW}}_{\mathrm{had}}(Q)}}{} = \frac{N^{\mathrm{WW}}_{+-}(Q)} { N^{\mathrm{WW}}_{+-}(Q)
+ N^{\mathrm Z^{*}}_{+-}(Q)}$ & originates from \ensuremath{\WW\rightarrow\qq\qq}\ process,\\
& in the hadronic event selection. \\
& \\
\hline
& \\
$\pzstarhad{} = 1 - \ensuremath{{P^{\mathrm {WW}}_{\mathrm{had}}(Q)}}{}$ & originates from \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ process, \\
& in the hadronic event selection. \\
& \\
\hline
& \\
$\psamesemi{} = \frac {N^{\mathrm{W}}_{+-}(Q)}{ N^{\mathrm{W}}_{+-}(Q)+
N^{\mathrm Z^{*}}_{+-}(Q)}$ &originates from \ensuremath{\WW\rightarrow\qq\lnu}\ process, \\
& in the semileptonic event selection. \\
& \\
\hline
& \\
$\ensuremath{{P^{\mathrm {Z^{*}}}_{\mathrm {non-rad}}(Q)}}{} = \frac{N^{\mathrm Z^{*}}_{+-}(Q)}{N^{\mathrm{WW}}_{+-}(Q)
+ N^{\mathrm Z^{*}}_{+-}(Q)}$ &originates from \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ process, \\
&in the non-radiative event selection. \\
& \\
\hline
& \\
$\ensuremath{{P^{\mathrm {WW}}_{\mathrm{non-rad}}(Q)}}{} = 1 - \ensuremath{{P^{\mathrm {Z^{*}}}_{\mathrm {non-rad}}(Q)}}{}$ & originates from \ensuremath{\WW\rightarrow\qq\qq}\ process, \\
&in the non-radiative event selection. \\
& \\
\hline
& \\
$\psamehad{} = \frac{N^{\mathrm{ same\,W}}_{+-}(Q)}{ N^{\mathrm{ same\,W}}_{+-}(Q) +N^{\mathrm{ diff\,W}}_{+-}(Q)
+ N^{\mathrm Z^{*}}_{+-}(Q)}$ &originates from the same W, \\
&in the hadronic event selection. \\
& \\
\hline
& \\
$\ensuremath{{P^{\mathrm {same}}_{\mathrm{non-rad}}(Q)}}{} = \frac{N^{\mathrm{ same\,W}}_{+-}(Q)}{N^{\mathrm{ same\,W}}_{+-}(Q) +N^{\mathrm{ diff\,W}}_{+-}(Q)
+ N^{\mathrm Z^{*}}_{+-}(Q)}$ & originates from the same W, \\
&in the non-radiative event selection. \\
& \\
\hline
\end{tabular}
\end{center}
\caption{Definition and meaning of the various probabilities
concerning unlike-charge track pairs, used in
Eqs. \ref{eq-4q-nwa} - \ref{eq-qcd-nwa} and
\ref{eq-4q} - \ref{eq-qcd} and illustrated in Figs.~\ref{pur1} and \ref{fig-psame}.}
\label{tab-defs-nwa}
\end{table}
\begin{figure}[ht]
\begin{center}\mbox{\epsfxsize=16cm
\epsffile{pr_262_3.eps}}\end{center}
\caption{The purities a) \ensuremath{{P^{\mathrm {WW}}_{\mathrm{had}}(Q)}}{}, b) \psamesemi{} and c) \ensuremath{{P^{\mathrm {WW}}_{\mathrm{non-rad}}(Q)}}{} as obtained
from Monte Carlo simulations.}
\label{pur1}
\end{figure}
The unknown correlation functions \ensuremath{{C^{{\mathrm {\qq \qq}}}(Q)}}{}, \ensuremath{{C^{{\mathrm {\qq}}}(Q)}}{} and \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}}{} are parametrised
using Eq.~\ref{eq-usedfun}.
The parameters are determined in a simultaneous fit to the
three experimental distributions shown in Fig.~\ref{fig-data}.
A common source radius $R$ is used for all event classes, while
the parameter $\lambda$ is allowed to be different.
The use of a common radius is justified because the typical separation between the
W$^{+}$ and W$^{-}$ decay vertices at LEP 2 energies is smaller than
0.1 fm, much smaller than the typical hadronic
source radius of $R \approx 1$ fm~\cite{lund}; equal source radii are therefore assumed
for the \ensuremath{\WW\rightarrow\qq\qq}\ and the \ensuremath{\WW\rightarrow\qq\lnu}\ event classes.
The source radius for \mbox{$\mathrm{e}^+\mathrm{e}^-$}~annihilations into hadrons has been measured up to 90 GeV
and no evidence has been found for an energy dependence~\cite{marcellini}.
For this reason $R$ is assumed to be the same at higher energies and the same
source radius is also used for the \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ event class.
Separate fits to the distributions show also consistent radii
for the different event selections.
The pion probability \ensuremath{f_{\pi}(Q)}\ is taken from Monte Carlo.
At small values of $Q$ it is approximately constant at $\sim 0.84$ and varies only weakly with $Q$ for all channels.
The long-range parameters are expected to be different for
the \ensuremath{\WW\rightarrow\qq\qq}, \ensuremath{\WW\rightarrow\qq\lnu}\ and \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ event class, due
to kinematic and topological differences.
The results for the thirteen free parameters in the fit are given in Table~\ref{tab-results-nwa}.
The fit is made in the full range of $0.0 < Q < 2.0$ GeV/$c^{2}$.
In the distributions of Fig.~\ref{fig-data} the same particles contribute many
times, in different bins of $Q$, which introduces bin-to-bin
correlations. These are taken into account in the fit.
All three experimental distributions are well described by the fit;
the $\chi^{2}$/d.o.f. is 76.1/62.
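As a numerical illustration, the fitted correlation function can be sketched as below. Eq.~\ref{eq-usedfun} is assumed here to take the Goldhaber form with the long-range factor $(1 + \delta Q + \epsilon Q^{2})$ quoted later in the paper; the function name, the unit conversion and the use of a constant $f_{\pi} \approx 0.84$ are illustrative assumptions, not the paper's exact implementation.

```python
import math

HBARC = 0.1973  # GeV*fm, converts a radius in fm to GeV^-1

def bec_correlation(Q, N, lam, R_fm, delta, eps, f_pi=0.84):
    """Assumed Goldhaber-type parametrisation:
    C(Q) = N * (1 + f_pi * lam * exp(-Q^2 R^2)) * (1 + delta*Q + eps*Q^2),
    with Q in GeV/c^2 and the source radius R in fm."""
    R = R_fm / HBARC  # source radius in GeV^-1
    return (N * (1.0 + f_pi * lam * math.exp(-(Q * R) ** 2))
              * (1.0 + delta * Q + eps * Q * Q))

# Evaluated at Q = 0 with the qqln-class values of Table (tab-results-nwa)
c0 = bec_correlation(0.0, N=0.79, lam=0.75, R_fm=0.91, delta=0.29, eps=-0.09)
```

At $Q=0$ the long-range factor is unity, so the intercept is simply $N(1+f_{\pi}\lambda)$.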
\begin{table}[ht]
\begin{center}
\begin{tabular}{||c||c c c||} \hline
Parameter & \ensuremath{\WW\rightarrow\qq\qq} & \ensuremath{\WW\rightarrow\qq\lnu} & $(\mathrm{Z^{0}}/\gamma)^{*}$ \\ \hline
R (fm) & & $0.91\pm0.11\pm0.10$ & \\
$\lambda$ & $0.43\pm0.15\pm0.09$ & $0.75\pm0.26\pm0.18$ & $0.49\pm0.11\pm0.08$\\
$N$ & $0.86\pm0.04\pm0.04$ & $0.79\pm0.08\pm0.08$ & $0.86\pm0.05\pm0.04$ \\
$\delta$ & $0.12\pm0.10\pm0.10$ & $0.29\pm0.23\pm0.24$ & $0.13\pm0.11\pm0.08$ \\
$\epsilon$ & $-0.04\pm0.05\pm0.06$ & $-0.09\pm0.10\pm0.11$ & $-0.02\pm0.05\pm0.04$ \\
\hline
\end{tabular}
\end{center}
\caption{Result of the simultaneous fit.
The first error corresponds to the statistical uncertainty, the second to systematics.}
\label{tab-results-nwa}
\end{table}
\subsection{\bf Fit to establish BEC\ in same and different W bosons}
In this section BEC\ are investigated separately for
pions originating from the same W boson and for
pions from different W bosons.
The correlations for the fully hadronic event sample
(Eq. \ref{eq-had-nwa}) are written as
\begin{equation}
\ensuremath{{C^{\mathrm {had}}(Q)}}{} = \frac{N^{\mathrm{ same\,W}}_{\pm\pm} + N^{\mathrm{ diff\,W}}_{\pm\pm} +
N^{\mathrm Z^{*}}_{\pm\pm}}{N^{\mathrm{ same\,W}}_{+-} +N^{\mathrm{ diff\,W}}_{+-}
+ N^{\mathrm Z^{*}}_{+-}}, \label{eq-hadorig}
\end{equation}
where $N^{\mathrm{ same\,W}}_{\pm\pm}$, $N^{\mathrm{ diff\,W}}_{\pm\pm}$ and $N^{\mathrm Z^{*}}_{\pm\pm}$
are the numbers of like-charge track pairs for the class of pions from
the same W boson, different W bosons and from \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events. The variables
$N^{\mathrm{ same\,W}}_{+-}$, $N^{\mathrm{ diff\,W}}_{+-}$ and $N^{\mathrm Z^{*}}_{+-}$ are defined,
in a similar way, for unlike-charge pairs.
Eq.~\ref{eq-hadorig} can be rewritten as
\begin{eqnarray}
\ensuremath{{C^{\mathrm {had}}(Q)}}{} = \psamehad{} \, \ensuremath{{C^{\mathrm {same}}(Q)}}{} + \pzstarhad{} \, \ensuremath{{C^{{\mathrm {Z}}^{*}}_{\mathrm{had}}(Q)}}{} \nonumber \\
+ (1 - \psamehad{} - \pzstarhad{} ) \, \ensuremath{{C^{\mathrm {diff}}(Q)}}{} ,
\label{eq-4q}
\end{eqnarray}
where
\ensuremath{{C^{\mathrm {same}}(Q)}}{}, \ensuremath{{C^{\mathrm {diff}}(Q)}}{} and \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}}{} are the BEC\, for the class of pions from
the same W boson, different W bosons and from \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events.
The variables $\psamehad{}$ and $\pzstarhad{}$ are defined in Table \ref{tab-defs-nwa}.
Likewise, the experimentally determined correlations for the other two
event samples can be written as:
\begin{equation}
\ensuremath{{C^{\mathrm{semi}}(Q)}}{} = \psamesemi{} \, \ensuremath{{C^{\mathrm {same}}(Q)}}{} + (1 - \psamesemi{}) \, \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}}{}
\label{eq-semi}
\end{equation}
for the \ensuremath{\WW\rightarrow\qq\lnu}\ event sample
and
\begin{eqnarray}
\ensuremath{{C^{\mathrm {non-rad.}}(Q)}}{} = \ensuremath{{P^{\mathrm {same}}_{\mathrm{non-rad}}(Q)}}{} \, \ensuremath{{C^{\mathrm {same}}(Q)}}{} + \ensuremath{{P^{\mathrm{Z^{*}}}_{\mathrm{non-rad}}(Q)}}{} \, \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}}{} \nonumber \\
+ (1 - \ensuremath{{P^{\mathrm {same}}_{\mathrm{non-rad}}(Q)}}{} - \ensuremath{{P^{\mathrm{Z^{*}}}_{\mathrm{non-rad}}(Q)}}{} ) \, \ensuremath{{C^{\mathrm {diff}}(Q)}}{}
\label{eq-qcd}
\end{eqnarray}
for the non-radiative Z$^{*}$ event sample.
The definition of the variables
\psamehad{} , \pzstarhad{}, \psamesemi{}, \ensuremath{{P^{\mathrm {same}}_{\mathrm{non-rad}}(Q)}}{} ,
and \ensuremath{{P^{\mathrm{Z^{*}}}_{\mathrm{non-rad}}(Q)}}{} is also given in Table \ref{tab-defs-nwa}.
\label{sim-fit}
By simultaneously fitting Eqs.~\ref{eq-4q}, \ref{eq-semi} and \ref{eq-qcd}
to the experimental distributions in Fig.~\ref{fig-data},
the BEC\ for the three pion classes
\ensuremath{{C^{\mathrm {same}}(Q)}}{}, \ensuremath{{C^{\mathrm {diff}}(Q)}}{} and \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}}{} are determined.
Again, the probabilities \ensuremath{{P^{\mathrm {same}}_{\mathrm{non-rad}}(Q)}}{}, \ensuremath{{P^{\mathrm{Z^{*}}}_{\mathrm{non-rad}}(Q)}}{}, \psamehad{},
\pzstarhad{} and \psamesemi{} are taken
from Monte Carlo simulations not containing BEC, as discussed in section \ref{sec-montecarlo}.
The functions \ensuremath{{P^{\mathrm {same}}_{\mathrm{non-rad}}(Q)}}{} and \psamehad{} are shown in Fig.~\ref{fig-psame}.
They are constructed using only information from unlike-charge pion pairs
and are therefore independent of BEC.
The effect of possible variations of the function \psamehad{}, if BEC\, are assumed in the Monte Carlo,
is discussed in section \ref{sec-systematics}.
\begin{figure}
\begin{center}\mbox{\epsfxsize=16cm
\epsffile{pr_262_4.eps}}\end{center}
\caption{
The probability that both tracks of an unlike-charge track pair
originate from the same W boson, \ensuremath{{P^{\mathrm {same}}_{\mathrm{non-rad}}(Q)}}{} in the non-radiative
selection (upper plot) and \psamehad{} in the hadronic selection (lower plot),
as obtained from Monte Carlo simulations.
The histogram is the result for the case that no BEC\, are assumed.
The dashed and dotted histograms are the results for the case that
BEC\, are simulated for all pions or only for pions originating from
the same W boson, respectively.
}
\label{fig-psame}
\end{figure}
The unknown correlation functions \ensuremath{{C^{\mathrm {same}}(Q)}}{},
\ensuremath{{C^{\mathrm {diff}}(Q)}}{} and \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}}{} are parametrised using Eq.~\ref{eq-usedfun}
with different $\lambda$ for the three event classes.
As before a common source radius $R$ is used for all event classes.
For the correlation function \ensuremath{{C^{{\mathrm {Z}}^{*}}_{\mathrm{had}}(Q)}}{} the parameters $\lambda$ and $R$ are adjusted
as in section \ref{sim-fit-nwa}.
Based on Monte Carlo studies the long range parameters \ensuremath{{\delta^{\mathrm {diff}}}}{} and \ensuremath{{\epsilon^{\mathrm {diff}}}}{}
for the correlation function \ensuremath{{C^{\mathrm {diff}}(Q)}}{} are taken to be zero.
This is equivalent to the assumption that colour reconnection effects do not influence the $Q$ distributions.
The free fit parameters are then determined in a simultaneous fit to the
three experimental distributions shown in Fig.~\ref{fig-data}.
The results for the eleven free parameters in the fit are given in Table~\ref{tab-results}.
The fit is made in the full range of $0.0 < Q < 2.0$ GeV/$c^{2}$.
The fit result is given in Fig.~\ref{fig-data}.
All three experimental distributions are well described by the fit
($\chi^{2}$/d.o.f. is 76.4/64).
The correlation between the parameters \ensuremath{{\lambda^{\mathrm {diff}}}}{} and \ensuremath{{\lambda^{\mathrm{ same}}}}{},
with a coefficient of $-0.52$,
is shown in Fig.~\ref{contour}.
\begin{figure}
\begin{center}\mbox{\epsfxsize=16cm
\epsffile{pr_262_5.eps}}\end{center}
\caption{Correlation between \ensuremath{{\lambda^{\mathrm {diff}}}}{} and \ensuremath{{\lambda^{\mathrm{ same}}}}{}. The contour
shows the $67\%$ confidence level.
The best value obtained in the fit is given by the cross.
The lines for
\ensuremath{{\lambda^{\mathrm {diff}}}}{} = 0
and \ensuremath{{\lambda^{\mathrm {diff}}}}{} = \ensuremath{{\lambda^{\mathrm{ same}}}}{} are also indicated.}
\label{contour}
\end{figure}
\begin{table}[ht]
\begin{center}
\begin{tabular}{||c||c c c||} \hline
Parameter & same W & diff W & $(\ensuremath{{\mathrm{Z}^0}}/\gamma)^{*}$ \\ \hline
$R$ (fm) & &$0.92\pm0.09\pm0.09$ & \\
$\lambda$ & $0.63\pm0.19\pm0.14$ & $0.22\pm0.53\pm0.14$ & $0.47\pm0.11\pm0.08$ \\
$N$ & $0.83\pm0.05\pm0.07$ & $1.00\pm0.01\pm0.00$ & $0.87\pm0.04\pm0.04$ \\
$\delta$ & $0.21\pm0.15\pm0.19$ & zero assumed & $0.11\pm0.11\pm0.07$ \\
$\epsilon$ & $-0.07\pm0.07\pm0.08$ & zero assumed & $-0.01\pm0.05\pm0.02$ \\
\hline
\end{tabular}
\end{center}
\caption{Result of the simultaneous fit distinguishing pions from the same and from different W bosons.
The first error corresponds to the statistical uncertainty, the second one to systematics.}
\label{tab-results}
\end{table}
\subsection{Systematic Errors}
\label{sec-systematics}
The following variations in the analysis, which affect the fit results of both
fit methods, are considered to obtain the systematic error.
The systematic errors are listed
in Tables~\ref{tab-systematics-nwa} and~\ref{tab-systematics},
together with their quadratic sums to give the final systematic error.
\begin{enumerate}
\item {\em Variation of the resonance production.}
In the main analysis, the distortion of the unlike-charge pairs due to resonances was
taken into account by subtracting the resonance $Q$ distribution
from the unlike-charge pair $Q$ distribution.
This method is based exclusively on \ensuremath{{\mathrm{Z}^0}}\ data,
since no measurements of resonance production at LEP 2 are available.
Thus for systematics the correction factors
are varied within two standard deviations
of the experimental resonance production cross sections.
The maximum differences in the fit for each resonance were added in quadrature.
Several variations were made for this systematic check,
therefore no $\chi^{2}$ is given.
All fits are of good quality.
\item {\em Double-hit resolution.}
Unlike-charge pairs are bent in a magnetic field in opposite
directions, whereas like-charge pairs are bent in the same direction. Therefore
like-charge pairs at very low $Q$ are less well reconstructed.
Monte Carlo studies indicate the presence of such effects for pairs with a $Q$ less than 0.05 GeV/$c^{2}$.
For systematics, the fit is repeated in the range $0.05 < Q < 2.0$ GeV/$c^{2}$.
\item
{\em Use of the} \mbox{H{\sc erwig}}\ {\em Monte Carlo.}
To determine the purities and the correction for resonances in the unlike-charge
sample the \mbox{H{\sc erwig}}\ Monte Carlo was used.
\item
The {\em probability functions} are obtained from Monte Carlo simulations
where BEC\, are simulated~\cite{bib-lonnblad} for all pions, both from the same and from
different W bosons.
\item
The {\em probability functions} are obtained from Monte Carlo simulations
where BEC\, are simulated~\cite{bib-lonnblad} only for pions from the same W.
\item
{\em Long-range correlations.}
The fit is repeated with $\epsilon$ = 0.
\item
{\em Different topology of the \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ background in fully hadronic selected events.}
The difference of the \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events in the hadronic and non-radiative Z$^{*}$ samples is taken
into account in the main analysis (see section~\ref{sim-fit}).
The events selected at LEP energies as \ensuremath{\WW\rightarrow\qq\qq}\ events and as
non-radiative events, which are used for this correction, are statistically limited.
Therefore the parameters
governing the correction factor for the correlation function \ensuremath{{C^{{\mathrm {Z}}^{*}}_{\mathrm{had}}(Q)}}{}
are varied within their statistical error ($\lambda \pm 0.04$ and $R \pm 0.057$ fm)
and the largest deviation is taken as the systematic error.
\end{enumerate}
In addition, the effect of uncertainties arising from the knowledge of the cross-sections is examined. The cross-sections
for W-pair production processes as well as the cross-section for
non-radiative \ensuremath{(\Zz/\gamma)^{*}\,\,} processes are varied within their experimental uncertainties.
The impact on the final result is negligible.
Furthermore, differences between \ensuremath{\WW\rightarrow\qq\qq}\ events selected as hadronic events and selected as
non-radiative events are also considered. These variations introduce only small
changes in the results.
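As a numerical illustration of the quadratic combination of the individual variations, the $\delta R$ column of Table~\ref{tab-systematics-nwa} reproduces the quoted total; the entries quoted as $<0.01$ are treated as negligible in this check.

```python
import math

# delta-R entries (fm) for variations 1-7 of Table (tab-systematics-nwa);
# entries quoted as "<0.01" are set to zero for this illustrative check
delta_R = [0.07, 0.0, 0.0, 0.01, 0.0, 0.07, 0.0]
total_R = math.sqrt(sum(d * d for d in delta_R))  # quadratic sum
```

This gives `total_R` of about 0.0995 fm, consistent with the quoted total systematic error of 0.10 fm on $R$.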
\begin{table}[ht]
\begin{center}
\begin{tabular}{||c||c|c|c|c|c||} \hline
& $R$ (fm) & $\lambda^{{\rm \ensuremath{\WW\rightarrow\qq\qq}}}$ & $\lambda^{\rm \ensuremath{\WW\rightarrow\qq\lnu}}$ & $\lambda^{{\rm \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}}}$& $\chi^{2}/$d.o.f. \\ \hline
Reference & $0.91\pm0.11$ & $0.43\pm0.15$ & $0.75\pm0.26$ & $0.49\pm0.11$ & $76.1/62$ \\ \hline
Variation & $\delta R$ (fm) & $\delta \lambda^{\ensuremath{\WW\rightarrow\qq\qq}}$ & $\delta \lambda^{\ensuremath{\WW\rightarrow\qq\lnu}}$ & $\delta \lambda^{\ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq} }$ & \\
\hline
1 & $\pm0.07$ & $\pm0.07$ & $\pm0.10$ & $\pm0.07$ & \\
2 & $<0.01$ & $-0.02$ & $-0.05$ & $+0.03$ & $74.1/62$ \\
3 & $<0.01$ & $+0.03$ & $+0.05$ & $<0.01$ & $95.1/62$ \\
4 & $+0.01$ & $-0.02$ & $-0.02$ & $-0.01$ & $75.7/62$ \\
5 & $<0.01$ & $-0.02$ & $-0.03$ & $<0.01$ & $75.9/62$ \\
6 & $+0.07$ & $-0.03$ & $-0.13$ & $-0.02$ & $78.2/65$ \\
7 & $<0.01$ & $<0.01$ & $-0.01$ & $<0.01$ & $76.3/62$ \\ \hline
total & $0.10$ & $0.09$ & $0.18$ & $0.08$ & \\
\hline
\end{tabular}
\end{center}
\caption{The effect of the systematic variations studied (discussed in
Sect.~\ref{sec-systematics}) on the variables $R$, \ensuremath{{\lambda^{\mathrm \WWqqqq}}}{}, \ensuremath{{\lambda^{\mathrm \WWqqln}}}{}
and \ensuremath{{\lambda^{\mathrm Z^{*}}}}{}. The last column shows the quality of the corresponding
fit.
}
\label{tab-systematics-nwa}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{||c||c|c|c|c|c||} \hline
& $R$ (fm) & $\lambda^{{\rm same}}$ & $\lambda^{{\rm diff}}$ & $\lambda^{Z^{*}}$ & $\chi^{2}/$d.o.f. \\ \hline
Reference & $0.92\pm0.09$ & $0.63\pm0.19$ & $0.22\pm0.53$ & $0.47\pm0.11$ & $76.4/64$ \\ \hline
Variation & $\delta R$ (fm) & $\delta \lambda^{\mathrm{same}}$ & $\delta \lambda^{\mathrm{diff}}$ & $\delta \lambda^{\mathrm{Z^{*}}}$ & \\ \hline
1 & $\pm0.07$ & $\pm0.09$ & $\pm0.07$ & $\pm0.07$ & \\
2 & $<0.01$ & $-0.05$ & $<0.01$ & $+0.03$ & $74.4/64$ \\
3 & $+0.01$ & $+0.04$ & $+0.03$ & $<0.01$ & $94.5/64$ \\
4 & $+0.01$ & $-0.02$ & $-0.10$ & $<0.01$ & $76.0/64$ \\
5 & $<0.00$ & $-0.04$ & $-0.03$ & $<0.01$ & $76.2/64$ \\
6 & $+0.05$ & $-0.08$ & $<0.01$ & $-0.01$ & $77.6/66$ \\
7 & $<0.01$ & $<0.01$ & $-0.05$ & $<0.01$ & $76.5/64$ \\ \hline
Total & $0.09$ & $0.14$ & $0.14$ & $0.08$ & \\
\hline
\end{tabular}
\end{center}
\caption{The effect of the systematic variations studied (discussed in
Sect.~\ref{sec-systematics}) on the variables $R$, \ensuremath{{\lambda^{\mathrm{ same}}}}{}, \ensuremath{{\lambda^{\mathrm {diff}}}}{}
and \ensuremath{{\lambda^{\mathrm Z^{*}}}}{}. The last column shows the quality of the corresponding
fit.
}
\label{tab-systematics}
\end{table}
\subsection{Q-based separation of BEC contributions}
\label{sec-unfold}
\begin{figure}
\begin{center}\mbox{\epsfxsize=16cm
\epsffile{pr_262_6.eps}}\end{center}
\caption{Correlation functions for the unfolded classes. The data
points show the experimental distributions for a pure sample of a)
pions originating from different W bosons \ensuremath{{C^{\mathrm {diff}}(Q)}}{}, b) pions
originating from the same W boson \ensuremath{{C^{\mathrm {same}}(Q)}}{} and c) pions from \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}{}
events. The errors are the statistical uncertainties and are correlated
between the three classes. The open histogram in a) is the
result, for pions from different W bosons,
of a simulation including BEC\ between pions from
different W bosons, the cross-hatched histogram the corresponding result
for a simulation with BEC\ for pions from the same W boson only.
The open histogram in b) shows the result, for pions from the same W boson,
of a simulation including
BEC\, for pions from the same W boson and the hatched histogram
the corresponding result for no BEC\ at all.
The hatched histogram in c) corresponds to a simulation with no
BEC\ at all.}
\label{fig-unfold}
\end{figure}
The experimental BEC\, for pure classes of a) tracks from different
W bosons \ensuremath{{C^{\mathrm {diff}}(Q)}}{} , b) tracks from the same W boson \ensuremath{{C^{\mathrm {same}}(Q)}}{}, as well as
c) tracks from \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events \ensuremath{{C^{{\mathrm {Z}}^{*}}(Q)}}{} can be obtained directly from
Eqs.~\ref{eq-4q}--\ref{eq-qcd} by solving the equations for these
three unknown functions for each bin of $Q$, using the fractions from Table \ref{tab-defs-nwa}.
The resulting distributions are shown in Fig.~\ref{fig-unfold}.
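A minimal numerical sketch of this bin-by-bin unfolding, assuming the probabilities of Table \ref{tab-defs-nwa} are available per $Q$ bin and, for simplicity, neglecting the topology adjustment of \ensuremath{{C^{{\mathrm {Z}}^{*}}_{\mathrm{had}}(Q)}}{}; all numerical values are illustrative.

```python
import numpy as np

def unfold_bin(p_same_had, p_z_had, p_same_semi, p_same_nr, p_z_nr,
               c_had, c_semi, c_nonrad):
    """Solve Eqs. (eq-4q)-(eq-qcd) for (C_same, C_diff, C_zstar) in one Q bin."""
    # Rows: hadronic, semileptonic, non-radiative selections;
    # columns: coefficients of C_same, C_diff, C_zstar.
    A = np.array([
        [p_same_had,  1.0 - p_same_had - p_z_had, p_z_had],
        [p_same_semi, 0.0,                        1.0 - p_same_semi],
        [p_same_nr,   1.0 - p_same_nr - p_z_nr,   p_z_nr],
    ])
    b = np.array([c_had, c_semi, c_nonrad])
    return np.linalg.solve(A, b)
```

Inverting the three linear relations bin by bin in this way propagates the shared measured inputs into all three classes, which is why the errors of the unfolded distributions are correlated by construction.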
A comparison of data and Monte Carlo without BEC\, shows that there is a clear signal at
small $Q$
for pions originating
from \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events
(Fig.~\ref{fig-unfold}~c).
The data for pions from the same W boson show a larger enhancement than the corresponding simulation (Fig.~\ref{fig-unfold}~b).
At the current level of precision, it cannot be established
whether BEC\, between pions from different W bosons exists or not
(Fig.~\ref{fig-unfold}~a),
in agreement with the result of the simultaneous fit of
Sect.~\ref{sim-fit}.
Note that the errors of the three unfolded distributions
are highly correlated with each other
{\em by construction}.
For $Q$ values larger than about 0.4 GeV/$c^{2}$, the distribution for \ensuremath{{C^{\mathrm {diff}}(Q)}}{} is
consistent with being constant in both MC and data.
\subsection{\bf Consistency check}
\label{sim-fit-dv}
In the analysis described above, the resonances were subtracted using Monte Carlo information
and long-range correlations were taken into account by the empirical factor
$(1 + \delta Q + \epsilon Q^2)$ in the correlation function of Eq.~\ref{eq-usedfun}.
As an alternative, we study here the double ratio
\begin{equation}
C^{\prime} (Q)=\frac{N^{DATA}_{\pm\pm}}{N^{DATA}_{+-}} \bigg/ \frac{N^{MC}_{\pm\pm}}{N^{MC}_{+-}} ,
\end{equation}
where the Monte Carlo events are generated without BEC.
If the production of resonances\footnote{As for the main analysis,
the resonance cross-sections in \mbox{J{\sc etset}}\ are adjusted to the measured rates at LEP energies.}
and long-range correlations are well described by the simulation, these should cancel in the
double ratio and only BEC\ should remain.
The agreement between the simulation and data was checked and is good for both the unlike-charge
and like-charge distributions.
The latter show significant deviations only in the low $Q$ region, where distortions
due to BEC\ are expected in the data.
Thus, for the double ratio, a simple fit ansatz can be used:
\begin{equation}
C^{\prime} (Q) = N \, (1 + f_{\pi}(Q)\,\lambda\,
{\mathrm{e}} ^ {-Q^2 R^2}).
\end{equation}
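Because the ansatz is so simple, its parameters can be recovered directly from noise-free pseudo-data, as in the sketch below; the binning, the parameter values (loosely based on Table~\ref{tab-results-dv}), the constant $f_{\pi}$ and keeping $R$ in GeV$^{-1}$ are all illustrative simplifications.

```python
import numpy as np

F_PI = 0.84  # pion probability, roughly constant at small Q

def cprime(Q, N, lam, R):
    """Fit ansatz C'(Q) = N * (1 + f_pi * lam * exp(-Q^2 R^2)), R in GeV^-1."""
    return N * (1.0 + F_PI * lam * np.exp(-(Q * R) ** 2))

Q = np.linspace(0.025, 1.975, 40)       # bin centres in GeV/c^2
data = cprime(Q, 0.99, 0.65, 5.6)       # R = 5.6 GeV^-1, about 1.1 fm

# With noise-free data the parameters follow directly:
N_est = data[-1]                        # BEC enhancement has died out by Q ~ 2
y = np.log(data[:10] / N_est - 1.0)     # equals ln(f_pi*lam) - R^2 * Q^2
slope, intercept = np.polyfit(Q[:10] ** 2, y, 1)
R_est = np.sqrt(-slope)
lam_est = np.exp(intercept) / F_PI
```

In the real analysis the three double ratios are of course fitted simultaneously with correlated bins; this sketch only shows how the Gaussian form ties the low-$Q$ slope to $R$ and the intercept to $\lambda$.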
As in section~3.2, the double ratios for the three event selections can be described
by superpositions of the correlations for the different pion classes.
Eqs.~\ref{eq-4q} - \ref{eq-qcd} are also valid for the double ratios.
It can be shown that the relative probabilities $P$ are given by the expressions in Table~\ref{tab-defs-nwa},
except that, in this case, the numbers of like-charge pairs $N_{\pm\pm}$ have to be used instead of the numbers
of unlike-charge pairs $N_{+-}$, as was the case in section 3.2.
The relative probabilities are determined from a Monte Carlo simulation without BEC.
In a simultaneous fit to the three double ratios $C^{\prime} (Q)$ the BEC\ for the
three pion classes
$C^{\prime\,\mathrm{same}}(Q)$, $C^{\prime\,\mathrm{diff}}(Q)$ and $C^{\prime\,\mathrm{Z^{*}}}(Q)$ are determined.
A common source radius for all
pion classes is assumed and the parameters $\lambda$ and $R$ in the
correlation function $C^{\prime\,\mathrm{Z^{*}} }(Q)$ are adjusted for differences in multiplicity and topology as in section 3.2.
Seven free parameters are used in the fit.
The fit is made in the full range $0.0 < Q < 2.0$ GeV/$c^{2}$.
The fit describes the distributions well,
with a $\chi^{2}$/d.o.f. of 72.8/67.
The results of the fit are given in Table~\ref{tab-results-dv}.
They are fully compatible with the results of section 3.2.
The systematic errors are obtained in a similar way as before,
with the relevant individual contributions given
in Table~\ref{tab-systematics-dv}.
This method has the advantage that the long-range correlations
do not have to be determined in the fit.
On the other hand, this method relies more on Monte Carlo input.
\begin{table}[ht]
\begin{center}
\begin{tabular}{||c||c c c||} \hline
Parameter & same W & diff W & $(\mathrm{Z}^{0}/\gamma)^{*}$ \\ \hline
R (fm) & &$1.11\pm0.13\pm0.21$ & \\
$\lambda$ & $0.65\pm0.21\pm0.09$ & $0.50\pm0.78\pm0.14$ & $0.42\pm0.09\pm0.05$ \\
N & $0.99\pm0.01\pm0.03$ & $1.00\pm0.01\pm0.00$ & $0.99\pm0.01\pm0.02$ \\
\hline
\end{tabular}
\end{center}
\caption{Result of the simultaneous fit using the double ratio $C^{\prime}(Q)$.
The first error corresponds to the statistical uncertainty, the second one to systematics.}
\label{tab-results-dv}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{||c||c|c|c|c|c||} \hline
& $R$ (fm) & $\lambda^{{\rm same}}$ & $\lambda^{{\rm diff}}$ & $\lambda^{Z^{*}}$ & $\chi^{2}/$d.o.f. \\ \hline
Reference & $1.10\pm0.11$ & $0.64\pm0.20$ & $0.50\pm0.72$ & $0.42\pm0.09$ & $72.8/68$ \\ \hline
Variation & $\delta R$ (fm) & $\delta \lambda^{\mathrm{same}}$ & $\delta \lambda^{\mathrm{diff}}$ & $\delta \lambda^{\mathrm{Z^{*}}}$ & \\ \hline
1 & $\pm0.11$ & $\pm0.07$ & $\pm0.03$ & $\pm0.09$ & \\
2 & $-0.14$ & $-0.06$ & $<0.09$ & $0.03$ & $89.6/67$ \\
3 & $-0.12$ & $<0.01$ & $+0.06$ & $+0.03$ & $91.5/67$ \\
7 & $<0.01$ & $<0.01$ & $+0.01$ & $<0.01$ & $72.8/67$ \\ \hline
Total & $0.21$ & $0.09$ & $0.14$ & $0.05$ & \\
\hline
\end{tabular}
\end{center}
\caption{The effect of the systematic variations studied (discussed in
Sect.~\ref{sec-systematics}) on the variables $R$, \ensuremath{{\lambda^{\mathrm{ same}}}}{}, \ensuremath{{\lambda^{\mathrm {diff}}}}{}
and \ensuremath{{\lambda^{\mathrm Z^{*}}}}{} from the double ratio.
The last column shows the quality of the corresponding fit. }
\label{tab-systematics-dv}
\end{table}
\section{\bf Discussion and Summary}
We have analysed the data obtained by the OPAL detector at \mbox{$\mathrm{e}^+\mathrm{e}^-$}\
center-of-mass energies of 172 and 183 GeV to study BEC\
between pions in three different physical processes:
fully hadronic events \ensuremath{\WW\rightarrow\qq\qq}{}, semileptonic events \ensuremath{\WW\rightarrow\qq\lnu}{},
and non-radiative \ensuremath{(\Zz/\gamma)^{*}\,\,} events.
The analysis assumes equal source size $R$
for these processes.
BEC\ are observed in each of these processes.
The chaoticity parameter $\lambda$ for the semileptonic
process \ensuremath{\WW\rightarrow\qq\lnu}\ is larger than for the processes \ensuremath{\WW\rightarrow\qq\qq}\ and \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq},
but still consistent within the errors.
The long-range correlation parameters are consistent within their errors.
Furthermore, BEC\ between pions from the same W boson and different W bosons
have been studied.
The result for pions from the same W boson is consistent with those
for pions from non-radiative \ensuremath{(\Zz/\gamma)^{*}\rightarrow\qq}\ events.
At the current level of precision it is not established if
BEC\ between pions from different W bosons exists or not.
\bigskip\bigskip\bigskip
\par
\section{Acknowledgements}
\par
We particularly wish to thank the SL Division for the efficient operation
of the LEP accelerator at all energies
and for their continuing close cooperation with
our experimental group. We thank our colleagues from CEA, DAPNIA/SPP,
CE-Saclay for their efforts over the years on the time-of-flight and trigger
systems which we continue to use. In addition to the support staff at our own
institutions we are pleased to acknowledge the \\
Department of Energy, USA, \\
National Science Foundation, USA, \\
Particle Physics and Astronomy Research Council, UK, \\
Natural Sciences and Engineering Research Council, Canada, \\
Israel Science Foundation, administered by the Israel
Academy of Science and Humanities, \\
Minerva Gesellschaft, \\
Benoziyo Center for High Energy Physics,\\
Japanese Ministry of Education, Science and Culture (the
Monbusho) and a grant under the Monbusho International
Science Research Program,\\
Japanese Society for the Promotion of Science (JSPS),\\
German Israeli Bi-national Science Foundation (GIF), \\
Bundesministerium f\"ur Bildung, Wissenschaft,
Forschung und Technologie, Germany, \\
National Research Council of Canada, \\
Research Corporation, USA,\\
Hungarian Foundation for Scientific Research, OTKA T-016660,
T023793 and OTKA F-023259.\\
\section{Introduction}
The $\Delta I=1/2$ rule in kaon decays has been the subject of
very many efforts at understanding it, see \cite{kreview} for a review.
We briefly discuss it and
a short history of attempts to understand it in Section \ref{deltaI}.
In this paper we attempt to put together various approaches
that have been done before. The short-distance effects are now known to
two-loops and the extended Nambu--Jona-Lasinio Model enhanced by using
Chiral Perturbation Theory whenever possible provides a reasonable
basis for the long-distance description of hadronic interactions needed.
We put the two together in a way that treats the scheme dependence
correctly. The underlying method, reproducing the results of the short-distance
running by an effective theory of exchanges of heavy bosons,
which we call $X$-bosons, is discussed in Section \ref{Xboson}.
The low energy model is shortly discussed in Section \ref{ENJL}.
In Section \ref{twopdef} we recall the definitions of the off-shell two-point
functions that we use here to determine the weak non-leptonic couplings.
The method here is basically to calculate these two-point functions
to next-to-leading order in $1/N_c$, but to all orders in the terms
enhanced by large logarithms involving $M_W$. We then compare with
the Chiral Perturbation Theory (CHPT) calculations of the same quantity
and in the end we calculate the relevant physical matrix elements using
CHPT.
In Section \ref{BKsect} we update our earlier results for $B_K$\cite{BPBK}.
Here we discuss in some detail the routing issue in Section \ref{routing},
which is rather non-trivial in the presence of neutral $X$-bosons whose
{\em direction} is not obvious. This also explains the discrepancies
of the results for very low $\mu$ in the chiral limit of \cite{BPBK}
and the results of \cite{FG95}. We give therefore updated numbers
and expressions for the main results of \cite{BPBK} here.
Section \ref{deltaS1longdistance} contains the same discussion
but for the $\Delta S=1$ operators $Q_1$ to $Q_6$. The current$\times$current
operators $Q_1$, $Q_2$, and\footnote{We use $Q_4=Q_2-Q_1+Q_3$.} $Q_3$
are computed at next-to-leading (NLO) in $1/N_c$ within the ENJL model.
The split in Penguin-like and $B_K$-like contributions is discussed.
For $Q_5$ we cannot simply discuss this split; here the correct
chiral behaviour is only reproduced after summing both contributions.
When extending the method to $Q_6$ one discovers that the factorizable
contribution from $Q_6$ has an infrared divergence in the chiral limit.
We discuss this problem in Section \ref{Q6discussion} and show how
it is cancelled by the non-factorizable contribution.
This problem might be part of the reason why estimates for the $Q_6$ operator
vary so widely.
After correcting for this we present also results for the
matrix elements of $Q_6$.
Finally we put the numerical results for the long- and short-distances
together in Section \ref{fullresults} and discuss their stability.
We also discuss here the coefficients $a$, $b$, and $c$ defined earlier
by Pich and de Rafael \cite{PdeR95}. We recapitulate our main results
and conclusions in Section \ref{conclusions}.
\section{The $\Delta I=1/2$ Rule in $K \to \pi \pi$}
\label{deltaI}
The $K\to\pi\pi$ invariant amplitudes can be decomposed into definite isospin
quantum numbers amplitudes as $[A\equiv -i T]$
\begin{eqnarray}
A[K_S\to \pi^0\pi^0] &
\equiv &\sqrt{2\over3} A_0 -{2\over\sqrt 3} A_2 \nonumber \, ; \\
A[K_S\to \pi^+ \pi^-]
&\equiv &\sqrt{2\over3} A_0 +{1\over\sqrt 3} A_2 \nonumber \, ; \\
A[K^+\to\pi^+\pi^0] &\equiv& {\sqrt{3}\over2} A_2 \, .
\end{eqnarray}
Here $K_S \simeq K_1^0 +\epsilon \, K_2^0$, $K^0_{1(2)}\equiv(K^0-(+)
\overline{K^0})/\sqrt 2$, and CP($K^0_{1(2)})=+(-)K^0_{1(2)}$.
In this paper we are interested in
the CP conserving part of $K\to \pi\pi$, so we set
the small phase in the Standard Model CKM matrix elements
and therefore $\epsilon$ to zero.
Above we have included the final state interaction phases
$\delta_0$ and $\delta_2$ into the
amplitudes $A_0$ and $A_2$ as follows. For the isospin $1/2$
amplitude
\begin{equation}
A_0\equiv -i a_0 \, e^{i\delta_0}\, ,
\end{equation}
and for the isospin $3/2$
\begin{equation}
A_2\equiv -i a_2 \, e^{i\delta_2}\, .
\end{equation}
With the measured $K_S\to \pi^0\pi^0$ partial width $\Gamma_{00}$,
$K_S\to \pi^+\pi^-$ partial width $\Gamma_{+-}$,
and $K^+\to\pi^+\pi^0$ partial width $\Gamma_{+0}$ \cite{PDG},
we can calculate the ratio
\begin{equation}
\left|\frac{A_0}{A_2}\right| = \left({3\over4}
\sqrt{\frac{1-4 m_\pi^2/m_{K^+}^2}{1-4m_\pi^2/m_{K^0}^2}}
\left(\frac{\Gamma_{00}+\Gamma_{+-}}{\Gamma_{+0}}\right)-1\right)^{1/2} = 22.10 \, .
\end{equation}
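As a rough numerical illustration (using approximate PDG values): since the $K_S$ decays
almost exclusively into two pions, $\Gamma_{00}+\Gamma_{+-}\simeq 1/\tau_{K_S}$ and
$\Gamma_{+0}\simeq {\rm BR}(K^+\to\pi^+\pi^0)/\tau_{K^+}$, so that with
$\tau_{K^+}/\tau_{K_S}\approx 138$, ${\rm BR}(K^+\to\pi^+\pi^0)\approx 0.21$,
and a phase-space factor very close to unity,
\begin{equation}
\left|\frac{A_0}{A_2}\right| \approx
\left(\frac{3}{4}\,\frac{138}{0.21}-1\right)^{1/2}\approx 22\, .
\end{equation}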
This result is what is called the $\Delta I=1/2$ rule for kaon decays.
Understanding this rule quantitatively has been a long-standing issue
in the literature ever since its experimental determination.
It is by now clear
that it is the sum of several large contributions
both from short distance origin \cite{one-loop,two-loops}
and from long distance origin \cite{BG87,BBG,KMW90}
which add constructively to make $|A_0|$ much larger than $|A_2|$.
The lattice QCD community has also spent a large effort
on this problem, see \cite{lattice} for some recent reviews.
Among the long distance
enhancements of the $|A_0/A_2|$ ratio, the order $p^4$ chiral corrections
have been found to be quite important.
The CHPT analysis to order $p^4$ can be found in \cite{KMW90}
and both the counter-terms and the chiral logs to that order
can be found in \cite{BPP98}, the chiral logs
were originally calculated in \cite{Bloos}. There are some small differences
between the two results. The fit of the data to both
the order $p^4$ $K\to \pi\pi$ and $K\to\pi\pi\pi$ counter-terms
and chiral logs \cite{KMW90,KAM90} allowed one to extract
\footnote{The fit uncertainties to this result were not quoted
in \cite{KMW90,KAM90}.}
\begin{eqnarray}
\label{I=1/2p2}
\left|\frac{A_0}{A_2}\right|^{(2)}&=& 16.4
\end{eqnarray}
to $O(p^2)$; i.e., around 34\% of the enhancement in the $\Delta I=1/2$
rule is due just to order $p^4$ and higher CHPT corrections.
\subsection{CHPT to order $p^2$}
To order $p^2$ in CHPT, the amplitudes
$a_0$ and $a_2$ can be written in terms of two couplings,
\begin{eqnarray}
a_0 \equiv a_0^{8}+a_0^{27}&=&C \, \left[9 G_8+G_{27}\right]
\frac{\sqrt 6}{9} F_0 (m_K^2-m_\pi^2)\, , \nonumber \\
a_2&=& C \, G_{27} \frac{10\sqrt 3}{9} F_0 (m_K^2-m_\pi^2)\, ,
\end{eqnarray}
with
\begin{equation}
\label{defC}
C\equiv-\frac{3}{5}\, \frac{G_F}{\sqrt 2} \, V_{ud} \, V_{us}^*
\approx - 1.06\cdot10^{-6} \, \mbox{GeV}^{-2}
\end{equation}
and
\begin{equation}
\delta_0=\delta_2=0\, .
\end{equation}
The couplings $G_8$ and $G_{27}$ are two of the $O(p^2)$
$\Delta S=1$ couplings. They are defined in \cite{BPP98} and can
be determined from the $O(p^2)$ amplitudes \cite{KAM90} to be
\begin{equation}
\label{valueG8G27}
G_8 = 6.2 \pm 0.7 \qquad {\rm and} \qquad G_{27} = 0.48 \pm 0.06 \,.
\end{equation}
Here we have only included the error bars from the value
of the pion decay constant in the chiral limit, $F_0=(86\pm10)$ MeV;
this corresponds to $f_\pi=92.4$ MeV.
Again there are uncertainties from the fit procedure and approximations
not quoted in \cite{KMW90,KAM90}.
Therefore to $O(p^2)$
\begin{equation}
\label{ratioA0A2p2}
\left|\frac{A_0}{A_2}\right|^{(2)}=\sqrt 2\,
\left( \frac{9 \, G_8 + G_{27}}{10 \, G_{27}} \right)\, .
\end{equation}
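As a consistency check, inserting the central values of (\ref{valueG8G27})
into this expression gives
\begin{equation}
\left|\frac{A_0}{A_2}\right|^{(2)}=\sqrt 2\,
\frac{9\times 6.2+0.48}{10\times 0.48}\approx 16.6\, ,
\end{equation}
in agreement with the fitted value in (\ref{I=1/2p2}).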
To understand the difficulty of the task of reproducing
(\ref{I=1/2p2}) it is convenient to make a $1/N_c$ analysis
of the $O(p^2)$ result. At large $N_c$, $G_8=G_{27}=1$ and
\begin{equation}
\label{I=1/2largeN}
\left|\frac{A_0}{A_2}\right|^{(2)}_{N_c} = \sqrt 2
\end{equation}
i.e. a factor 11.6 smaller than the QCD result in (\ref{I=1/2p2})!
Notice that to $O(p^2)$ there are no quark-mass corrections,
and therefore no chiral logarithms, in the ratio above.
So we have to explain one order of magnitude enhancement within QCD in the
chiral limit with $1/N_c$ suppressed corrections.
Another parametrization which will be useful when
studying the $\Delta I=1/2$ rule is the one introduced
by Pich and de Rafael in \cite{PdeR95}. In this parametrization
\begin{eqnarray}
\label{abc}
G_{27}&\equiv& a + b \, , \nonumber \\
G_{8}&\equiv& a + b + {5\over3}(c-b) \, .
\end{eqnarray}
The nice feature of this parametrization is that $a$, $b$, and
$c$ have a one-to-one correspondence with the three different
QCD quark-level topologies. The $a$-type coupling corresponds to
configurations that include the factorizable ones (Figure \ref{figfull}a).
This coupling is of
order 1 in the large $N_c$ limit and has only $1/N_c^2$
corrections. The $b$-type coupling corresponds to what we call $B_K$-like
topologies (Figure \ref{figfull}b) and is of order $1/N_c$.
This coupling is related to the value of the $B_K$ parameter
in the chiral limit.
The $c$-type coupling corresponds to what we call Penguin-like topologies
(Figure \ref{figfull}c) and is also of order $1/N_c$.
So in the large $N_c$ limit
\begin{equation}
a=1 \hspace{1cm} {\rm and} \hspace{1cm}\; b=c=0 \; .
\end{equation}
\begin{figure}
\begin{center}
\epsfig{file=figfull.ps,width=12cm}
\end{center}
\caption{\label{figfull} The three types of contributions appearing
in the evaluation of matrix-elements of operators. Namely, (a) Factorizable,
(b) $B_K$-like, and (c) Penguin-like.}
\end{figure}
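In terms of $a$, $b$, and $c$ the $O(p^2)$ ratio (\ref{ratioA0A2p2}) becomes
\begin{equation}
\left|\frac{A_0}{A_2}\right|^{(2)}=\sqrt 2 \,
\frac{2a-b+3c}{2\,(a+b)}\, ,
\end{equation}
which reduces to $\sqrt 2$ for $a=1$, $b=c=0$, and makes explicit that
the observed enhancement requires a sizeable positive $c$ and/or a
negative $b$.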
The main objective of this paper is the calculation of the $1/N_c$
corrections to (\ref{I=1/2largeN}), i.e. the couplings $b$ and $c$.
The coefficients $a$, $b$, and $c$ in \cite{PdeR95} were defined in a
large $N_c$ expansion within short-distance QCD, i.e.
with quarks and gluons. In the low-energy regime where
the long-distance part has to be evaluated
one however cannot distinguish the
$1/N_c^2$ corrections to $a$ from the ones to the coefficients $b$
and $c$. So for us $a$ takes the large $N_c$ value
$a=1$,
$b=G_{27}-1$, and $c=(3G_8+2G_{27})/5-1$. This definition can be used
both at long and short-distances and only differs by terms
of $O(1/N_c^2)$ with the one in \cite{PdeR95}.
The definition above has also the advantage that all couplings $a$,
$b$, and $c$ are scale independent.
Notice that in the present work the $1/N_c^2$ corrections
are of short-distance origin only.
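Taking the central values (\ref{valueG8G27}) at face value, the couplings
introduced above come out as
\begin{equation}
b = G_{27}-1 \approx -0.52 \, , \qquad
c = \frac{3 G_8 + 2 G_{27}}{5}-1 \approx 2.9 \, ,
\end{equation}
so the formally $1/N_c$-suppressed couplings must be of order one or larger
to reproduce the data.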
\section{The Technique}
\subsection{$\Delta S=1$ and $\Delta S=2$ Two-Point Functions}
\label{twopdef}
The theoretical framework we use to study
the strangeness changing transitions in one and two units
was already introduced in Refs. \cite{BPBK,BPP98,BPDashen}.
The original suggestion for this type of method was in \cite{BDSPW85}.
The basic objects are the pseudo-scalar density correlators
\begin{equation}
\label{two-point}
\Pi^{ij}(q^2)\equiv
i \int {\rm d}^4 x \, e^{i q\cdot x} \langle 0 |
T\left(P^{i\dagger}(0) P^j(x) e^{i \Gamma_{\Delta S=a}}\right) |0\rangle
\end{equation}
in the presence of strong interactions. Above, $a=0, 1, 2$
stands for $|\Delta S| =$ 0, 1, and 2 transitions and $i, j$ are light quark
combinations corresponding to the octet of the lightest
pseudo-scalar mesons;
\begin{eqnarray}
\label{pseudosources}
P^{\pi^0}(x)\equiv \frac{\displaystyle 1}{\displaystyle \sqrt 2}
\left[\overline u i \gamma_5 u
- \overline d i \gamma_5 d\right]\, , &
P^{\pi^+}(x)\equiv \left[ \overline d i \gamma_5 u \right]\, ,
P^{K^0}(x)\equiv \left[ \overline s i \gamma_5 d \right] \, , \nonumber \\
P^{K^+}(x)\equiv \left[\overline s i \gamma_5 u \right]\, , &
P^{\eta_8}(x)\equiv \frac{\displaystyle 1}{\displaystyle \sqrt 6}
\left[\overline u i \gamma_5 u
+ \overline d i \gamma_5 d - 2 \overline s i \gamma_5 s \right]\, .
\end{eqnarray}
Here and in the remainder, summation over colour-indices inside brackets
is assumed unless colour indices have been explicitly indicated.
These two-point functions
were analyzed extensively within CHPT to order $p^4$ in \cite{BPP98}.
In that reference we also pointed out how one can obtain information
on $K \to \pi \pi$ amplitudes
at order $p^4$ from off-shell $K \to \pi$ transitions.
Now, we want to use the $1/N_c$ technique used in \cite{BPBK,BPDashen}
to compute the off-shell $K \to \pi$ amplitudes and obtain
the relevant counter-terms of order $p^2$.
See \cite{BPP98} for explicit details of which order $p^4$ counter-terms
can be obtained this way, and for possible ways of estimating some
couplings that cannot.
In the large $N_c$ limit, there is just one operator
in the Standard Model which changes strangeness by one unit,
\begin{equation}
\label{q2}
Q_2 \equiv [\bar{s}\gamma^\mu(1-\gamma_5)u](x)
[\bar{u}\gamma_\mu(1-\gamma_5) d](x) \, .
\end{equation}
After the inclusion of gluonic corrections $Q_2$ mixes with
\begin{equation}
\label{q1}
Q_1 \equiv [\bar{s}\gamma^\mu(1-\gamma_5)d](x)
[\bar{u}\gamma_\mu(1-\gamma_5) u](x)
\end{equation}
via box-type diagrams (first reference in \cite{one-loop}),
and with
\begin{eqnarray}
\label{operators}
Q_3&\equiv& [\bar{s}\gamma^\mu(1-\gamma_5)d](x)
\sum_{q=u,d,s}[\bar{q}\gamma_\mu(1-\gamma_5) q](x)
\nonumber\\ \nonumber
Q_4&\equiv& [\bar{s}^\alpha\gamma^\mu(1-\gamma_5)d_\beta](x)
\sum_{q=u,d,s}[\bar{q}^\beta\gamma_\mu(1-\gamma_5) q_\alpha](x)
\\ \nonumber
Q_5&\equiv& [\bar{s}\gamma^\mu(1-\gamma_5)d](x)
\sum_{q=u,d,s}[\bar{q}\gamma_\mu (1+\gamma_5)q](x)
\\
Q_6&\equiv& [\bar{s}^\alpha\gamma^\mu(1-\gamma_5)d_\beta](x)
\sum_{q=u,d,s}[\bar{q}^\beta\gamma_\mu(1+\gamma_5) q_\alpha](x)
\end{eqnarray}
via the so-called penguin-type diagrams \cite{one-loop}. Since their
numerical importance for the issues we want to address here is small,
we switch off electromagnetic interactions for the sake of
simplicity. The operator
$Q_4$ is redundant and satisfies $Q_4 = Q_2-Q_1+Q_3$.
Under SU(3)$_L$$\times$SU(3)$_R$ rotations $Q_-
\equiv Q_2-Q_1$, $Q_3$, $Q_4$, $Q_5$,
and $Q_6$ transform as $8_L \times 1_R$ and only carry
$\Delta I=1/2$ while $Q_{27}\equiv
3 Q_1 + 2 Q_2 - Q_3$ transforms as $27_L \times 1_R$ and carries
both $\Delta I=1/2$ and $\Delta I=3/2$.
The Standard Model low energy effective action describing
$\left|\Delta S\right|=1$ transitions can thus be written as
\begin{equation}
\Gamma_{\Delta S=1} \equiv -C_{\Delta S=1}
\, {\displaystyle \sum_{i=1}^6} \, C_i(\mu) \,
\int {\rm d}^4 y \, Q_i (y) \, + {\rm h.c.}
\end{equation}
where
$C_{\Delta S=1} = (G_F/\sqrt 2) \, V_{ud} V_{us}^* \,$.
There is just one operator changing strangeness by two units
in the Standard Model,
\begin{equation}
\label{qS2}
Q_{\Delta S=2}\equiv [\bar{s}\gamma^\mu(1-\gamma_5)d](x)
[\bar{s}\gamma_\mu(1-\gamma_5)d](x)
\end{equation}
which transforms under SU(3)$_L$$\times$SU(3)$_R$ rotations
as $27_L \times 1_R$.
The matrix elements of the $Q_i$ with $i=1,\cdots,6$, and $Q_{\Delta S=2}$
operators depend on the renormalization
group (RG) scale $\mu$ such that physical processes are scale independent.
\subsection{The $X$-Boson Method and Matching}
\label{Xboson}
In this section we explain the basics of how to deal with the
resummation of
large logarithms using the renormalization group and how to
do the matching between the low energy model and the short-distance evolution
inside QCD. The guiding line here is the $1/N_c$ expansion.
Let us first explain the philosophy for the case of photon exchange in non-leptonic
processes \cite{BPDashen,BBGpp,BDashen}. The basic electromagnetic (EM)
non-leptonic interaction is given by
\begin{equation}
{\cal L}_{EM} =
\frac{(ie)^2}{2} \, \int \frac{{\rm d}^4 r}{(2\pi)^4}
\int {\rm d}^4 x \, \int {\rm d}^4 y \, e^{i r\cdot (x-y)}
\end{equation}
Here we used the Feynman gauge; for a discussion of the gauge dependence
see \cite{BPDashen}. The current is $J^\mu_{Had}=(\overline q Q \gamma^\mu q)$
with $q^T = (u, d, s)$, where $Q$ is a
3 $\times$ 3 diagonal matrix collecting the light-quark electric charges.
The integral over $r^2$ we rotate into Euclidean space and split into
a long and a short distance piece,
\begin{equation}
\label{split}
\int {\rm d}^4 r_E = \int {\rm d} \Omega
\left(\int_0^{\mu} {\rm d} |r_E| \, |r_E|^3 +
\int_{\mu}^\infty {\rm d} |r_E| \, |r_E|^3 \right)\,.
\end{equation}
The long distance piece we evaluate
in an appropriate low-energy model, CHPT\cite{BDashen}, ENJL\cite{BPDashen}
or using other hadronic models \cite{BBGpp}.
The short-distance part can be evaluated using the operator product expansion
(OPE) and the matrix-elements of the resulting operators can be evaluated to
the leading non-trivial order in $1/N_c$ using the same
low-energy hadronic model as for the long-distance part.
This procedure works extremely well in the case of internal photon exchange.
The problem is that in weak decays there are large logarithms
present of the type $\ln(M_W/\mu_L)/N_c$ which make the
$1/N_c$ expansion of questionable validity. The solution to this problem
at one-loop order was presented in \cite{BPBK} where we showed that
the integral in (\ref{split}) satisfied the same equation as the one-loop
evolution equation. This method was very nice for $B_K$ and can
also be applied to the $\Delta S=1$ transitions.
Here we will give an alternative description of the method used there
that will be extendable in a relatively straightforward way to the
two-loop renormalization group calculations. The precise definitions
and calculations we defer to a future publication.
We start at the scale $M_W$ where we replace the exchange of $W$ and top
quark in the full theory with higher dimensional
operators using the OPE in an effective theory
where these heavy particles have been integrated out.
So at a scale $\mu_H\approx M_W$ we need the matching conditions between
the full theory and the effective one. As usual we get them by
setting the matrix elements between external states of light particles,
i.e. the remaining quarks and gluons, in transition amplitudes with
$W$ boson and top quark exchanges
equal to those of the relevant operators in the effective theory.
\begin{equation}
\mbox{Step 1: at }\mu_H\approx M_W~:\quad
\langle 2 | (W,top\mathrm{-exchange})_{Full}|1\rangle =
\langle 2|\sum_i \, \tilde C_i(\mu_H) \, \tilde Q_i |1\rangle \, .
\end{equation}
We then proceed by using the renormalization group to run down from $\mu_H$
to $\mu_L$ below the charm quark mass where we have an effective theory
with gluons and the three lightest quark flavours.
At each heavy-particle threshold that is crossed, new matching conditions
between the two effective field theories (with and without the heavy particles
being integrated out) have to be set; this is done completely within
perturbative QCD, see e.g. \cite{BurasReviews}. So that
\begin{equation}
\mbox{Step 2: from }\mu_H ~\mathrm{to~}\mu_L \qquad
\langle 2|\sum_i \, \tilde C_i(\mu_H) \, \tilde Q_i |1\rangle
\longrightarrow
\langle 2|\sum_j \, C_j(\mu_L) \, Q_j |1\rangle \, .
\end{equation}
At Step 3 we again introduce a new effective field theory which reproduces
the physics of the operators $Q_j$ below $\mu_L$ by the exchange
of heavy $X_i$-bosons with couplings $g_i$. Again we need to set
matching conditions
\begin{equation}
\label{match3}
\mbox{Step 3: at }\mu_L :\quad
\langle 2 | (X_j\mathrm{-exchange})|1\rangle =
\langle 2|\sum_j C_j(\mu_L) Q_j |1\rangle\,.
\end{equation}
Here the matching means that the left-hand side should be evaluated in an
operator product expansion in inverse powers of $M_{X_i}$.
The right hand side matrix elements in (\ref{match3}) can be evaluated
completely within perturbative QCD and therefore all
the dependence on the renormalization scheme and the choice
of the basis $Q_j$ and of evanescent operators disappears in this step.
This procedure fixes the $g_i$ couplings as functions of the
chosen masses $M_{X_i}$ and the matrix elements
$\langle 2|\sum_j C_j(\mu_L) Q_j |1\rangle$ which are scheme
independent. Depending on the order to which we decide to calculate in the
effective theory, $g_i$ will depend on additional
terms that can be fully determined within the effective
theory with heavy $X_i$ bosons.
As an example, let us use the effective field theory with two-loop
accuracy for the running between
scales $\mu_H$ and $\mu_L$ and calculations at next-to-leading order
in $1/N_c$ within the heavy $X_i$ boson effective theory.
The term $C_1(\mu_L) \, Q_1$ is reproduced in the $X_i$
effective field theory by the exchange of a heavy enough
vector-boson $X_1^\mu$ with couplings
\begin{equation}
X_1^\mu \left\{ g_1 \left[\bar{s} \gamma_\mu (1-\gamma_5) d \right]+
g_1^\prime \left[\bar{u} \gamma_\mu (1-\gamma_5) u \right]
\right\}\, + {\rm h.c.}
\end{equation}
The $X_1$ boson has only $\Delta S=1$ components.
This is shown pictorially
in Fig. \ref{figX}.
\begin{figure}
\begin{center}
\epsfig{file=figX.ps, width=10cm}
\end{center}
\caption{\label{figX} The reproduction of the operator $Q_1$
by the exchange of a neutral boson $X_1$.}
\end{figure}
The scale $\mu_L$ should be high enough to use
perturbation theory.
We have the following matching conditions (\ref{match3})
in this case (we assume that $Q_1$ only has
multiplicative renormalization for simplicity)
\begin{equation}
\label{match4}
\frac{g_1 \,g_1'^\dagger }{M_{X_1}^2} \left(
1+\frac{\alpha_s(\mu_L)}{\pi}\left[\tilde d_1
\ln\left(\frac{M_{X_1}}{\mu_L}
\right)+\tilde r_1\right]\right)=
C_1(\mu_L) \left( 1 + \frac{\alpha_s(\mu_L)}{\pi}\, r_1 \right) \, .
\end{equation}
The $r_1$ term cancels the scheme dependence of the two-loop
Wilson coefficient $C_1(\mu_L)$.
Notice that we can choose independently
any regularization scheme on the left and right hand sides.
In the present work we will use the NDR (naive dimensional regularization)
two-loop running between $\mu_H$ and $\mu_L$.
All the large logarithms of the type
$\ln(M_W/\mu_L)$ are absorbed in the couplings of the $X_i$
boson in a scheme independent way.
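To first order in $\alpha_s$ the matching condition (\ref{match4}) can be
solved explicitly for the $X$-boson coupling,
\begin{equation}
\frac{g_1 \, g_1'^\dagger}{M_{X_1}^2} =
C_1(\mu_L)\left(1+\frac{\alpha_s(\mu_L)}{\pi}
\left[r_1-\tilde r_1-\tilde d_1 \ln\left(\frac{M_{X_1}}{\mu_L}\right)\right]
\right)+O(\alpha_s^2)\, ,
\end{equation}
which makes explicit how the scheme dependence of $r_1$ is compensated by
$\tilde r_1$, and how the $\ln M_{X_1}$ dependence must cancel against the
short-distance part of the calculation in the effective theory.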
Now we come to Step 4.
Assume we want to calculate $K^0 \to \pi^0$ matrix element
in the Standard Model. Since we have included the effect of all the large
logarithms between $M_W$ and $\mu_L$ in the $g_i$ couplings,
we can now apply the same procedure explained at the beginning of this section
for the photon exchange case \cite{BPDashen,BBGpp,BDashen}
and remain at next-to-leading order in $1/N_c$. This we do
now for the effective three-flavour
field theory with heavy massive $X_i$ bosons.
So we split the integral over $|r_E|$ into a long distance
piece (between 0 and $\mu$) and a short distance piece (between $\mu$ and
$\infty$) as in (\ref{split}).
When evaluating the second term in (\ref{split}) we will find precisely
the correct logarithmic dependence on $M_{X_1}$ to cancel the one in
(\ref{match4}). The presentation of the scheme dependent
constants $r_1$ and $\tilde r_1$ for $\Delta S=1$ and $\Delta S=2$
is deferred to a future publication.
We then require some matching window in $\mu$
along the lines explained in \cite{BPBK} between these two pieces.
We will use the framework described above
to calculate $\Delta S=1$ and $\Delta S=2$ two-point functions
and defer the full discussion about this procedure to a future publication.
In practice we will also choose $\mu=\mu_L$.
The same procedure can in principle be used in lattice gauge theory
calculations where one can then include the $X_i$-bosons explicitly in the
lattice regularized theory or equivalently work with the corresponding
non-local operators.
\subsection{The Low-Energy Model}
\label{ENJL}
The low-energy model we use here is the extended Nambu--Jona-Lasinio model.
It consists of the free Lagrangian for the quarks
with point-like four-quark couplings added. This model has the correct
chiral structure and spontaneously breaks
chiral symmetry. It includes a surprisingly large amount of the observed
low energy hadronic phenomenology. We refer to the review articles
\cite{reviewsNJL} and the previous papers where we have discussed
the various aspects of the ENJL model used
here \cite{BPBK,BBR,BPano,BP94,BPPgm2}. A short overview of
the advantages and disadvantages can be found in Section 3.2.1
of \cite{BPDashen}.
It is well known, however, that the model does not confine and
in some cases does not have the correct momentum dependence at large $N_c$.
These two issues were treated in \cite{PPR98}, where a low-energy model
correcting the momentum dependence at large $N_c$ was presented.
The bad high energy behaviour of ENJL two-point functions
produces some unphysical cut-off dependence.
In this work we try to smear out this bad behaviour as follows.
For the fitting procedure
we only use points with small values of all momenta,
always in the Euclidean domain. We also keep only the first few terms
in the fit to a polynomial (of order six at most),
which are therefore not extremely sensitive
to the bad high-energy behaviour of the ENJL model.
The model in \cite{PPR98} gives very good
perspectives that this unphysical behaviour can be eliminated to a large
extent, see for instance the
recent work in \cite{KPR98}, and would provide a natural
extension of this work.
\section{$\Delta S=2$ Transitions: Long Distance}
\label{BKsect}
In this section we apply the technique to
$\Delta S=2$ transitions. These transitions were already studied
in \cite{BPBK} using the same model for the low
energy contributions, there are however differences in the routing
of the momenta with respect to the one we took in \cite{BPBK}.
See the next section for a discussion of this issue.
We study the two-point function
$\Pi^{\overline K^0 \, K^0}(q^2)$ in the presence of strong interactions
as defined in (\ref{two-point}).
The operators in $\Gamma_{\Delta S =2}$ are replaced by an $X$ boson
coupling to $[\bar{s}\gamma_\mu(1-\gamma_5)d](x)$ currents as described in
Section \ref{Xboson}.
We evaluate the two-point function then as a function of $\mu$ for various
values of $q^2$ and masses and this allows us to extract the relevant
couplings in CHPT. We restrict ourselves here to the $O(p^2)$ coefficient
$G_{27}$ and the actual value of $\hat B_K$.
\subsection{The Routing Issue}
\label{routing}
In this section we would like to explain why our present results
on $B_K$ differ from those presented in \cite{BPBK} even though we
use the same method and the same model. At the same time this
will explain the difference between the result from
Section 4 in \cite{BPBK} for $G_{27}$ and the
one from \cite{FG95}. Both papers use the method of \cite{BBGpp}
and \cite{BGK91} to identify the cut-off scale with the scale of the
short-distance evolution, and we have checked the calculations
in both papers several times and found no errors in either. We will present the discussion
here in the case where the low energy model used is CHPT to simplify
the discussion.
The source of the difference turned out to be more subtle. In \cite{BPBK}
the choice of momentum for the $X$-boson was made to be $r+q$ where
$q$ is the momentum going through the two-point function defined
in (\ref{two-point}) and $r$ is the loop integration variable.
This particular choice was
made in order to have the lowest order always non-zero, even if the
range of momenta in $r$ integrated over was such that $|r^2|<|q^2|$.
We had also always chosen the direction of $r+q$ through the $X$ boson such
that the internal propagator appearing in diagram (b) of Fig. \ref{figBK}
had momentum $r$. Since the $X$ in that case was a neutral gauge boson this
was a natural choice.
\begin{figure}
\begin{center}
\epsfig{file=figBK.ps,width=10cm}
\end{center}
\caption{\label{figBK} Chiral Perturbation Theory
contributions to $\Pi_{\Delta S=a}(q^2)$.
(a) Lowest order. (b)-(f) Higher order non-factorizable.
The full lines are mesons. The zig--zag line is the $X$-boson.}
\end{figure}
It turns out however that in the presence of a
cut-off some of the contributions obtained
with this routing do not have the correct
CPS symmetry. This symmetry imposes that some of the contributions have to
have the internal propagator in Fig. \ref{figBK}
with momentum $r+2q$ instead of $r$. The precise change has been depicted in
Fig. \ref{figrouting}. The momentum flow as depicted in (a) should be replaced
by the sum of (b) and (c).
\begin{figure}
\begin{center}
\epsfig{file={figrouting.ps},width=12cm}
\end{center}
\caption{\label{figrouting} The routing for the $\Delta S=2$ operator enforced
by CPS symmetry. (a) Routing used in \cite{BPBK}, (b)+(c)
The correct routing as it should have been used.}
\end{figure}
This does not affect the coefficients of the chiral logarithms.
Therefore one can
use any routing when using a regularization which does not produce
analytic dependence on the cut-off. Unfortunately, this incorrect routing
was actually causing most
of the bad behaviour for $B_K(\mu)$ for high values of $\mu$
in Table 1 of \cite{BPBK} and the difference
with the result for $G_{27}$ of \cite{BPBK} and \cite{FG95}.
In fact, using the background
field method as in \cite{FG95} the CPS symmetry is automatically satisfied
at order $p^2$ with any routing.
We have now corrected for this problem and obtain a much more reasonable
matching between long-distance contributions
and the short-distance contributions.
Nevertheless, it turns out that the range of values chosen for $\mu$
in \cite{BPBK} to make the predictions was not very much affected by
the routing problem explained above. The results we now obtain
are much more stable numerically and in the same ranges as the ones
quoted in \cite{BPBK}. We also agree with the result in \cite{FG95}
for $G_{27}(\mu)$ obtained from lowest order CHPT,
\begin{equation}
\label{G27CHPT}
G_{27}(\mu) = 1 - \frac{3\mu^2}{16\pi^2 F_0^2}\,.
\end{equation}
Here and in what follows, the $\mu$ dependent $G_8(\mu)$, $G_8'(\mu)$,
$G_{27}(\mu)$, and $B_K(\mu)$ couplings stand for the long-distance
contributions to those couplings, i.e. with
$[1+(\alpha_s(\mu)/\pi)\, r_{1, j}] \, C_j(\mu)=1$.
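For orientation, with $F_0=86$ MeV the lowest-order CHPT expression
(\ref{G27CHPT}) gives
\begin{equation}
G_{27}(0.5~\mbox{GeV}) \approx 0.36 \, , \qquad
G_{27}(0.7~\mbox{GeV}) \approx -0.26 \, ,
\end{equation}
i.e. a very fast fall-off with $\mu$; the ENJL evaluation presented below
falls off considerably more slowly.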
\subsection{CHPT Results}
Here we update Section 4 of \cite{BPBK} to correct for the routing
problem. The non-factorizable contribution to
$\Pi^{\overline K^0 \, K^0}(q^2)$ is given by the diagrams in Figure
\ref{figBK} and is:
\begin{eqnarray}
\lefteqn{\frac{-8 B_0^2 F_0^2}{(q^2-m_K^2)^2} \Bigg\{
\int^\mu\frac{d^4r_E}{(2\pi)^4}\frac{r_E^2 q_E^2}{(r_E^2+m_K^2)^2}
-\int^\mu\frac{d^4r_E}{(2\pi)^4}\frac{r_E^2}{r_E^2+m_K^2}}&&\nonumber\\&&
\!\!\!+\frac{1}{2}\int^\mu\frac{d^4r_E}{(2\pi)^4} (r_E+2 q_E)^2
\left[\frac{1}{(r_E+q_E)^2+m_\pi^2}+\frac{1}{(r_E+q_E)^2+2m_K^2-m_\pi^2}\right]
\Bigg\}
\end{eqnarray}
These integrals can be performed analytically but the result is rather
cumbersome. The Euclidean continuation of $q^2$ we used is $q_E^2 = -q^2$.
The result in the chiral limit becomes
\begin{equation}
\frac{-8 B_0^2 F_0^2}{(q^2-m_K^2)^2}\frac{1}{16\pi^2F_0^2}
\left\{-3\mu^2 q^2-\frac{5}{6}q^4\right\}
\end{equation}
and for $q^2=0$
\begin{eqnarray}
\frac{-8 B_0^2 F_0^2}{(q^2-m_K^2)^2}\frac{1}{16\pi^2F_0^2}
\nonumber &&\\ && \hspace{-4cm} \times \Bigg\{
-\frac{1}{2}(2m_K^2-m_\pi^2)\left(\mu^2-(2m_K^2-m_\pi^2)
\ln\left(\frac{\mu^2+2m_K^2-m_\pi^2}{2m_K^2-m_\pi^2}\right)\right)
\nonumber \\ && \hspace{-4cm} +
m_K^2\left(\mu^2-m_K^2\ln\left(\frac{\mu^2+m_K^2}{m_K^2}\right)\right)
-\frac{1}{2}m_\pi^2
\left(\mu^2-m_\pi^2\ln\left(\frac{\mu^2+m_\pi^2}{m_\pi^2}\right)\right)
\Bigg\}
\end{eqnarray}
These results allow us to obtain the equivalent of (\ref{G27CHPT})
for the $O(p^4)$ coefficients.
\subsection{The $B_K$ Parameter: Long Distance and Short Distance}
We now take the results from the ENJL evaluation of
$\Pi^{\overline K^0 \, K^0}(q^2)$ both in the chiral limit and
in the case of quark masses corresponding to the physical pion and
kaon mass and use these to estimate $B_K$ and $G_{27}$.
The final results for $B_K$ in the chiral limit, $B_K^\chi(\mu)$
and $G_{27}(\mu)=4 B_K^\chi(\mu)/3$ are shown in Table \ref{tableBK}.
\begin{table}
\begin{center}
\begin{tabular}{|c|ccccccc|}
\hline
$\mu$(GeV)&$G_{27}(\mu)$&$B_K^\chi(\mu)$&$B_K(\mu)$&$\hat B_{K(1)}$
&$\hat B_{K(2)}^{\mbox{SI}}$&$\hat B_{K(2)}^{\exp}$&$
\hat B_{K(2)}^{\chi\exp}$\\
\hline
0.3& 0.830 & 0.622 & 0.784 & -- & -- & -- & --\\
0.4& 0.737 & 0.552 & 0.776 & -- & -- & -- & --\\
0.5& 0.638 & 0.478 & 0.762 & 0.79 & 0.36 & 0.48 & 0.30\\
0.6& 0.537 & 0.402 & 0.746 & 0.81 & 0.57 & 0.62 & 0.33\\
0.7& 0.431 & 0.323 & 0.721 & 0.81 & 0.63 & 0.66 & 0.30\\
0.8& 0.320 & 0.240 & 0.688 & 0.79 & 0.65 & 0.67 & 0.23\\
0.9& 0.200 & 0.150 & 0.643 & 0.75 & 0.64 & 0.66 & 0.15\\
1.0& 0.070 & 0.052 & 0.588 & 0.70 & 0.61 & 0.62 & 0.05\\
\hline
\end{tabular}
\end{center}
\caption{\label{tableBK} The long-distance contributions
to $G_{27}(\mu)$, $B_K^\chi(\mu)$ and $B_K(\mu)$
as determined using the ENJL model. Also shown are
$\hat B_{K(1)}$ using the one-loop short distance
and $\hat B_{K(2)}$, $\hat B_{K(2)}^{\chi}$ using the
two-loop short distance in Table \ref{WilsonS=2}.
See Appendix \ref{AppA} for the values of the
parameters used. For the non-chiral cases one has to
add 0.09$\pm$0.03 from the nonet vs octet difference, see text.}
\end{table}
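As a quick consistency check, the first two columns of Table \ref{tableBK} should satisfy the relation $G_{27}(\mu)=4B_K^\chi(\mu)/3$ stated above. A minimal sketch in Python verifying this row by row, using the tabulated values (rounded to three decimals):

```python
# Consistency check on Table `tableBK': the tabulated columns should
# satisfy G_27(mu) = 4 B_K^chi(mu) / 3 row by row.
rows = {  # mu (GeV): (G_27, B_K^chi)
    0.3: (0.830, 0.622), 0.4: (0.737, 0.552), 0.5: (0.638, 0.478),
    0.6: (0.537, 0.402), 0.7: (0.431, 0.323), 0.8: (0.320, 0.240),
    0.9: (0.200, 0.150), 1.0: (0.070, 0.052),
}
for mu, (g27, bk_chi) in rows.items():
    # tolerance allows for rounding of the quoted table entries
    assert abs(g27 - 4.0 * bk_chi / 3.0) < 2e-3, mu
```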
We have also shown the value of $B_K$ obtained by extrapolating the
ENJL two-point function from the Euclidean domain to the kaon pole
using Chiral Perturbation Theory; this is $B_K(\mu)$.
In the latter case we have to include the correction
due to the difference between the octet and nonet case. This
correction was estimated to be about $0.09\pm0.03$ in \cite{BPBK}
and we take it as $\mu$-independent.
The other columns in Table \ref{tableBK} include various parts of
the short-distance correction. For the realistic case, with non-zero
quark masses in the long-distance contribution to $B_K(\mu)$, we show
the result with the one-loop short-distance running, $\hat B_{K(1)}$;
with the two-loop short-distance running with the scheme dependence
removed, $\hat B^{SI}_{K(2)}$, as defined in Eq. (\ref{defBKhat});
and with the exact solution of the two-loop evolution equation with
the scheme dependence removed to the same order, $\hat B^{exp}_{K(2)}$,
as defined in Eq. (\ref{defBKhatexp}).
For the latter short-distance contribution we have also shown the
result in the chiral limit, $\hat B_{K(2)}^{\chi exp}$. The rest
of the parameters used are in App. \ref{AppA}.
Notice that the matching for all cases is acceptable. The quality
of the matching for the real $\hat B_K$ is as good as for $\hat B^{exp}_{K(2)}$
since they only differ by the $\mu$-independent correction of 0.09 described
above.
So, in the chiral limit we get
\begin{equation}
0.25< \hat B_K^\chi < 0.40 \, ,
\end{equation}
while with non-zero quark masses we get
\begin{equation}
0.50 < \hat B_K^{\rm Nonet} < 0.70
\end{equation}
for the nonet case and
\begin{equation}
0.59< \hat B_K < 0.79
\end{equation}
for the real case.
Notice that the large value of the chiral symmetry breaking
ratio
\begin{equation}
1.8 < \frac{B_K}{B_K^\chi} < 2.4 \,
\end{equation}
confirms the qualitative picture obtained in \cite{BPBK}.
Finally, let us split the different contributions to the
value of $\hat B_K$ in the real case,
\begin{equation}
\label{splitBK}
\hat B_K = (0.33\pm0.10) + (0.09\pm0.02) +
(0.18\pm0.07) + (0.09\pm0.03)
\end{equation}
where the first term is the chiral limit result, the second term
the $O(p^4)$ chiral logs at $\nu=M_\rho$ \cite{BPBK}, the
third term the $O(p^4)$ counterterms and higher, also
at the same scale,
and the last term is the above-mentioned contribution
due to the $\eta_1-\eta_8$ mixing \cite{BPBK}. The error on the
chiral log contribution is from varying $\nu$ and the one on
the counterterm contribution from looking at various different
ways to extract the same counterterms as described in \cite{BPBK}
plus some extra as an estimate of the model error.
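The central values in (\ref{splitBK}) indeed add up to the midpoint of the range quoted above for the real case. A minimal numeric check (the quoted errors are not combined here, since the text does not state how they are correlated):

```python
# Central values of the four contributions in eq. (splitBK):
# chiral limit, O(p^4) chiral logs, O(p^4) counterterms and higher,
# and eta_1-eta_8 mixing.
terms = [0.33, 0.09, 0.18, 0.09]
central = sum(terms)
assert abs(central - 0.69) < 1e-9
# 0.69 is the midpoint of the quoted range 0.59 < B_K_hat < 0.79
assert abs(central - 0.5 * (0.59 + 0.79)) < 1e-9
```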
Notice that the last term in (\ref{splitBK}) is of order $(m_s-m_d)/N_c^2$
and it is not included in present lattice results. In fact, it introduces
an unknown systematic uncertainty in quenched and partially unquenched
results which is difficult to pin down, see \cite{lattice}.
So the lattice results cannot be easily compared to ours.
This term is also not included in determinations based on
lowest order CHPT, since it is higher order. Therefore, again, a direct
comparison with our results has to be done carefully.
\section{$\Delta S=1$ Transitions: Long Distance}
\label{deltaS1longdistance}
In this section we use the $\Delta S=1$ two-point functions
$\Pi^{K^+\pi^+}(q^2)$ and $\Pi^{K^0\pi^0}(q^2)$ as defined in
(\ref{two-point}). We do not use the one with $\eta_8$ since to
order $p^2$ we do not get any more information out of that
two-point function. It will provide extra information to $O(p^4)$
\cite{BPP98}.
The result to lowest order in CHPT is given by
\begin{eqnarray}
\!\!\Pi^{K^+\pi^+}(q^2)&=&-\frac{4 B_0^2 F_0^4\,C}
{(q^2-m_K^2)(q^2-m_\pi^2)}
\left[q^2\left(G_8+\frac{2}{3}G_{27}-2 G_8^\prime\right)
+m_\pi^2 G_8^\prime\right]\nonumber\\
\hskip-0.3mm\Pi^{K^0\pi^0}(q^2)&=&-\frac{2 \sqrt{2} B_0^2 F_0^4\,C }
{(q^2-m_K^2)(q^2-m_\pi^2)}
\left[q^2\left(-G_8+G_{27}+2 G_8^\prime\right)
-m_\pi^2 G_8^\prime\right]
\end{eqnarray}
Here $C$ of Eq. (\ref{defC})
has been chosen such that in the strict large-$N_c$ limit
$G_8 = G_{27} = 1$. The coupling
$G_8^\prime$ is the coefficient of the weak mass
term that does not contribute to $K\to\pi\pi$ at order $p^2$ but its
value is important at $O(p^4)$ and higher and for some
processes involving photons.
The definition of the $O(p^2)$ Lagrangian, a discussion of the
contributions from $G_8^\prime$ and further references
can be found in \cite{BPP98}.
We have calculated the two-point functions in the chiral limit to extract
the coefficient of $q^2$ and in the
case of equal quark masses for an ENJL quark mass of
0.5, 1, 5, 10, and 20
MeV in order to extract the coefficient of $m_\pi^2$.
As described in Section \ref{Xboson} we treat all coefficients $C_i(\mu_L)$
as leading order in $1/N_c$ since they are enhanced in principle by large
logarithms. We therefore obtain the matrix elements of $Q_1$, $Q_2$,
$Q_3$, $Q_4=Q_2-Q_1+Q_3$, $Q_5$ and $Q_6$ to next-to-leading order
in $1/N_c$.
\subsection{Current$\times$Current Operators}
The comments here are only valid for $Q_1, Q_2, Q_3, Q_4$, and
$Q_5$. The operator $Q_6$ is special
and is treated separately in the next subsection.
We can now use the method of \cite{BBG} with the correct routing and obtain
for the contributions to $G_8$ and $G_8^\prime$ from $Q_1$, $Q_2$,
$Q_3$ and $Q_5$ [with $\Delta_\mu\equiv \mu^2 /(16\pi^2F_0^2)$]:
\begin{eqnarray}
\label{G8CHPT}
G_{27}(\mu)[Q_1]& = &
G_{27}(\mu)[Q_2] = 1-3 \Delta_\mu + O(p^4) \, , \nonumber\\
G_8(\mu)[Q_1] & = & -\frac{2}{3}\left[1+\frac{9}{2} \Delta_\mu
+ O(p^4) \right]\, , \nonumber\\
G_8(\mu)[Q_2] & = & 1+\frac{9}{2}\, \Delta_\mu + O(p^4) \,, \nonumber\\
G_8(\mu)[Q_3] & = & 2 G_8(\mu)[Q_2] + 3 G_8(\mu)[Q_1]= 0+O(p^4)\,,\nonumber \\
G_8(\mu)[Q_5] &=& 0+O(p^4) \, , \nonumber\\
G_8^\prime(\mu)[Q_1] &=& 0 \, , \nonumber\\
G_8^\prime(\mu)[Q_2] &=& \frac{5}{6}\, \Delta_\mu + O(p^4) \, , \nonumber\\
G_8^\prime(\mu)[Q_3] & = & 2 G_8^\prime(\mu)[Q_2]=
\frac{5}{3}\, \Delta_\mu + O(p^4)\, , \nonumber \\
G_8^\prime(\mu)[Q_5] &=& -\frac{5}{3} \Delta_\mu + O(p^4)\, .
\end{eqnarray}
Here and in the remainder $G_8(\mu)[Q_i]$ stands for the long-distance
contribution of operator $Q_i$ to $G_8$ when $C_i(\mu)$ is set equal to 1.
The same definition applies to $G_8^\prime(\mu)[Q_i]$ and $G_{27}(\mu)[Q_i]$.
In Tables \ref{tablecurrent} and \ref{tableQ6} we dropped the
argument $(\mu)$ for brevity.
The results from the ENJL calculations are summarized in Table
\ref{tablecurrent}.
The numbers in columns 2 to 8 always assume
$[1+(\alpha_s (\mu)/\pi) \, r_{1, j}] \, C_j(\mu)=1$
for the relevant operator.
We get that $G_{27}(\mu)[Q_1] = G_{27}(\mu)[Q_2]
= G_{27}(\mu)$ in Table \ref{tableBK} and they are therefore not listed
again. In addition, all the other operators are
octet operators and thus do not contribute to $G_{27}$.
We also have $G_8^\prime(\mu)[Q_1]=0$:
the operator $Q_1$ only contributes via $B_K$-like contributions,
which cannot produce a term at $q^2=0$ for equal quark masses, since
this type of contribution also produces $G_{27}$, where such terms
are forbidden.
The approach to the chiral limit for the left-left
current operators $Q_1$, $Q_2$, and
$Q_3$ is such that the $B_K$-like and Penguin-like contributions
are separately chiral invariant. For the left-right current
operator $Q_5$ this is not the case
and it is only the sum of the $B_K$-like and Penguin-like contributions
that vanishes for $q^2\to 0$ in the chiral limit.
Notice that the results for small $\mu$ agree quite well
with the results just using CHPT, eq. (\ref{G8CHPT}),
but differ strongly for larger $\mu$.
The values (\ref{G8CHPT}) at $\mu=0$ correspond to the factorizable
contribution.
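This agreement can be made quantitative. A minimal sketch in Python, assuming $F_0\approx 89$ MeV for the ENJL input (the precise values used are in Appendix \ref{AppA}), comparing Eq. (\ref{G8CHPT}) for $G_8(\mu)[Q_2]$ with the ENJL entries of Table \ref{tablecurrent}:

```python
import math

F0 = 0.089  # GeV; assumed ENJL value, the precise input is in Appendix A

def delta(mu):
    """Delta_mu = mu^2 / (16 pi^2 F_0^2), as defined before eq. (G8CHPT)."""
    return mu**2 / (16.0 * math.pi**2 * F0**2)

def g8_q2_chpt(mu):
    # lowest-order CHPT: G_8(mu)[Q_2] = 1 + (9/2) Delta_mu
    return 1.0 + 4.5 * delta(mu)

enjl = {0.3: 1.271, 1.0: 2.498}  # G_8[Q_2] from Table `tablecurrent'
# good agreement at small mu, strong deviation at mu = 1 GeV
assert abs(g8_q2_chpt(0.3) - enjl[0.3]) < 0.1
assert abs(g8_q2_chpt(1.0) - enjl[1.0]) > 1.0
```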
We have also calculated the chiral logarithms that should be present in
these contributions. Subtracting them made the numerical extraction of
the coefficient of $m_\pi^2$, which determines $G_8^\prime$,
much more stable.
\begin{table}
\begin{center}
\begin{tabular}{|c|ccccccc|}
\hline
$\mu$ (GeV) & $G_8[Q_1]$&$G_8[Q_2]$&$G_8[Q_3]$& $G_8[Q_5]$ &
$G_8^\prime[Q_2]$ & $G_8^\prime[Q_3]$&$G_8^\prime[Q_5]$\\[0.2cm]
\hline
0.0 & -0.667 & 1.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\
0.3 & -0.834 & 1.271 & 0.040 & -0.041 & 0.070 & 0.140 & -0.149 \\
0.4 & -0.930 & 1.425 & 0.060 & -0.109 & 0.128 & 0.256 & -0.297 \\
0.5 & -1.029 & 1.600 & 0.113 & -0.244 & 0.206 & 0.412 & -0.530 \\
0.6 & -1.130 & 1.779 & 0.168 & -0.460 & 0.298 & 0.596 & -0.868 \\
0.7 & -1.235 & 1.962 & 0.219 & -0.769 & 0.399 & 0.798 & -1.321 \\
0.8 & -1.347 & 2.145 & 0.249 & -1.178 & 0.501 & 1.002 & -1.908 \\
0.9 & -1.467 & 2.325 & 0.249 & -1.690 & 0.598 & 1.196 & -2.634 \\
1.0 & -1.597 & 2.498 & 0.205 & -2.308 & 0.681 & 1.362 & -3.504 \\
\hline
\end{tabular}
\end{center}
\caption{\label{tablecurrent} The results for the long-distance
contributions to $G_8(\mu)$ and $G_8^\prime(\mu)$ from $Q_1$ to $Q_5$
$[Q_4=Q_2-Q_1+Q_3]$
as calculated using the ENJL model via the two-point functions.}
\end{table}
The results for $G_8(\mu)[Q_3]$ can be obtained via isospin
relations from $G_8(\mu)[Q_1]$ and $G_8(\mu)[Q_2]$.
The results for $G_8(\mu)[Q_5]$ come from a large cancellation between
the values of $G_8-2G_8^\prime$ and $G_8^\prime$ and have a
somewhat larger uncertainty than the others.
It should be noticed that in all cases the $1/N_c$ corrections to the
matrix elements are substantial.
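The isospin relations, as well as $G_8^\prime[Q_3]=2G_8^\prime[Q_2]$ from Eq. (\ref{G8CHPT}), can be verified directly on the entries of Table \ref{tablecurrent}. A minimal check on a few rows (values rounded to three decimals):

```python
# Table `tablecurrent': G_8[Q_3] = 2 G_8[Q_2] + 3 G_8[Q_1] and
# G_8'[Q_3] = 2 G_8'[Q_2], cf. eq. (G8CHPT), hold row by row.
rows = {  # mu: (G8[Q1], G8[Q2], G8[Q3], G8'[Q2], G8'[Q3])
    0.3: (-0.834, 1.271, 0.040, 0.070, 0.140),
    0.5: (-1.029, 1.600, 0.113, 0.206, 0.412),
    0.8: (-1.347, 2.145, 0.249, 0.501, 1.002),
    1.0: (-1.597, 2.498, 0.205, 0.681, 1.362),
}
for mu, (q1, q2, q3, q2p, q3p) in rows.items():
    # tolerance allows for rounding of the quoted entries
    assert abs(q3 - (2.0 * q2 + 3.0 * q1)) < 5e-3, mu
    assert abs(q3p - 2.0 * q2p) < 5e-3, mu
```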
\subsection{The $Q_6$ Operator: Factorization Problem and Results}
\label{Q6discussion}
After Fierzing, the $Q_6$ operator defined in (\ref{operators})
\begin{eqnarray}
Q_6&\equiv&-2{\displaystyle \sum_{q=u,d,s}} \,
\left[\overline{s} \, (1+\gamma_5)\,q\right](x) \,
\left[\overline q \, (1-\gamma_5) \, d\right](x)
\end{eqnarray}
gives both factorizable and non-factorizable contributions to the off-shell
two-point functions for $K^0\to \pi^0$, $K^0\to\eta_8$, and
$K^+\to \pi^+$. Here, we study for definiteness the $K^+\to\pi^+$
two-point function, $\Pi^{K^+\pi^+}(q^2)$ of Eq. (\ref{two-point}).
The factorizable contributions from $Q_6$ to this two-point function are
\begin{eqnarray}
\label{Q6fac}
\Pi^{K^+\pi^+}_{Q_6 Fact}(q)&=& 2 C_{\Delta S=1} \, C_6(\mu) \left[
\langle 0| \overline d d + \overline s s |0 \rangle \, \,
\Pi^{P_{K^-}S_{32}P_{\pi^+}}(0,q) \right. \nonumber \\
&-& \left. \Pi^P_{K^+K^+}(q) \, \Pi^P_{\pi^+\pi^+}(q)\, \right].
\end{eqnarray}
Here $C_6(\mu)$ is the Wilson
coefficient of $Q_6$, and $\Pi_P^{ii}(q)$ are the two-point functions
\begin{equation}
\Pi_P^{ii}(q) \equiv i \int {\rm d}^4 x \, e^{i q.x} \,
\langle 0| T(P_{i}^\dagger(0) P_i(x)) | 0 \rangle
= -\left[ \frac{Z_i}{q^2-m_i^2}+ Z'_{i}\right],
\end{equation}
with $P_{i}(x)$ the pseudo-scalar sources defined in (\ref{pseudosources}),
and $\Pi^{P_{K^-}S_{32}P_{\pi^+}}(p,q)$ the three-point function
\begin{eqnarray}
\Pi^{P_{K^-}S_{32}P_{\pi^+}}(p,q)&\equiv&
i^2 \int {\rm d}^4 x \, \int {\rm d}^4 y
\, e^{i(q.x-p.y)} \,
\langle 0| T(P_{K^-}(x)S_{32}(y) P_{\pi^+}(0)) | 0 \rangle \nonumber \\
\end{eqnarray}
with $S_{32}(y)$ the scalar source
\begin{eqnarray}
S_{32}(y)\equiv - \left[ \overline s \, d \right] (y)\, .
\end{eqnarray}
The last term in Eq. (\ref{Q6fac}) corresponds to the diagram
shown in Fig. \ref{figfull}(a). The first term is a contribution
which is absent in the case of current$\times$current operators.
It is depicted in Fig. \ref{figfactQ6}.
\begin{figure}
\begin{center}
\epsfig{file=figfactQ6.ps,width=5cm}
\end{center}
\caption{\label{figfactQ6} The factorizable contribution for the $Q_6$
operator that is not well defined in the chiral limit. This contribution
is not present for current$\times$current operators.}
\end{figure}
In octet symmetry, to next-to-leading order we have \cite{GL}
\begin{equation}
\frac{\displaystyle\langle 0| \overline d d + \overline s s |0 \rangle}
{ -2 B_0 F_0^2} =
1 + \frac{16}{F_0^2} \, (2 m_K^2+m_{\pi}^2)
L_{6} + \frac{4}{F_0^2} \, m_K^2 \, (2 L_8 + H_2 )
- \frac{3}{2} \mu_\pi -3 \mu_K -\frac{5}{6} \mu_{\eta_8}
\end{equation}
for the one-point function and
\begin{eqnarray}
\lefteqn{\Pi^{P_{K^-}S_{32}P_{\pi^+}}(0,q)=
- \frac{\sqrt{Z_{K^+}Z_{\pi^+}}}{(q^2-m_K^2)(q^2-m_{\pi}^2)}
\, B_0}&& \nonumber \\ &\times&
\left[ 1+ \frac{8}{F_0^2} \, (2 m_K^2+m_{\pi}^2)
(2 L_{6} - L_4) \right.
- \frac{4}{F_0^2} \, (2q^2+m_K^2+m_\pi^2) \, L_5
+ \frac{32}{F_0^2} \, q^2 \, L_8 \nonumber \\ &&
\!\!\!\!\!\!\!- {1\over6} \frac{q^2}{16 \pi^2 F_0^2} \left [
\ln\left(\frac{m_K^2}{\nu^2}\right)- {3\over2(m_K^2-m_\pi^2)}
\left( m_\pi^2 \ln\left({m_\pi^2\over m_K^2}\right) +
m_{\eta_8}^2 \ln\left({m_{\eta_8}^2\over m_K^2}\right)
\right) \right] \nonumber \\ &&
+ {1\over2} \mu_\pi + {1\over6} \mu_{\eta_8} +
{5 \over 6} \mu_K + \frac{5}{12} \,
\frac{m_\pi^2}{16 \pi^2 F_0^2} \ln\left(\frac{m_K^2}{\nu^2}\right)
\nonumber \\ &&
- \left. \frac{m_K^2+m_\pi^2}{16 \pi^2 F_0^2} \,
{1\over8(m_K^2-m_\pi^2)} \left[
m_{\eta_8}^2 \ln\left({m_{\eta_8}^2\over m_K^2}\right)
- 3 m_{\pi}^2 \ln\left({m_{\pi}^2\over m_K^2}\right)
\right] \right]
\end{eqnarray}
for the three-point function.
Here and in the remainder the constants $L_i$ are defined at a scale $\nu$,
$L_i\equiv L_i^r(\nu)$, and $\mu_i=\ln(m_i/\nu)/(16\pi^2)$ for $i=\pi,K,\eta_8$.
At next-to-leading order, the
expressions for the two-point functions were given
for the octet symmetry case in \cite{BPP98}. So the second part in
(\ref{Q6fac}) can be written as
\begin{eqnarray}
\lefteqn{\Pi^P_{K^+K^+}(q) \, \Pi^P_{\pi^+\pi^+}(q) =
\frac{\sqrt{Z_{K^+}Z_{\pi^+}}}{(q^2-m_K^2)(q^2-m_{\pi}^2)}
\, 2 F_0^2 B_0^2 }&& \nonumber \\ &\times&
\left[ 1+ \frac{8}{F_0^2}
(2 m_K^2 +m_\pi^2) (4 L_6 -L_4) \right.
+ \frac{4}{F_0^2} (m_\pi^2+m_K^2) (4 L_8 -L_5)
\nonumber\\&& +
\frac{4}{F_0^2} (2 q^2 - m_\pi^2-m_K^2) (2 L_8 -H_2)
- \left.
\frac{7}{4} \mu_\pi - \frac{5}{2} \mu_K -\frac{5}{12} \mu_{\eta_8}
\right]
\end{eqnarray}
Therefore, to this order and in octet symmetry, the factorizable contributions
to $\Pi^{K^+\pi^+}(q)$ from $Q_6$ are
\begin{eqnarray}
\lefteqn{\Pi^{K^+\pi^+}_{Q_6, Fact}(q) =
-\frac{\sqrt{Z_{K^+}Z_{\pi^+}}}{(q^2-m_K^2)(q^2-m_{\pi}^2)}
\, C_{\Delta S=1} \, C_6(\mu)
\, 16 B_0^2(\mu) }\nonumber&& \\&\times&
\Bigg\{ 2 q^2 \left\{L_5 -(2 L_8 + H_2)\right\} +
m _\pi^2 (2 L_8 + H_2)
-{1\over12 } \mu_K +{1\over16} \mu_{\eta_8} - {3\over16} \mu_\pi
\nonumber \\ &&\hskip-0.8cm
+ {1\over24} \frac{q^2}{16 \pi^2 F_0^2} \left [
\ln\left(\frac{m_K^2}{\nu^2}\right)- {3\over2(m_K^2-m_\pi^2)}
\left( m_\pi^2 \ln\left({m_\pi^2\over m_K^2}\right) +
m_{\eta_8}^2 \ln\left({m_{\eta_8}^2\over m_K^2}\right)
\right) \right] \nonumber \\ &&
- \frac{5}{48} \,
\frac{m_\pi^2}{16 \pi^2 F_0^2} \ln\left(\frac{m_K^2}{\nu^2}\right)
\nonumber \\ &&
+ \frac{m_K^2+m_\pi^2}{16 \pi^2 F_0^2} \,
{1\over32(m_K^2-m_\pi^2)} \left[
m_{\eta_8}^2 \ln\left({m_{\eta_8}^2\over m_K^2}\right)
- 3 m_{\pi}^2 \ln\left({m_{\pi}^2\over m_K^2}\right)
\right] \Bigg\}
\end{eqnarray}
As is well known, the order $p^0$ contribution from $Q_6$ vanishes
\cite{CFG86}
and the first non-trivial contribution from this operator
is of order\footnote{The order $p^2$ chiral logs were called
order $p^0/N_c$ contributions in \cite{GKPSB}.} \, $p^2$.
This happens here as an exact
cancellation between the two types of factorizable
contributions at order $p^0$. As a result there is a very large
cancellation between the two types of factorizable contributions
at order $p^2$.
We get
\begin{equation}
\label{G8Q6fact}
G_8\Bigg|_{Q_6,\mbox{Fact}} = - \left[ {5\over 3} \right] \, 16 \,
C_6(\mu) \frac{B_0^2(\mu)}{F_0^2} \, \left[ L_5
- {3\over 16} \frac{1}{16 \pi^2 }
\left[ 2 \ln\left(\frac{m_L}{\nu}\right) + 1 \right]
\right] \nonumber \\
\end{equation}
and
\begin{equation}
\label{G8pQ6fact}
G_8^{\prime}\Bigg|_{Q_6,\mbox{Fact}} =
- \left[ {5\over 3} \right] \, 8 \,
C_6(\mu) \frac{B_0^2(\mu)}{F_0^2} \, \left[ (2L_8+H_2)
-\frac{5}{24} \, \frac{1}{16 \pi^2 }
\left[ 2 \ln\left(\frac{m_L}{\nu}\right) + 1\right] \right]
\end{equation}
The mass $m_L$ above has to be understood as an infrared cut-off,
since we have taken the chiral limit $m_L = m_\pi=m_K=m_{\eta_8} \to 0$.
The factorizable contribution to $G_8$ and $G_8'$ from $Q_6$
is therefore not well defined: it has an infrared divergence.
The divergence is related to the divergence in the
pion scalar radius in the chiral limit. Since $Q_6$ is an
$8_L\times 1_R$
operator we know from CHPT in the non-leptonic sector that to lowest order
in the counting there, no infrared divergences are present in the
two-point function $\Pi^{K^+\pi^+}(q^2)$. These infrared
divergences are therefore
spurious and must be cancelled by another contribution.
The only possibility is that they cancel
against the non-factorizable contribution, also coming from $Q_6$.
We will see below that this is indeed the case.
Notice also that since $G_8$ and $G_8'$ are $O(p^2)$ couplings,
Eqs. (\ref{G8Q6fact}) and (\ref{G8pQ6fact})
are exact for the factorizable contributions.
Unfortunately, the non-factorizable contributions can only be calculated
at present in a model dependent way. In the $1/N_c$ expansion,
the infrared divergent part of $G_8$ and $G_8^\prime$
can in fact be calculated analytically using the $O(p^2)$ CHPT Lagrangian.
We can therefore subtract it. It follows from the diagrams
(b), (c), (e), and (f) of Fig. \ref{figBK},
using CHPT for the $X$-boson vertices, which is valid for
small $\mu$. For equal masses $m_K^2 = m_\pi^2 = m_{\eta_8}^2 = m_L^2$
we obtain
\begin{eqnarray}
\label{Q6NF-CHPT}
\lefteqn{\Pi^{K^+\pi^+}(q^2) = \frac{2B_0^2 F_0^2}{(q^2-m_K^2)(q^2-m_\pi^2)}
C_{\Delta S=1} C_6(\mu)4B_0^2(\mu)}&&\nonumber\\&\times&
\Bigg\{-\frac{1}{6}(q^2-5m_L^2)\int^\mu
\frac{d^4r_E}{(2\pi)^4}\frac{1}{(r_E^2+m_L^2)^2} \nonumber\\&&
-\frac{5}{6} \int^\mu
\frac{d^4r_E}{(2\pi)^4}\left[\frac{1}{((r_E+q_E)^2+m_L^2)}
-\frac{1}{(r_E^2+m_L^2)}\right] \Bigg\}\, . \nonumber \\
\end{eqnarray}
The non-factorizable (NF) part above in the limit $m_L\to0$ leads to
\begin{equation}
\label{G8Q6NF-IR}
G_8{\Bigg|_{\mbox{$Q_6$,NF-$O(p^2)$}}}
= - \left[ {5\over 3} \right] \, 16 \,
C_6(\mu) \frac{B_0^2(\mu)}{F_0^2} \, \frac{3}{16}\, \frac{1}{16\pi^2}
\left[2 \ln\left(\frac{m_L}{\mu}\right)+\frac{13}{18}\right]
\end{equation}
and
\begin{equation}
\label{G8pQ6NF-IR}
G_8^{\prime}{\Bigg|_{\mbox{$Q_6$,NF-$O(p^2)$}}} =
- \left[ {5\over 3} \right] \, 8 \,
C_6(\mu) \frac{B_0^2(\mu)}{F_0^2} \, \frac{5}{24} \, \frac{1}{16\pi^2}
\left[2 \ln\left(\frac{m_L}{\mu}\right)+1\right]
\end{equation}
There is a very large cancellation between the factorizable
parts in (\ref{G8Q6fact}) and (\ref{G8pQ6fact}) and the non-factorizable
part in (\ref{G8Q6NF-IR}) and (\ref{G8pQ6NF-IR})
both for the IR divergent part and for the large $1/N_c$ constant
part. Summing up the exact factorizable result and the infrared divergent
non-factorizable part we get
\begin{equation}
\label{G8Q6}
G_8\Bigg|_{\mbox{$Q_6$, $O(p^2)$}} = - \left[ {5\over 3} \right] \, 16 \,
C_6(\mu) \frac{B_0^2(\mu)}{F_0^2} \, \left[ L_5(\nu)
- \frac{1}{16 \pi^2 }\left(
{3\over 8} \ln\left(\frac{\mu}{\nu}\right)
+\frac{5}{96}\right)\right]
\end{equation}
and
\begin{equation}
\label{G8pQ6}
G_8^{\prime}\Bigg|_{\mbox{$Q_6$, $O(p^2)$}} =
- \left[ {5\over 3} \right] \, 8 \,
C_6(\mu) \frac{B_0^2(\mu)}{F_0^2} \, \left[ (2L_8+H_2)(\nu)
-\frac{5}{12} \, \frac{1}{16 \pi^2 }
\ln\left(\frac{\mu}{\nu}\right) \right]
\end{equation}
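Explicitly, both (\ref{G8Q6fact}) and (\ref{G8Q6NF-IR}) carry the common
prefactor $-\left[\frac{5}{3}\right]16\,C_6(\mu)B_0^2(\mu)/F_0^2$, and their
brackets combine as
\begin{eqnarray}
\lefteqn{L_5
-\frac{3}{16}\,\frac{1}{16\pi^2}\left[2\ln\left(\frac{m_L}{\nu}\right)+1\right]
+\frac{3}{16}\,\frac{1}{16\pi^2}\left[2\ln\left(\frac{m_L}{\mu}\right)
+\frac{13}{18}\right]}&&\nonumber\\
&=&L_5-\frac{1}{16\pi^2}\left[\frac{3}{8}\ln\left(\frac{\mu}{\nu}\right)
+\frac{3}{16}\cdot\frac{5}{18}\right]\,,
\end{eqnarray}
with $\frac{3}{16}\cdot\frac{5}{18}=\frac{5}{96}$, so the $m_L$ dependence
cancels and (\ref{G8Q6}) follows. In the $G_8^\prime$ case the constant
terms cancel completely, leaving only the logarithm in (\ref{G8pQ6}).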
It is then a
non-trivial check of the validity of the model used that the
non-factorizable part indeed contains the correct
infrared logarithms needed to cancel the factorizable ones.
The ENJL model used here passes this check.
Notice that in (\ref{G8Q6}) and (\ref{G8pQ6}) all dependence
on the IR scale $m_L$ drops out, as it should,
and the scale in the logarithm becomes $\ln(\mu/\nu)$.
So, in the chiral limit and at next-to-leading order
in $1/N_c$, the dependence on the short-distance
scale $\mu$ is matched against the scale $\nu$ at which the CHPT
constants are defined.
The result above shows that at least the $B_6$ parameter defined
as usual as the ratio of the non-factorizable contributions over the
vacuum saturation result (VSA) is not well defined.
It is therefore necessary to give another definition for this $B$
parameter. The cancellation of the infrared divergence found
here is probably also the source for the large cancellations found
between the factorizable and non-factorizable contributions
in earlier work. Notice also that the $1/N_c$ finite term
in (\ref{G8Q6fact}) is {\em larger} than the leading-in-$1/N_c$
result and has the {\em opposite} sign. It is clear that it can be
dangerous not to have an
analytical cancellation of both the IR divergent part
and the $1/N_c$ constant, as we have here. This can also explain
some discrepancies among the $B_6$ parameter results in the literature:
$B_6$ is just not well defined.
In practice, we remove the exact infrared
logarithm from our ENJL calculation by adding the exact,
model-independent factorizable expressions (\ref{G8Q6fact})
and (\ref{G8pQ6fact}) to the ENJL results.
In this way we also remove the IR divergence of the non-factorizable
part exactly. We choose the reference scale $\mu = M_\rho$
for this subtraction and generate the mass $m_L$
by introducing small current quark masses.
The remaining factorizable part, i.e.
the one from the constants $L_5$, $L_8$, and $H_2$, is then evaluated
at a scale $\nu = M_\rho$.
This yields for the leading-in-$1/N_c$ contribution
to $G_8$ and $G_8'$ from $Q_6$
\begin{equation}
\label{numfact}
G_8^{ENJL}{\Bigg|_{\mbox{$N_c$}}} = (-38\pm8) \, C_6(\mu)
\quad{\rm and} \quad
G_8^{\prime ENJL}{\Bigg|_{\mbox{$N_c$}}} = (-9\pm14) \, C_6(\mu)
\end{equation}
using
\begin{eqnarray}
L_5(M_\rho)&=&(1.4\pm0.3)\cdot 10^{-3} \nonumber \\
(2L_8+H_2)(M_\rho) &=& (0.7\pm1.1)\cdot10^{-3} \, .
\end{eqnarray}
We have used here the value of $B_0$ and $F_0$ from
the ENJL model. The value of $2L_8+H_2$ is derived from the
canonical value for $L_8(M_\rho) = (0.9\pm0.3)\cdot10^{-3}$ and the
value for $(2L_8-H_2)(M_\rho)=
(2.9\pm1.0)\cdot10^{-3}$ from \cite{BPR}.
The large error for $G_8^{\prime ENJL}$ in (\ref{numfact}) is due to the
large cancellation in the value of $2L_8+H_2$.
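The central values in (\ref{numfact}) can be reproduced from the quoted inputs. A minimal numeric sketch, assuming $F_0 \approx 89$ MeV for the ENJL value (an assumption here; the precise inputs are in Appendix \ref{AppA}):

```python
# Leading-in-N_c Q_6 contributions, from eqs. (G8Q6fact)/(G8pQ6fact):
# G_8|_{N_c}  = -(5/3) * 16 * (B_0^2/F_0^2) * L_5          (per unit C_6)
# G_8'|_{N_c} = -(5/3) *  8 * (B_0^2/F_0^2) * (2 L_8 + H_2)
B0_ENJL = 2.80        # GeV, value quoted in the text
F0 = 0.089            # GeV, assumed ENJL value (see Appendix A)
L5 = 1.4e-3           # L_5(M_rho)
twoL8_plus_H2 = 0.7e-3  # (2 L_8 + H_2)(M_rho)
g8 = -(5.0 / 3.0) * 16.0 * (B0_ENJL**2 / F0**2) * L5
g8p = -(5.0 / 3.0) * 8.0 * (B0_ENJL**2 / F0**2) * twoL8_plus_H2
assert abs(g8 - (-38.0)) < 8.0    # quoted: (-38 +/- 8) C_6(mu)
assert abs(g8p - (-9.0)) < 14.0   # quoted: (-9 +/- 14) C_6(mu)
```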
Notice that the size of the subtracted terms in
$G_8^{ENJL}$ is about
$+40 \, C_6(\mu)$ for $m_L^2=m_\pi m_K$ and varies very fast
with $m_L$.
Our calculation agrees with that of \cite{GKPSB} when the
appropriate identifications are made. The large cancellation between the
factorizable and non-factorizable parts was also observed there.
It was, however, not identified as an exact cancellation of infrared
divergences. In fact, at the order the calculation was done
in \cite{GKPSB} the cancellation of the $1/N_c$ factorizable
and non-factorizable pieces is very large, and
in their language\footnote{As we said
$B_6$ is not well defined. We come back to this question in Section
\ref{conclusions}.} one should get $B_6$ very near to one.
They get indeed $B_6$ very close to one.
The non-factorizable non-divergent part has
corrections from higher order terms in the chiral
Lagrangian, which we calculate numerically using the ENJL model.
We have included them; these therefore account for the numerical differences
between our results and the ones in \cite{GKPSB}.
Before we present the results for $G_8(\mu)$
and $G_8'(\mu)$ from $Q_6$ from our ENJL calculation we need to
include one additional remark.
The vector and axial-vector currents used in the previous section are
uniquely identified both in the ENJL model and in QCD. There is
however no guarantee as remarked in \cite{BP94} that the same is true
for the scalar and pseudo-scalar densities. Here we renormalize
the ENJL scalar $S(x)$ and pseudo-scalar $P(x)$ densities
by the values of the quark condensates in the chiral limit:
\begin{equation}
\label{renormB0}
S_{\mbox{ENJL}} = S_{\mbox{QCD}}(\mu)
\frac{\langle\bar{q}q\rangle_{\mbox{ENJL}}}
{\langle\bar{q}q\rangle_{\mbox{QCD}}(\mu)}.
\end{equation}
There is an analogous equation for the pseudo-scalar density.
This factor should be remembered when using the Wilson
coefficients from our results.
The values we have used are $B_0^{QCD}(1 {\rm GeV})=(1.75\pm0.40)$ GeV
in the $\overline{MS}$ scheme \cite{BPR,DN98}, and $B_0^{ENJL}=2.80$ GeV
\cite{BBR}. We have also included the QCD scale dependence of
the $B_0$ parameter to two-loops.
We show in Table \ref{tableQ6} the results
for $G_8(\mu)[Q_6]$ and $G_8^\prime(\mu)[Q_6]$ without the
renormalization factor of Eq. (\ref{renormB0}), columns labelled ENJL, and
including the
renormalization factor of Eq. (\ref{renormB0}) both to one-loop,
columns labelled $^{(1)}$,
and two-loops in QCD, columns labelled $^{(2)}$.
Notice that $B_0(\mu)=-\langle\bar{q}q\rangle(\mu)/F_0^2$; this factor
is responsible for most of the running of $Q_6$ \cite{EdR89}.
\begin{table}
\begin{center}
\begin{tabular}{|c|cccccc|}
\hline
$\mu$ (GeV) &$G_8[Q_6]$&
$G_8^\prime[Q_6]$&
$G_8[Q_6]$&
$G_8^\prime[Q_6]$&
$G_8[Q_6]$ &
$G_8^\prime[Q_6]$\\[0.2cm]
& ENJL & ENJL & $^{(1)}$ &$^{(1)}$&$^{(2)}$&$^{(2)}$\\
\hline
0.3 & -118 & -69 &&&& \\
0.4 & -103 & -53 &&&& \\
0.5 & -93 & -41 & -21.1 & -9.3 & -6.4 & -2.8 \\
0.6 & -88 & -32 & -23.9 & -8.7 & -14.7 & -5.3 \\
0.7 & -84 & -25 & -25.9 & -7.7 & -20.1 &-6.0 \\
0.8 & -82 & -20 & -27.9 & -6.8 & -24.5 & -6.0 \\
0.9 & -82 & -17 & -30.0 & -6.2 & -28.4 & -5.9 \\
1.0 & -83 & -15 & -32.4 & -5.9 & -32.4 &-5.9 \\
\hline
\end{tabular}
\end{center}
\caption{\label{tableQ6} Results for the long-distance contributions
to $G_8$ and $G_8^\prime$ from $Q_6$ as calculated using the ENJL model via
the two-point functions for the non-factorizable part and adding
the model independent factorizable part in (\ref{G8Q6fact}) and
(\ref{G8pQ6fact}).
The last 4 columns include the renormalization of scalar and
pseudo-scalar densities to one-loop $^{(1)}$ and two-loops $^{(2)}$
in QCD. The short-distance
anomalous dimension of $B_0(\mu)$ blows up
at scales below 0.5 GeV.}
\end{table}
\section{The Order $p^2$ Full $\Delta S=1$ Couplings}
\label{fullresults}
We use here the results of \cite{one-loop} and \cite{two-loops}
for the $\Delta S=1$ QCD anomalous dimensions to one and two loops,
respectively, to obtain final values.
The solution for the Wilson coefficients is given in
\cite{two-loops,BurasReviews} at two loops
using an expansion in $\alpha_s$. Whenever
values of $\Lambda_{QCD}$ are needed in the
$\overline{MS}$ scheme with three flavours,
we use the formulae of \cite{PDG}, expanded in $\alpha_s$,
starting from $\alpha_s(M_\tau)=0.334\pm0.006$
with $M_\tau=1.77705 \pm 0.00030$ GeV \cite{PDG}
and get $\Lambda^{(1)}_{QCD}=$ 220 MeV to one-loop
and $\Lambda^{(2)}_{QCD}=$ 400 MeV to two-loops. The
values of the Wilson coefficients we use
for $\Delta S=1$ \cite{two-loops,BurasReviews}
and for $\Delta S=2$ \cite{S=2twoloops}
are in the Appendix.
We also include there the scheme-dependent constants $r_1$ needed for the
two-loop short-distance running in the NDR scheme we use.
We now show in Tables \ref{resultsg27g8g8p} and \ref{resultsg27g8g8p2}
the results
for the coefficients $G_{27}$, $G_8$ and $G_8^\prime$.
The numbers in brackets refer to keeping only $Q_1$, $Q_2$, and $Q_6$.
Most of the difference is due to $Q_4$.
The matching for the one-loop running of the Wilson coefficients is very good.
We obtain a value of $G_8\approx4.3$ and $G_8^\prime\approx0.8$.
If we look inside the numbers,
for $G_8$ the contribution via $Q_1$ is fairly constant over the whole
range but there is a distinct shift from $Q_2$ to $Q_6$ for lower
values of $\mu$. The operator $Q_2$ remains the most important over the
entire range of $\mu$ considered. For $G_8^\prime$ similar comments
apply, except that $Q_1$ does not contribute.
Typically $G_{27}$ is somewhat low compared to the experimental number,
and the matching is not as good as in the octet sector.
Notice, though, that it becomes somewhat more stable
in the range between 0.5 and 0.8 GeV, as one expects from
the validity of the low-energy model.
When two-loop running is taken into account in the
NDR scheme, the numbers do not change much. The effect of the
$r_1$ constants in this scheme
is, however, very large and causes a significant
shift in the numbers.
The numbers for the octet case are somewhat stable
in the range $\mu=0.8$ to $1.0$ GeV, but that is where the ENJL model
is expected to start deviating from the true behaviour.
\begin{table}
\begin{center}
\begin{tabular}{|c|ccc|}
\hline
$\mu$ (GeV)& $G_{27}$ & $G_{8}$ & $G_8^\prime$\\
\hline
0.5 & 0.399 & 4.45 (4.55) & 0.739 (0.761)\\
0.6 & 0.351 & 4.26 (4.34) & 0.686 (0.710)\\
0.7 & 0.291 & 4.21 (4.28) & 0.703 (0.727)\\
0.8 & 0.221 & 4.25 (4.30) & 0.767 (0.789)\\
0.9 & 0.141 & 4.33 (4.37) & 0.847 (0.866)\\
1.0 & 0.050 & 4.44 (4.46) & 0.923 (0.935)\\
\hline
\end{tabular}
\end{center}
\caption{\label{resultsg27g8g8p} The final results for the three
$O(p^2)$ couplings using the one-loop Wilson coefficients. The numbers
in brackets refer to using $Q_1$, $Q_2$, and $Q_6$ only.}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|ccc|}
\hline
$\mu$ (GeV) & $G_{27}$ & $G_{8}$ & $G_8^\prime$\\
\hline
0.5 & 0.182 & 11.20 (12.4)& 1.60 (1.75)\\
0.6 & 0.249 & 7.30 (7.8) & 1.13 (1.22)\\
0.7 & 0.230 & 6.30 (6.6) & 0.99 (1.10)\\
0.8 & 0.184 & 5.88 (6.2) & 0.97 (1.08)\\
0.9 & 0.121 & 5.73 (5.9) & 0.99 (1.11)\\
1.0 & 0.044 & 5.61 (5.8) & 1.03 (1.14)\\
\hline
\end{tabular}
\end{center}
\caption{\label{resultsg27g8g8p2} The final results for the three
$O(p^2)$ couplings using the two-loop Wilson coefficients
with the inclusion of the $r_1$ factors. The numbers
in brackets refer to using $Q_1$, $Q_2$, and $Q_6$ only.}
\end{table}
Notice that at large $N_c$, $G_8$ and $G_{27}$
are both 1. Adding $1/N_c$ corrections,
$G_{27}$ decreases by a non-negligible factor
of around two to three, while the $G_8$ coupling gets
enhanced up to $G_8=6.2\pm0.7$.
The short-distance enhancement is almost a factor
of two for the whole range of $\mu$. The rest of the
enhancement, namely a factor of two to three, is mainly due to the
large value of the long-distance contribution
to the Penguin-like coupling $c$.
The bulk of the long distance part enhancement of the coupling $c$
comes from $Q_2$ and $Q_6$. There is also a
small contribution to $G_8$ in the right direction from the $B_K$-like
coupling $b$ from both $Q_2$ and $Q_1$.
The final results for the ratio $|A_0/A_2|$ at
$O(p^2)$ (\ref{ratioA0A2p2}) are in Table \ref{resultsA0/A2}.
The stability we get for the one-loop short-distance running is not bad,
and there is a minimum around 0.7 GeV for the two-loop running.
In general we get too large values for this ratio compared
to the experimental value of 16.4 (\ref{I=1/2p2}), due to the
somewhat small value of $G_{27}$ we obtain.
\begin{table}
\begin{center}
\begin{tabular}{|c|cc|}
\hline
$\mu$ (GeV) & One-Loop & Two-Loops \\
\hline
0.5 & 14.3 & 78.5 \\
0.6 & 15.6 & 37.5 \\
0.7 & 18.6 & 35.0 \\
0.8 & 24.6 & 40.8 \\
0.9 & 39.2 & 60.1 \\
1.0 & 113.2 & 162.4 \\
\hline
\end{tabular}
\end{center}
\caption{\label{resultsA0/A2} The final results for the
ratio $|A_0/A_2|$ to $O(p^2)$ using the one-loop short-distance
running and the full scheme independent two-loops short-distance running.}
\end{table}
In order to show the improvement over previous results and the quality
of the matching, we have plotted in Figure \ref{figmatch}, for $G_{27}(\mu)$,
the lowest order result Eq. (\ref{G8CHPT}), the ENJL result for the
same quantity, and the final result for $G_{27}$ with the two-loop
short distance included. We have similarly plotted $G_8[Q_1](\mu)$ and
$G_8[Q_2](\mu)$, both from the lowest order result Eq. (\ref{G8CHPT})
and from the ENJL model, as well as the full result for $G_8^\prime$
when the two-loop running is included properly. Similar improvements of
Eq. (\ref{G8CHPT}) and (\ref{G8Q6}) can be seen by plotting the other
results with the corresponding ones from Tables \ref{tablecurrent},
\ref{tableQ6}, and \ref{resultsg27g8g8p2}.
\begin{figure}
\begin{center}
\epsfig{file=figmatch.ps,width=13cm,angle=-90}
\end{center}
\caption{\label{figmatch} The improvement of the behaviour with $\mu$
of several quantities. Shown are the lowest order result, the ENJL result,
and the result when the short-distance running is added.}
\end{figure}
In summary, the results we get for $G_8$, $G_{27}$, and $G_8'$ are
\begin{eqnarray}
\label{finalnumbers}
4.3 < &G_8 &< 7.5 \nonumber\\
0.8 < & G_8^\prime & < 1.1\nonumber\\
0.25 < &G_{27}& < 0.40
\end{eqnarray}
The bounds have been chosen by looking at both the one-loop and two-loop
results in the stability regions in
Tables \ref{resultsg27g8g8p} and \ref{resultsg27g8g8p2}.
{}From (\ref{finalnumbers}) we can extract the values
\begin{eqnarray}
-0.75 &< b<& -0.50 \nonumber \\
1.7 &< c <& 3.7
\end{eqnarray}
and we have fixed $a=1$ as explained before.
For the $\Delta I=1/2$ rule we get
\begin{eqnarray}
15 < \left| \frac{A_0}{A_2} \right|^{(2)} < 40
\end{eqnarray}
to order $p^2$.
We find a huge enhancement due to the $c$-coupling; it is therefore
interesting to see what other calculations predict for this coupling.
One model where this coupling can be easily extracted is
the effective action approach \cite{PdeR91}.
To order $1/N_c$ one gets \cite{PdeR91,BP93}
\begin{eqnarray}
c&=& C_2(\mu) - 1 + \Re e \, C_4(\mu) +
C_2(\mu) \, \frac{4 \pi\alpha_s(\mu)}{N_c}
\, (2H_1+L_{10})(\mu) \nonumber \\
&-& 16 \frac{B_0^2(\mu)}{F_0^2}\,
L_5(\mu) \left[ \Re e \, C_6(\mu) + C_2(\mu) \,
\frac{4 \pi\alpha_s(\mu)}{N_c}
\, (2H_1+L_{10})(\mu) \right] \nonumber \\ &+& O(1/N_c^2)
\end{eqnarray}
with $\mu=M_\rho$, $\alpha_s(M_\rho)=0.70$, $B_0(M_\rho)=1.4$ GeV,
and $(2H_1+L_{10})(M_\rho)= -0.015$ \cite{BBR},
we get
\begin{eqnarray}
c&=&0.95 \pm 0.40 \, .
\end{eqnarray}
The reason why this value of $c$ is smaller than the results of the
present work is that the long-distance mixing between $Q_2$ and $Q_6$
is not well treated in this model. In fact, this contribution
is model dependent already at $O(1/N_c)$: for instance, it appears
in terms of the short-distance value $\alpha_s(M_\rho)$.
It is clear that at such scales one has to treat the long-distance
contributions in a hadronic model, and the $\alpha_s(M_\rho)$
above will then appear enhanced. Nevertheless, the extra contributions
to $c$ coming from the operator $Q_2$ \cite{PdeR91,BP93},
both of short-distance origin, namely the term $C_2(\mu)-1$,
and of long-distance origin, namely the part proportional to
$2H_1+L_{10}$, give some insight into the potentially large value of $c$.
We cannot easily compare our result with those of \cite{Trieste}:
their method of calculating the low-energy part has no obvious connection
to the short-distance evolution, so their results cannot be directly
compared to ours. The lattice results \cite{lattice}
are obtained at rather high values of the quark masses and thus also
cannot be simply compared to our results.
As stated above, we agree with the calculations of \cite{GKPSB}
for low values of the scale $\mu$, where agreement is expected, but we
deviate significantly at higher scales. The earlier Dortmund group
results \cite{Dortmund} are thus also expected to receive significant
corrections. The attempts at calculating via more inclusive modes
\cite{PdeR}
have very large QCD corrections \cite{PdeR91,Pich}. We see the remnant
of this in the large corrections from the $r_1$ terms, see
Appendix \ref{AppA}. The short-distance factors are in fact
one of the bigger remaining sources of uncertainty.
\section{Results and Conclusions}
\label{conclusions}
The main results of this paper are the results for the $O(p^2)$ couplings
$G_8$, $G_{27}$, and $G_8^\prime$ as a function of cut-off
$\mu$ for the various
operators $Q_j$, $j=1,\cdots, 6$, as given
in Tables \ref{tableBK}, \ref{tablecurrent}, and \ref{tableQ6}.
In addition we have corrected our earlier results for $B_K(\mu)$ for
the routing problem as described in Section \ref{routing} and presented
those in Table \ref{tableBK} as well.
The other main result of this paper is the observation that
in the chiral limit the factorizable contribution from $Q_6$ is not well
defined, due to an infrared divergence,
and we expect that similar problems will show up for the
current-current operators when we try to calculate higher order coefficients
in the weak chiral perturbation theory Lagrangian. We showed that
the total contribution of $Q_6$, obtained after adding the non-factorizable
and factorizable parts, is nevertheless well defined. We also expect that
the same solution will hold for coefficients of higher order operators
in the chiral Lagrangian. A corollary of this observation is
that the use of $B$-factors in the chiral limit,
as is common in other treatments of weak
non-leptonic operators, is not possible in the way they are usually defined,
namely as the whole result normalized to the VSA result.
One could use the leading result in $1/N_c$, i.e. keeping only the
$L_5$, $L_8$, and $H_2$ terms, as an appropriate starting
point for normalizing the $B_6$-parameter in the chiral limit,
but this is difficult to implement for lattice gauge theory calculations.
In fact, what people have used in practice \cite{BurasReviews,GKPSB}
for the VSA, i.e. for the
factorizable part of $Q_6$, has been just the large $N_c$ part.
Of course, this is not in agreement with what is done for other $B$
parameters of current $\times$ current operators,
like $B_K$, where the $1/N_c$ factorizable part
is always included in the VSA result.
Given the problems we encountered and the
importance of the $B$ parameters for normalizing results from different
techniques,
we believe that either a new consistent definition
of the $B$ parameters should be sought, or the use of
$B$ parameters should be abandoned altogether in favor of quoting
matrix element values.
We also emphasize that
caution should be taken when combining results from different methods for
the factorizable and non-factorizable contributions.
When we combine our main results with the Wilson coefficients at one loop
we get nicely stable results. Using the Wilson coefficients at two loops with
the inclusion of the $r_1$ factors, which, as we argued in
Section \ref{Xboson}, is necessary, we obtain relatively stable values
for $G_8$ and the coefficient of the weak mass term $G_8'$ with
\begin{eqnarray}
4.3 < &G_8 &< 7.5 \nonumber\\
0.8 < & G_8^\prime & < 1.1\nonumber\\
0.25 < &G_{27}& < 0.40
\end{eqnarray}
The main uncertainty here is in fact coming from the short-distance
coefficients for the octet case and from the long-distance for the
27-plet case. For the $G_{27}$ coupling we obtain a somewhat small
value compared to the experimental one. This translates into the
following results for the $\Delta I=1/2$ rule in the chiral limit
\begin{equation}
15< \left| \frac{A_0}{A_2} \right|^{(2)} < 40\,.
\end{equation}
These results are somewhat large.
Nevertheless, we would like to emphasize that we have obtained these
results from
a next-to-leading in $1/N_c$ long-distance calculation and we have
passed from the large $N_c$ result $| A_0/A_2 |_{N_c}=\sqrt 2$
to values around 20 to 35. One
can certainly expect non-negligible $1/N_c^2$ corrections to our
results, but the huge enhancement is clearly there. We would also like to
stress that we have no free input in our calculation: all parameters have
been determined elsewhere.
{}From the results above we have also obtained the couplings
$b=G_{27}-1$ and $c=(3G_8+2G_{27})/5-1$
\begin{eqnarray}
-0.75 &< b<& -0.50 \nonumber \\
1.7 &< c <& 3.7 \, .
\end{eqnarray}
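As a quick arithmetic cross-check (the script and variable names are ours, not part of the original analysis), inserting the bounds of (\ref{finalnumbers}) into the definition $c=(3G_8+2G_{27})/5-1$ reproduces the quoted range for $c$:

```python
# Check of c = (3*G8 + 2*G27)/5 - 1 using the quoted bounds
# 4.3 < G8 < 7.5 and 0.25 < G27 < 0.40 (variable names are ours).
def c_coupling(G8, G27):
    return (3.0 * G8 + 2.0 * G27) / 5.0 - 1.0

c_min = c_coupling(4.3, 0.25)  # both couplings at their lower bounds
c_max = c_coupling(7.5, 0.40)  # both couplings at their upper bounds
print(round(c_min, 1), round(c_max, 1))  # -> 1.7 3.7
```

The lower bound $b>-0.75$ follows in the same way from $b=G_{27}-1$ with $G_{27}>0.25$.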
This, then, is one of our main results: the $\Delta I=1/2$
rule enhancement comes from the
Penguin-like topologies $(c)$ in Figure \ref{figfull}, both from
$Q_2$, which dominates for high values of $\mu$,
and from $Q_6$, which dominates for small values of $\mu$.
In addition we obtain a value for the chiral limit quantity
$\hat B_K^\chi$ as defined in Eq. (\ref{defBKhat})
\begin{equation}
0.25<\hat B_K^\chi <0.40
\end{equation}
and the value for the $\hat B_K$ parameter
in the real case
\begin{equation}
0.59<\hat B_K < 0.79 \, .
\end{equation}
These two results confirm the ones in \cite{BPBK}.
Notice that the different short-distance contributions
{}from $M_W$ down to the charm quark mass to $G_{27}$ and
$\hat B_K^\chi$ have produced
\begin{equation}
\frac{\hat B_K^\chi}{G_{27}} \simeq 1.1 \,
\end{equation}
instead of $3/4$.
So we have obtained quite good matching for $G_8$, $G_8^\prime$,
and $\hat B_K$ for values of $\mu$ around $0.7-1.0$ GeV,
and for $G_{27}$ for values of $\mu$ around $0.6-0.8$ GeV.
We obtained values
for the three $O(p^2)$ parameters not too far from the
experimental ones, together with a quantitative
understanding of the origin of the $\Delta I=1/2$ enhancement.
Notice that the values of the cut-off at which we obtain
our results are not extremely low, as they are in other $1/N_c$
approaches; still, one would like the matching region to be larger
and to lie at somewhat larger values of the cut-off.
\acknowledgments
This work was partially supported by the European Union
TMR Network $EURODAPHNE$ (Contract No ERBFMX-CT98-0169) and by the
Swedish Science Foundation.
The work of J.P. was supported in part
by CICYT (Spain) and by Junta de Andaluc\'{\i}a under Grants Nos.
AEN-96/1672 and FQM-101, respectively. J.P. would also like to thank the
CERN Theory Division and the Department of Theoretical Physics at Lund
University (Sweden), where part of this work was done, for their hospitality.
We thank Elisabetta Pallante for participation in the early parts of this work
and Eduardo de Rafael for discussions.
\section{\@startsection{section}{1}{\z@}{3.5ex plus 1ex minus
.2ex}{2.3ex plus .2ex}{\large\bf}}
\def\arabic{section}.{\arabic{section}.}
\def\large\arabic{section}.\arabic{subsection}.{\large\arabic{section}.\arabic{subsection}.}
\def#1}{}
\def\FERMILABPub#1{\def#1}{#1}}
\def\ps@headings{\def\@oddfoot{}\def\@evenfoot{}
\def\@oddhead{\hbox{}\hfill
\makebox[.5\textwidth]{\raggedright\ignorespaces --\thepage{}--
\hfill }}
\def\@evenhead{\@oddhead}
\def\subsectionmark##1{\markboth{##1}{}}
}
\ps@headings
\catcode`\@=12
\relax
\def\r#1{\ignorespaces $^{#1}$}
\def\figcap{\section*{Figure Captions\markboth
{FIGURECAPTIONS}{FIGURECAPTIONS}}\list
{Fig. \arabic{enumi}:\hfill}{\settowidth\labelwidth{Fig. 999:}
\leftmargin\labelwidth
\advance\leftmargin\labelsep\usecounter{enumi}}}
\let\endfigcap\endlist \relax
\def\tablecap{\section*{Table Captions\markboth
{TABLECAPTIONS}{TABLECAPTIONS}}\list
{Table \arabic{enumi}:\hfill}{\settowidth\labelwidth{Table 999:}
\leftmargin\labelwidth
\advance\leftmargin\labelsep\usecounter{enumi}}}
\let\endtablecap\endlist \relax
\def\reflist{\section*{References\markboth
{REFLIST}{REFLIST}}\list
{[\arabic{enumi}]\hfill}{\settowidth\labelwidth{[999]}
\leftmargin\labelwidth
\advance\leftmargin\labelsep\usecounter{enumi}}}
\let\endreflist\endlist \relax
\catcode`\@=11
\def\marginnote#1{}
\def#1}{}
\def\FERMILABPub#1{\def#1}{#1}}
\def\ps@headings{\def\@oddfoot{}\def\@evenfoot{}
\def\@oddhead{\hbox{}\hfill
\makebox[.5\textwidth]{\raggedright\ignorespaces --\thepage{}--
\hfill }}
\def\@evenhead{\@oddhead}
\def\subsectionmark##1{\markboth{##1}{}}
}
\ps@headings
\relax
\def\firstpage#1#2#3#4#5#6{
\begin{document}
\begin{titlepage}
\nopagebreak
\title{\begin{flushright}
\vspace*{-1.0in}
{\normalsize NUB--#1 #2}\\[-9mm]
{\normalsize hep-th/9811224}\\[14mm]
\end{flushright}
{#3}}
\author{\large #4 \\ #5}
\maketitle
\vskip -7mm
\nopagebreak
\def1.0{1.0}
\begin{abstract}
{\noindent #6}
\end{abstract}
\vfill
\begin{flushleft}
\rule{16.1cm}{0.2mm}\\[-3mm]
$^{\star}${\small Research supported in part by
the National Science Foundation under grant
PHY--96--02074.}\\
November 1998
\end{flushleft}
\thispagestyle{empty}
\end{titlepage}}
\newcommand{{\mbox{\sf Z\hspace{-3.2mm} Z}}}{{\mbox{\sf Z\hspace{-3.2mm} Z}}}
\newcommand{{\mbox{I\hspace{-2.2mm} R}}}{{\mbox{I\hspace{-2.2mm} R}}}
\def\stackrel{<}{{}_\sim}{\stackrel{<}{{}_\sim}}
\def\stackrel{>}{{}_\sim}{\stackrel{>}{{}_\sim}}
\newcommand{\raisebox{0.085cm}{\raisebox{0.085cm}
{\fbox{\rule{0cm}{0.07cm}\,}}}
\newcommand{\partial_{\langle T\rangle}}{\partial_{\langle T\rangle}}
\newcommand{\partial_{\langle\bar{T}\rangle}}{\partial_{\langle\bar{T}\rangle}}
\newcommand{\alpha^{\prime}}{\alpha^{\prime}}
\newcommand{M_{\scriptscriptstyle \!S}}{M_{\scriptscriptstyle \!S}}
\newcommand{M_{\scriptscriptstyle \!P}}{M_{\scriptscriptstyle \!P}}
\newcommand{\int{\rm d}^4x\sqrt{g}}{\int{\rm d}^4x\sqrt{g}}
\newcommand{\left\langle}{\left\langle}
\newcommand{\right\rangle}{\right\rangle}
\newcommand{\varphi}{\varphi}
\newcommand{\bar{a}}{\bar{a}}
\newcommand{\,\bar{\! S}}{\,\bar{\! S}}
\newcommand{\,\bar{\! X}}{\,\bar{\! X}}
\newcommand{\,\bar{\! F}}{\,\bar{\! F}}
\newcommand{\bar{z}}{\bar{z}}
\newcommand{\,\bar{\!\partial}}{\,\bar{\!\partial}}
\newcommand{\bar{T}}{\bar{T}}
\newcommand{\bar{\tau}}{\bar{\tau}}
\newcommand{\bar{U}}{\bar{U}}
\newcommand{\bar\Theta}{\bar\Theta}
\newcommand{\bar\eta}{\bar\eta}
\newcommand{\bar q}{\bar q}
\newcommand{\bar{Y}}{\bar{Y}}
\newcommand{\bar{\varphi}}{\bar{\varphi}}
\newcommand{Commun.\ Math.\ Phys.~}{Commun.\ Math.\ Phys.~}
\newcommand{Phys.\ Rev.\ Lett.~}{Phys.\ Rev.\ Lett.~}
\newcommand{Phys.\ Rev.\ D~}{Phys.\ Rev.\ D~}
\newcommand{Phys.\ Lett.\ B~}{Phys.\ Lett.\ B~}
\newcommand{\bar{\imath}}{\bar{\imath}}
\newcommand{\bar{\jmath}}{\bar{\jmath}}
\newcommand{Nucl.\ Phys.\ B~}{Nucl.\ Phys.\ B~}
\newcommand{{\cal F}}{{\cal F}}
\renewcommand{\L}{{\cal L}}
\newcommand{{\cal A}}{{\cal A}}
\newcommand{{\cal M}}{{\cal M}}
\newcommand{{\cal N}}{{\cal N}}
\newcommand{{\cal T}}{{\cal T}}
\newcommand{{\rm AdS}}{{\rm AdS}}
\renewcommand{\Im}{\mbox{Im}}
\newcommand{{\rm e}}{{\rm e}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\gsi}{\,\raisebox{-0.13cm}{$\stackrel{\textstyle
>}{\textstyle\sim}$}\,}
\newcommand{\lsi}{\,\raisebox{-0.13cm}{$\stackrel{\textstyle
<}{\textstyle\sim}$}\,}
\date{}
\firstpage{3192}{}
{\large\sc Remarks on Two-Loop Free Energy in ${\cal N}\,{=}\,\,4$
Supersymmetric\\[-5mm]
Yang-Mills Theory at Finite Temperature$^\star$}
{A. Fotopoulos and
T.R. Taylor}
{\normalsize\sl Department of Physics, Northeastern
University, Boston, MA 02115, U.S.A.}
{The strong coupling behavior of finite temperature free energy in
${\cal N}\,{=}\,\,4$ supersymmetric $SU(N)$ Yang-Mills theory has been recently
discussed by Gubser, Klebanov and Tseytlin in the context of AdS-SYM
correspondence. In this note, we focus on the weak coupling behavior.
As a result of a two-loop computation we obtain, in the large $N$ 't Hooft limit,
$F(g^2N\to 0 )\approx -\frac{\pi^2}{6}N^2V_3T^4\,(1-\frac{3}{2\pi^2}g^2N)$.
Comparison with the strong coupling expansion
provides further indication that free energy is a smooth monotonic
function of the coupling constant.}
\setcounter{section}{0}
{}Finite temperature effects break supersymmetry \cite{gg}. By
switching on non-zero
temperature one can interpolate between supersymmetric and non-supersymmetric
theories. For instance in gauge theories, one can interpolate between the
supersymmetric case and a theory which contains pure Yang-Mills (YM) as the
massless sector, with some additional thermal excitations.
In the infinite temperature limit, the time
dimension decouples and, at least formally, one obtains a
non-supersymmetric Euclidean gauge theory.
If no phase transition occurs when the YM gas is heated up, then
the dynamics of realistic gauge theories such as QCD
is smoothly connected to their supersymmetric relatives.
The Maldacena conjecture \cite{mald}, which relates the large $N$ limit of
${\cal N}{=}\,4$ supersymmetric $SU(N)$ Yang-Mills theory (SYM) to
type IIB superstrings
propagating on ${\rm AdS}_5\times S^5$, provides a very promising
starting point towards QCD. On the superstring side,
non-zero temperature can be simulated by including Schwarzschild
black holes embedded in AdS spacetime \cite{wit},
which describe the near-horizon geometry
of non-extremal D-brane solutions \cite{hs}. The classical geometry
of black holes with Hawking temperature $T$ does indeed encode correctly
many qualitative features of large $N$ gauge theory heated up
to the same temperature. At the quantative level though, the comparison
between SYM and supergravity becomes rather subtle because the supergravity
side merely provides the strong coupling expansion for physical quantities
while most of finite temperature computations in SYM are limited
to the perturbative, weak coupling expansion. In this note, we comment
on the computation of free energy.
The SYM thermodynamics was first compared with the thermodynamics of
D-branes in ref.\cite{gkp}. The free energy $F$ obtained in ref.\cite{gkp}
describes the limit of infinitely strongly coupled SYM theory.
More recently,
the AdS-SYM correspondence has been employed for computing
the subleading term in the strong coupling expansion
(in $\lambda\equiv g^2N$)
\cite{gkt,ty}:\footnote{In this context, the gauge coupling $g$ is related to
the type IIB superstring coupling $g_s$: $g^2=2\pi g_s$.}
\begin{equation}
{}F(\lambda\to\infty) ~\approx~ -\frac{\pi^2}{6}N^2V_3T^4\,\big[\,\frac{3}{4}+
\frac{45}{32}\zeta(3)(2\lambda)^{-3/2}\,\big]\ . \label{inf}
\end{equation}
The comparison with the limiting free-theory value,
\begin{equation}
{}F(\lambda=0 )~=~ -\frac{\pi^2}{6}N^2V_3T^4\ , \label{free}\end{equation}
indicates that the exact answer has the form:
\begin{equation}
{}F(\lambda)~=~-\frac{\pi^2}{6}N^2V_3T^4f(\lambda)\ ,\label{exact}\end{equation}
where the function $f(\lambda)$ interpolates smoothly between the asymptotic
values $f(0)=1$ and $f(\infty)=3/4$ \cite{gkp}. The sign of the subleading
correction ${\cal O}[(2\lambda)^{-3/2}]$ in eq.(\ref{inf})
indicates that $f$ decreases monotonically from 1 to 3/4.
The question whether free energy interpolates smoothly between
weak and strong coupling limits deserves careful investigation,
especially in view of the recent claim in favor
of a phase transition at finite $\lambda$ \cite{miao}.\footnote{
It is beyond the scope of this note to review the arguments of ref.\cite{miao},
however we would like to point out that they involve certain
assumptions on the convergence properties of perturbative expansions.
The proposed $2\pi^2$ convergence radius does not seem realistic
after one looks at the two-loop correction, see
eq.(\ref{two}).}
There is, however, a place to look for further hints on the
properties of free energy:
the subleading terms in the weak coupling expansion.
Surprisingly enough, they cannot be found in the existing literature.
In order to fill this gap, we calculated the two-loop correction to
free energy. The result is:
\begin{equation}
{}F(\lambda\to 0 )~\approx~ -\frac{\pi^2}{6}N^2V_3T^4
\,[1-\frac{3}{2\pi^2}\lambda\, ]\ . \label{two}\end{equation}
The (relative) negative sign of the two-loop correction provides
further indication that the free energy is a smooth, monotonic
function of the 't Hooft coupling $\lambda$. In the following part
of this note we present some details of the two-loop computation leading to
eq.(\ref{two}).
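To get a feel for how the two asymptotic expansions bracket the interpolating function, one can evaluate them numerically (an illustrative sketch only, and ours: neither expansion is reliable at intermediate coupling, and $\zeta(3)$ is summed directly):

```python
import math

zeta3 = sum(1.0 / n**3 for n in range(1, 100000))  # zeta(3) ~ 1.2021

def f_weak(lam):
    # two-loop weak-coupling expansion: f = 1 - 3*lam/(2*pi^2)
    return 1.0 - 3.0 * lam / (2.0 * math.pi ** 2)

def f_strong(lam):
    # supergravity strong-coupling expansion: f = 3/4 + (45/32)*zeta(3)*(2*lam)^(-3/2)
    return 0.75 + (45.0 / 32.0) * zeta3 * (2.0 * lam) ** -1.5

print(round(f_weak(1.0), 3))    # -> 0.848
print(round(f_strong(50.0), 3)) # -> 0.752
```

Both expansions stay inside the interval $[3/4,1]$ in their respective domains of validity, consistent with a smooth, monotonic $f(\lambda)$.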
For the purpose of diagrammatic computations, it is convenient
to use the ${\cal N}\,{=}\,1$ decomposition of ${\cal N}\,{=}\,4$ SYM \cite{sym},
with one Majorana fermion corresponding to
the gaugino, and the three remaining Majorana fermions combined
with scalars in three ${\cal N}\,{=}\,1$ chiral multiplets.
The two-loop diagrams are displayed in Figure 1, together with
the combinatorial/statistics factors.
\begin{figure}
\[
\psannotate{\psboxto(0cm;5cm){ftest.eps}}{}
\]\vskip -1cm
\caption{\em Two-loop diagrams contributing to free energy. Gauge
bosons are represented by wiggly lines, ghosts by dotted lines,
fermionic (Majorana) components of chiral multiplets by dashed lines,
scalars by solid lines and ${\cal N}\,{=}\, 1$ gauginos by double-dashed lines.}
\vskip 5mm
\end{figure}
The two-loop integrals can be readily performed by using
techniques described in refs.\cite{kap}. In the table below,
we list results for individual diagrams:\footnote{Diagrams
are computed in the Feynman gauge.}
\begin{equation}
\begin{array}{|c|c|c|c|c|c|c|c|c|c|}
\hline
{}~~A~~ & ~~B~~ & ~~C~~& ~~D~~& ~~E~~ &~~F~~ & ~~G~~ & ~~H~~& ~~I~~
&~~J~~\\ \hline
-\frac{9}{4}\alpha& 3\alpha& \frac{1}{4}\alpha& \frac{3}{4}\beta&
\frac{1}{4}\beta&
-\frac{9}{2}\alpha&\frac{3}{2}\beta&\frac{3}{2}\beta& 12\alpha&
\frac{15}{2}\alpha\\
\hline\end{array}\nonumber\end{equation}
where
\begin{eqnarray}
\alpha &=& g^2c_Ad\,V_3\bigg({T^4\over 144}+{T^2\over 12(2\pi)^3}\int
{d^3\vec{k}\over |\vec{k}|}\bigg)\ ,\\
\beta &=& g^2c_Ad\,V_3\bigg({5T^4\over 144}-{T^2\over 3(2\pi)^3}\int
{d^3\vec{k}\over |\vec{k}|}\bigg)\ ,
\end{eqnarray}
with $d$ denoting the dimension of the gauge group and $c_A$ the Casimir
operator in the adjoint representation. Note that individual diagrams
contain ultraviolet divergences. After combining all
contributions, we obtain the (finite) result:
\begin{equation}
{}F_{\rm 2-loop}=16\alpha+4\beta=\frac{1}{4}g^2c_Ad\,V_3 T^4\ .\label{f2}
\end{equation}
Specified to the case of $SU(N)$, with $d=N^2-1$ and $c_A=N$,
in the leading large $N$ order the above result yields eq.(\ref{two}).
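The finiteness of eq.(\ref{f2}) can be verified by pure arithmetic on the tabulated weights: the $\alpha$ and $\beta$ coefficients of diagrams A--J sum to 16 and 4, the $T^4$ pieces combine to $1/4$, and the UV-divergent integrals cancel. A sketch in exact fractions (the list layout and names are ours):

```python
from fractions import Fraction as F

# alpha and beta weights of diagrams A..J, read off from the table
alpha_w = [F(-9, 4), F(3), F(1, 4), F(0), F(0), F(-9, 2), F(0), F(0), F(12), F(15, 2)]
beta_w  = [F(0), F(0), F(0), F(3, 4), F(1, 4), F(0), F(3, 2), F(3, 2), F(0), F(0)]
print(sum(alpha_w), sum(beta_w))  # -> 16 4, i.e. F_2loop = 16*alpha + 4*beta

# alpha = T^4/144 + I/12 and beta = 5*T^4/144 - I/3 (in units of g^2 c_A d V3,
# with I the UV-divergent integral): check the combination 16*alpha + 4*beta
coef_T4 = 16 * F(1, 144) + 4 * F(5, 144)
coef_I  = 16 * F(1, 12) + 4 * F(-1, 3)
print(coef_T4, coef_I)  # -> 1/4 0  (the divergence cancels)

# relative size versus |F_free| = (pi^2/6) N^2 V3 T^4 at large N:
rel = F(1, 4) * 6  # coefficient of lambda/pi^2
print(rel)         # -> 3/2, i.e. F ~ -(pi^2/6) N^2 V3 T^4 (1 - 3*lambda/(2*pi^2))
```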
Finally, we would like to make a few remarks on the structure of higher-order
perturbative corrections. The computation of higher-order terms requires
reorganizing the perturbation theory to account for Debye screening
and yields terms non-analytic in
$\lambda$ such as ${\cal O}(\lambda^{3/2})$ and
${\cal O}(\lambda^2\ln\lambda)$ \cite{kap,gpy}. The full ${\cal O}(\lambda^2)$
term requires a three-loop calculation \cite{arn} and a full accounting
of Debye screening at three loops would produce the ${\cal O}(\lambda^{5/2})$
terms. However, perturbation theory is believed to be incapable of
pushing the calculation to any higher order due to infrared problems
associated with magnetic
confinement and the presence of non-perturbative ${\cal O}(\lambda^3)$
contributions \cite{kap,gpy}.
It would be very interesting to analyze from this point of view
the strong coupling expansion.
\noindent {\bf Acknowledgments}
Most of this work was done while the authors were visiting
Laboratoire de Physique Th\'eorique et Hautes Energies
at l'Universit\'e Paris-Sud, Orsay.
We are grateful to Pierre Bin\'etruy and all members of LPTHE
for their kind hospitality.
\section{Introduction : two Pomerons ?}
In a recent paper \cite{donnadrie} the
conjecture was made that not one, but two Pomerons could
coexist. This proposal is based on a description of data for the proton
singlet structure function $F\left( x,Q^{2}\right) $ in a wide range of
$x$ $\left( x<0.7\right) $ and all available $Q^{2}$ values (including also
the charm structure function and elastic photoproduction of $J/\Psi $ on the
proton). The singlet structure function reads\begin{equation}
F\left( x,Q^{2}\right) =\sum\limits_{i=0}^2F_{i}\left( x,Q^{2}\right) =\sum\limits_{i=0}^2f_{i}\left( Q^{2}\right) x^{-\epsilon_{i}},
\label{1}
\end{equation}
corresponding \cite {donnadrie,parametrization}
to the sum of three contributions, namely a ``hard'' Pomeron contribution with a fitted intercept $\epsilon_{0}=.435, $ a ``soft'' Pomeron exchange, as seen in soft hadronic
cross-sections with a fixed intercept $\epsilon_{1}=0.0808, $ and a secondary Reggeon singularity necessary to describe the larger $x$ region with intercept fixed at $\epsilon_{2}=-.4525.$ The ``hard'' Pomeron is in particular needed to describe the strong rise
of $F$ at small $x$ observed at HERA \cite{adloff}. The key observation
of Ref.~\cite{donnadrie} is that agreement with the data can be obtained by
assuming opposite
$Q^{2}$-behaviours for the two Pomeron contributions in formula (1). Indeed, for $Q^{2}>10 \ GeV ^{2},$ $%
f_{0}\left( Q^{2}\right) $ is increasing and $f_{1}\left( Q^{2}\right) $
decreasing (the precise
parametrizations \cite{parametrization} are given in a Regge theory framework).
This picture is suggestive of a situation where the ``soft'' and
``hard'' Pomerons are not one and the same object but two separate Regge singularities with rather different intercept and $Q^2$ behaviour. The ``hard'' Pomeron may be expected to be governed by perturbative QCD evolution equations.\ Indeed, at
small $x,$ a Regge singularity is expected to occur as a solution of the
BFKL equation \cite{bfkl} corresponding to the resummation of the leading $%
\left( \bar{\alpha}\ln 1/x\right) ^{n}$ terms in the QCD perturbative
expansion, where $\bar{\alpha}= \frac {\alpha N_c} {\pi}$ is the (small) value of the coupling
constant of QCD. The intercept
value is predicted to be
$\epsilon_{0}= 4\bar{\alpha}\ln 2.$
It is interesting to note that the phenomenological fit for the hard Pomeron
in Ref.\cite{donnadrie} corresponds to a reasonable value for $\bar{\alpha}%
\left( \approx .15\right) .$ The goal of the present paper is
to show that the global conformal
invariance of the BFKL equation \cite{lipatov} leads to a natural mechanism
generating both the ``hard'' and ``soft'' Pomeron singularities.
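As a trivial numerical aside (ours, not part of Ref.\cite{donnadrie}), inverting $\epsilon_0 = 4\bar\alpha\ln 2$ for the fitted intercept $\epsilon_0 = 0.435$ indeed gives a coupling close to the quoted $0.15$:

```python
import math

eps0 = 0.435                              # fitted "hard" Pomeron intercept
alpha_bar = eps0 / (4.0 * math.log(2.0))  # invert eps0 = 4 * alpha_bar * ln 2
print(round(alpha_bar, 3))                # -> 0.157, close to the quoted ~0.15
```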
The plan of the paper is as follows: in Section {\bf 2}, using the BFKL equation and the set of its conformal-invariant components, we exhibit the
phenomenon generating sliding singularities. In
Section {\bf 3}, we explicitly describe the two-Pomeron
configuration obtained from the ``sliding'' mechanism. In section {\bf 4} we confront the resulting effective singularities
with the parametrization of \cite{donnadrie} and
discuss some expectations from non-perturbative corrections at small $Q^2.$ Finally, in section {\bf 5}, we discuss some phenomenological and theoretical implications of our QCD two-Pomeron mechanism.
\section{The ``sliding'' phenomenon}
\quad Let us start with the solution of the BFKL
equation expressed in terms of an expansion over the whole set of conformal spin components \cite{lipatov}. For structure functions, one may write (using the notation $Y=\ln 1/x$):
\begin{equation}
F\left( Y,Q^{2}\right) =\sum\limits_{p=0}^{\infty }F_{p}\left(
Y,Q^{2}\right) =\sum\limits_{p=0}^{\infty }\int_{1/2-{\rm i}\infty }^{1/2+%
{\rm i}\infty }d\gamma \left( \frac{Q}{Q_{0}}\right) ^{2\gamma }e^{
\bar{\alpha}\chi _{p}\left( \gamma \right) Y}f_{p}\left( \gamma
\right) , \label{2}
\end{equation}
with
\begin{equation}
\chi _{p}\left( \gamma \right) =2\Psi \left( 1\right) -\Psi \left(
p+1-\gamma \right) -\Psi \left( p+\gamma \right) \label{3}
\end{equation}
and
$Q_{0}$ being some scale characteristic of the target (onium, proton,
etc.). $\chi _{p}(\gamma)$
is the BFKL kernel eigenvalue corresponding to the $SL(2,{\cal C})$
unitary representation \cite {lipatov} labelled by the conformal spin
$p.$ It is to be noted that the $p=0$ component
corresponds to the dominant ``hard'' BFKL Pomeron. Usually the $%
p\neq 0$ components, required by conformal invariance\footnote {In the following, we will stick to integer values of $p$ since half-integer spin components exist but do not contribute to elastic cross-sections \cite{navelet}.} but subleading by powers of the energy, are omitted with respect to the leading logs QCD resummation. They are commonly
neglected in the phenomenological discussions. We shall see that they may play an important r\^ ole, however.
The couplings of the BFKL components to external sources are taken into account by
the weights $f_{p}\left( \gamma \right) $ in formula (2). Little is known
about these functions and we shall treat them as much as possible in a model independent way. For instance, they should obey some general
constraints, such as a behaviour when $\gamma \rightarrow \infty $ ensuring
the convergence of the integral in (2). We will see that some extra analyticity constraints will appear in the context of the two Pomeron problem\footnote{%
Note that a general constraint on the coupling of the BFKL kernel to
external particles is coming from gauge invariance \cite{lipatov}. We checked
that this constraint is rather weak in our case, and not relevant to the discussion.}.
The key observation leading to the sliding phenomenon comes from considering the successive derivatives of the kernels $\chi _{p}\left(
\gamma \right) ,$ written in the following convenient form:
\begin{eqnarray}
\chi _{p}\left( \gamma \right) &\equiv &\sum\limits_{\kappa =0}^{\infty }%
\left\{\frac{1}{p+\gamma +\kappa } +\frac{1}{p+1-\gamma +\kappa }-\frac{2}{\kappa +1}\right\}
\nonumber \\
&& \nonumber \\
\chi _{p}^{\prime }\left( \gamma \right) &\equiv &-\mathop{\displaystyle\sum}\limits_{\kappa }
\left\{\frac{1}{\left( p+\gamma +\kappa \right) ^{2}}-\frac{1}{%
\left( p+1-\gamma +\kappa \right) ^{2}}\right\} \nonumber \\
&& \nonumber \\
\chi _{p}^{\prime \prime }\left( \gamma \right) &\equiv &2\mathop{\displaystyle\sum}\limits_{\kappa }
\left\{\frac{1}{\left( p+\gamma +\kappa \right) ^3}+\frac{1}{%
\left( p+1-\gamma +\kappa \right) ^{3}}\right\}\ .
\label{4}
\end{eqnarray}
As obvious from (4), the symmetry $\gamma \Longleftrightarrow 1\!-\!\gamma $
leads to a maximum at $\gamma =1/2$ for all $p,$ and thus to a saddle-point
of expression (2) at $\Re e(\gamma) =1/2$ for ultra asymptotic values of $Y.$ The saddle-point approximation gives
\begin{equation}
F\left( x,Q^{2}\right) \vert_{Y\rightarrow \infty }
\approx
\left( \frac{Q}{Q_{0}}\right)\mathop{\displaystyle\sum}\limits_{p=0}^{\infty }\frac {f_{p}\left( \frac 12\right)}
{\sqrt {\pi \bar{\alpha} \chi _{p}^{\prime \prime }\left( \frac12\right) Y}}\ e^{\bar \alpha \chi _{p}\left( \frac12\right) Y}.
\label{5}
\end{equation}
The $Q$-dependent factor corresponds to a common anomalous dimension $\frac 12$ for all $p.$ Note that the known $Q$-dependent ``$k_T$-diffusion'' factor is absent in this ultra-asymptotic limit.
The series of functions of $Y$ is such that
only the first ($p=0$) term has an intercept $ \bar \alpha\chi _{p}\left( \frac 12 \right) $ larger than $0.$ Indeed,
\begin{align}
\chi _{0}\left(\frac 12\right) &=4\ln 2 \approx 2.77 \nonumber\\
\chi _{1}\left(\frac 12\right) &=\chi _{0}\left( \frac12\right)
-4\approx -1.23 \nonumber \\
\chi _{p+1}\left( \frac12\right) &<\chi _{p}\left( \frac12\right)
<...<0 ,\ \ p\geq 1.
\label{6}
\end{align}
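These values can be recovered directly from the series representation (4): at $\gamma=1/2$ the two sums coincide, so $\chi_p(1/2)=\sum_\kappa\left[2/(p+\kappa+1/2)-2/(\kappa+1)\right]$. A short numerical sketch (the truncation kmax is ours):

```python
def chi_half(p, kmax=1_000_000):
    # chi_p(1/2) from the series (4); combined terms fall off like 1/kappa^2
    return sum(2.0 / (p + k + 0.5) - 2.0 / (k + 1) for k in range(kmax))

print(round(chi_half(0), 3))  # -> 2.773, i.e. 4*ln(2)
print(round(chi_half(1), 3))  # -> -1.227, i.e. 4*ln(2) - 4
```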
This ultra asymptotic result is the reason why the conformal spin components with $p>0$
are generally neglected or implicitly taken into account by ordinary secondary Regge singularities with intercept less than $0.$
However, at large enough values of $Q^2$, and even for very large $Y,$ a
sliding phenomenon moves the singularities corresponding to these conformal
spin components away, leading to a behaviour very different from (5). Indeed, the sliding mechanism is already known \cite {bartelo,npr} to generate the diffusion factor of the leading $p=0$ component, but it has an even more important effect on the higher spin components, as we shall now discuss.
The sliding mechanism is based on the fact that $\chi _{p}^{\prime \prime}\left(\frac 12\right),$ the second derivative of the kernels at the asymptotic saddle-point value, becomes in absolute value very small when $p \geq 1,$ in such a way that the real saddle-points governing the
integrals of formula (2) are considerably displaced from $\gamma =1/2.$
Indeed, considering the expansions (4), one has:
\begin{align}
\chi _{0}^{\prime \prime }\left( \frac 12\right) &=28\zeta (3)\approx 33.6\nonumber
\\
\chi _{1}^{\prime \prime }\left( \frac 12\right) &=28\zeta \left( 3\right)
-32\approx 1.66 \nonumber \\
1.66>...&>\chi _{p}^{\prime \prime }\left(
\frac 12\right) >\chi _{p+1}^{\prime \prime }\left(
\frac 12\right) >0 ,\ p \geq 2\ .
\label{7}
\end{align}
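The numbers quoted in (6) and (7) can be cross-checked numerically: the derivatives in (4) are those of the eigenvalue $\chi _{p}\left( \gamma \right) =2\psi (1)-\psi (p+\gamma )-\psi (p+1-\gamma ),$ where $\psi$ is the digamma function. The short stdlib-Python sketch below (the helper implementations and tolerances are ours) reproduces the quoted values:

```python
import math

def digamma(x):
    """psi(x): recurrence up to x >= 20, then the standard asymptotic expansion."""
    s = 0.0
    while x < 20.0:
        s -= 1.0 / x
        x += 1.0
    return s + math.log(x) - 1.0 / (2.0 * x) - 1.0 / (12.0 * x * x) + 1.0 / (120.0 * x ** 4)

def inv_cube_sum(x):
    """sum_{k>=0} 1/(x+k)^3: recurrence up to x >= 20, then Euler-Maclaurin tail."""
    s = 0.0
    while x < 20.0:
        s += 1.0 / x ** 3
        x += 1.0
    return s + 1.0 / (2.0 * x * x) + 1.0 / (2.0 * x ** 3) + 1.0 / (4.0 * x ** 4)

def chi(p, g):       # chi_p(gamma) = 2 psi(1) - psi(p+gamma) - psi(p+1-gamma)
    return 2.0 * digamma(1.0) - digamma(p + g) - digamma(p + 1.0 - g)

def chi_pp(p, g):    # second derivative, i.e. the series (4)
    return 2.0 * (inv_cube_sum(p + g) + inv_cube_sum(p + 1.0 - g))

# values quoted in (6)
assert abs(chi(0, 0.5) - 4.0 * math.log(2.0)) < 1e-9          # 4 ln 2 ~ 2.77
assert abs(chi(1, 0.5) - (4.0 * math.log(2.0) - 4.0)) < 1e-9  # ~ -1.23
assert chi(2, 0.5) < chi(1, 0.5) < 0.0

# values quoted in (7): 28 zeta(3) ~ 33.66 and 28 zeta(3) - 32 ~ 1.66
assert abs(chi_pp(0, 0.5) - 33.66) < 0.01
assert abs(chi_pp(1, 0.5) - 1.66) < 0.01
assert 0.0 < chi_pp(2, 0.5) < chi_pp(1, 0.5)
```

In particular $\chi _{0}^{\prime \prime }(1/2)=28\zeta (3)$ follows from $\sum_{\kappa \geq 0}(\kappa +1/2)^{-3}=7\zeta (3).$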
For the $p=0$ component, the corresponding integral in (\ref {2}) can be evaluated by a saddle-point in the vicinity of $\gamma = \frac 12,$ and gives the diffusion factor $\exp\left(-\ln^2( Q/Q_0)^{2}/\left(2\bar \alpha Y \chi _{0}^{\prime \prime }\left( \frac 12\right)\right)\right) .$ Considering the rapid decrease, by a factor of about 20, of the modulus of the second derivative for $p=1,$ it is easy to realize that, for components $p \geq 1 ,$ it is no longer justified to evaluate the integrals in the vicinity
of $\gamma = \frac 12,$ the real saddle-point being away from this value. We shall make the correct evaluation in the next section.
\section{The ``sliding'' mechanism}
Let us consider the $F_{p}$ component of the summation (2) in the following way: For
each value of $\left( Y,\ln \frac{Q^{2}}{Q_{0}^{2}}\right) ,$ we compute the {\it effective intercept} (in units of $\bar\alpha$) ${\displaystyle{\partial \ln F_{p} \over \bar \alpha \partial Y}}$ displayed as a function of the {\it effective anomalous dimension} ${\displaystyle{\partial \ln F_{p} \over \partial \ln Q^{2}}}=\gamma_{c}.$ Our observation is that, for any weight $f_{p} \left( \gamma \right)
$ in formula (2), the resulting set of points accumulates near the curve $\chi _{p}\left( \gamma \right).$ This result is valid provided a saddle-point dominates the
integral.
The proof goes as follows: if a saddle-point $\gamma _{c}$ dominates the
integral (2) for $F_{p}\left( Y,Q^{2}\right) ,$ the saddle-point
equation
\begin{equation}
\frac{\partial \ln F_{p}}{\partial
\gamma _{c}}= 2\ln \left( Q/Q_0\right) ^{2}+\bar{\alpha}Y\chi _{p}^{\prime }\left( \gamma
_{c}\right) +\left[ \ln f_{p}\left( \gamma _{c}\right) \right] ^{\prime }=0 \label{8}
\end{equation}
is verified and the resulting integral is approximated by
\begin{equation}
F_{p}\left( Y,Q^{2}\right) \approx \frac {
\left( Q/Q_{0}\right) ^{2\gamma _{c}}e^{\bar{\alpha}\chi
_{p}\left( \gamma _{c}\right) Y}\ f_{p}\left( \gamma _{c}\right)}
{
\left\{2\pi \left( \bar{\alpha}Y \chi
_{p}^{\prime \prime }\left( \gamma _{c}\right) +\left[ \ln f_{p}\left(
\gamma _{c}\right) \right] ^{\prime \prime }\right)\right\}^{\frac 12}}\ .
\label{9}
\end{equation}
Neglecting in (9) derivatives of the slowly
varying saddle-point prefactor $\left\{...\right\}^{-\frac 12},$ one may write
\begin{eqnarray}
\frac{d\ln F_{p}}{\bar{\alpha}dY} &=&\frac{\partial \ln F_{p}}{%
\partial \gamma _{c}}\times \frac{d\gamma _{c}}{\bar{\alpha}dY}+\frac{%
\partial \ln F_{p}}{\bar{\alpha}\partial Y}=\frac{\partial \ln F_{p}}{\bar{%
\alpha}\partial Y }\equiv \chi _{p}\left( \gamma _{c}\right) \nonumber
\\
&& \nonumber \\
\frac{d\ln F_{p}}{d\ln Q^{2}} &=&\frac{\partial \ln F_{p}}{\partial
\gamma _{c}}\times \frac{d\gamma _{c}}{d \ln Q^{2}}+\frac{\partial \ln F_{p}}{%
\partial \ln Q^{2}}=\frac{\partial \ln F_{p}}{\partial \ln Q^{2}}\equiv
\gamma _{c},
\label{10}
\end{eqnarray}
where one uses the saddle-point equation (8) to eliminate the
contributions due to the implicit dependence $\gamma _{c}\left(
Y,Q^{2}\right) .$ This proves our statement.
Interestingly enough, the property (10) is valid for any weight $f_{p}\left( \gamma \right),$ and thus can be used to characterize the generic behaviour of the expression (2). The only condition is the validity of a saddle-point approximation which is realized whenever $Q^2$ or $Y$ is large enough.
Let us discuss a relevant example. In Figs.1,2 we have plotted the result of the numerical
integration in expression (2) for $p=0,1,2,$ choosing $f_{p}\left( \gamma \right) \equiv \frac{1}{\cos \frac{\pi\gamma }{4}}.$ This weight is chosen in such a way that the convergence properties of the integrands are ensured and no extra singularity is generated for $\vert\gamma\vert < 2.$ Other weights with the same properties were checked to give the same results.
For comparison we also display the functions $\chi _{0}\left( \gamma \right) ,\chi_{1}\left( \gamma \right) $ and $%
\chi _{2}\left( \gamma \right) .$ Note that we have also included for the discussion the auxiliary branches of $\chi _{0}\left( \gamma \right)$ for the intervals
$-1<\gamma<0$ and $-2<\gamma<-1.$
The results both for $p=0$ (white circles) and $p=1,2$ (black circles) are displayed in Fig.1 for a fixed large value of total rapidity $Y=10$ and various values of $\ln {\displaystyle{Q^{2} / Q_{0}^{2}}},$ while in Fig.2 they are shown for a fixed value of $\ln {\displaystyle{Q^{2} / Q_{0}^{2}}}=4$ and various $Y.$ Indeed,
it is seen on these plots that the saddle-point property (10) is verified, even for the auxiliary branches\footnote{In the case of the two auxiliary branches
considered in Figs.1,2, we have used an integration contour shifted by one and two units to the left, respectively, in order to separate the appropriate contributions from the leading ones.}. The observed small systematic shift of the numerical results w.r.t. the
theoretical curves $\chi (\gamma)$ is well under control. It is related to the saddle-point prefactor in formula (9).
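The sliding pattern and the saddle-point property (10) can also be reproduced with a short numerical sketch (our own stdlib-Python construction, not the code behind Figs.1,2). It uses the exponent $\gamma \ln (Q^{2}/Q_{0}^{2})+\bar{\alpha}\chi _{p}\left( \gamma \right) Y+\ln f_{p}\left( \gamma \right) ,$ consistent with the $\left( Q/Q_{0}\right) ^{2\gamma _{c}}$ factor of (9), the weight $f_{p}\left( \gamma \right) =1/\cos \frac{\pi \gamma }{4}$ of the text, and the illustrative values $\bar{\alpha}=0.15,$ $Y=10,$ $\ln (Q^{2}/Q_{0}^{2})=4$:

```python
import math

ABAR, Y0, T0 = 0.15, 10.0, 4.0   # alpha-bar, Y and t = ln(Q^2/Q0^2)

def digamma(x):
    """psi(x): recurrence up to x >= 20, then asymptotic expansion."""
    s = 0.0
    while x < 20.0:
        s -= 1.0 / x
        x += 1.0
    return s + math.log(x) - 1.0 / (2.0 * x) - 1.0 / (12.0 * x * x) + 1.0 / (120.0 * x ** 4)

def trigamma(x):
    """psi'(x): recurrence up to x >= 20, then asymptotic expansion."""
    s = 0.0
    while x < 20.0:
        s += 1.0 / (x * x)
        x += 1.0
    return s + 1.0 / x + 1.0 / (2.0 * x * x) + 1.0 / (6.0 * x ** 3) - 1.0 / (30.0 * x ** 5)

def chi(p, g):        # chi_p(gamma) = 2 psi(1) - psi(p+gamma) - psi(p+1-gamma)
    return 2.0 * digamma(1.0) - digamma(p + g) - digamma(p + 1.0 - g)

def chi_prime(p, g):
    return -trigamma(p + g) + trigamma(p + 1.0 - g)

def ln_f(g):          # weight f(gamma) = 1/cos(pi gamma/4), regular for |gamma| < 2
    return -math.log(math.cos(math.pi * g / 4.0))

def ln_f_prime(g):
    return (math.pi / 4.0) * math.tan(math.pi * g / 4.0)

def saddle(p, t, y):
    """Real saddle of E(g) = g*t + abar*y*chi_p(g) + ln f(g); E' is monotone in g."""
    def dE(g):
        return t + ABAR * y * chi_prime(p, g) + ln_f_prime(g)
    lo, hi = -p + 1e-9, 1.0 - 1e-9    # dE -> -inf as g -> -p, dE > 0 near g = 1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dE(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def ln_F(p, t, y):    # saddle-point exponent; prefactor neglected, as in deriving (10)
    g = saddle(p, t, y)
    return g * t + ABAR * y * chi(p, g) + ln_f(g)

g0, g1, g2 = (saddle(p, T0, Y0) for p in (0, 1, 2))
print(g0, g1, g2)     # the higher the conformal spin, the farther the slide from 1/2

# qualitative sliding pattern: p = 0 stays near 1/2, higher spins are pushed away
assert abs(g0 - 0.5) < 0.2 and g1 < 0.0 < g0 and g2 < g1
assert chi(2, g2) < 0.0 < chi(0, g0)   # negative effective intercept for p = 2

# property (10): finite differences of ln F reproduce gamma_c and chi_p(gamma_c)
h = 0.05
eff_gamma = (ln_F(1, T0 + h, Y0) - ln_F(1, T0 - h, Y0)) / (2.0 * h)
eff_intercept = (ln_F(1, T0, Y0 + h) - ln_F(1, T0, Y0 - h)) / (2.0 * ABAR * h)
assert abs(eff_gamma - g1) < 1e-3 and abs(eff_intercept - chi(1, g1)) < 1e-3
```

The finite-difference checks confirm that the effective intercept and anomalous dimension trace $\chi _{p}\left( \gamma _{c}\right) $ and $\gamma _{c},$ while the computed real saddle-points slide well below $\gamma =1/2$ for $p\geq 1.$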
We have checked in various ways
that the results shown in Figs.1,2 are generic provided the following three
conditions are realized:
\bigskip
i) $Y$ or $\ln Q^{2}/Q_{0}^{2}$ must be large enough ($\geq 2$--$3$) to allow for a saddle-point method.
ii) $f_{p}\left( \gamma \right) $ is constrained to ensure the convergence and positivity of
the integrals of expression (2) in the complex plane.
iii) $f_{p}\left( \gamma \right) $ has no singularity for $\Re e(\gamma)
>-p.$
\bigskip
The striking feature of the results displayed in Figs.1,2 is that, while remaining in the vicinity of the
curve $\chi _{p}\left( \gamma _{c}\right) ,$ $ {\displaystyle{d\ln F_{p} \over \bar{\alpha}dY}}$ and ${\displaystyle{d\ln F_{p} \over d\ln Q^{2}}}$ are shifted away from the ultra-asymptotic saddle-point at $\gamma =1/2.$ Moreover, the shift is larger for higher conformal spin $p.$
Let us make a particular comment on the analyticity constraint iii). Obviously, the presence of a singularity at $\Re e(\gamma)
>-p$ would prevent the existence of a shift. Indeed,
in Fig.3, we show the result for $f_{p}\left( \gamma \right) ={\displaystyle{\left( \gamma \cos \pi \gamma /4\right)^{-1}}}$ where we have explicitly violated the constraint iii) by a pole at $%
\gamma =0.$ As a result, the components $F_{1}$ and $F_{2}$ still remain very close to their reference curves $\chi_{1}\left( \gamma \right) $ and $\chi _{2}\left( \gamma \right) ,$ but they appear ``stuck'' at the singularity point $\gamma =0.$ Thus the relation (10) remains verified, but the sliding
mechanism is ``frozen'' by the singularity, as expected from analyticity properties.
The main consequence of the sliding mechanism is to substantially modify the evaluation
of the sum (2) with respect to the ultra-asymptotic expectation (5).
Indeed\footnote {Using various examples we found this result to be generic provided constraints i) -- iii) are verified.} the situation seen in Figs.1,2 is general:\ the first
contribution $F_{0}$ is subject to a rather small shift from $\gamma =1/2,$
while the $p=1$ component $F_{1}$ remains at values where ${\displaystyle{d\ln F_{1} \over \bar{\alpha}dY}}$ is slightly above 1 and ${\displaystyle{d\ln F_{1} \over d\ln Q^{2}}}$ is below $-1/2.$ The
higher components $F_{2}$ and a fortiori $F_{p>2}$ lie in
regions with negative effective intercept and lower and lower values of the effective
anomalous dimension.
It is instructive to compare the results of Figs.1,2 for the $p=1$ component with those obtained for the auxiliary branches of the $p=0$ one. Though being situated in the same range of effective anomalous dimension $\gamma$ as the
$p=1$ component, the first auxiliary branch gives appreciably lower (and almost all negative) values of the effective intercept in the considered kinematical range. Thus, the corresponding contributions to the $p=0$ amplitude are subdominant in energy with respect to the spin 1 amplitude. The same property holds for the second auxiliary branch, which stays subdominant with respect to
the $p=2$ component which, in any case, is itself subdominant with respect to $p=1$.
Thus, the mechanism we suggest for the two-Pomeron scenario is the following: the r\^ole of the ``hard'' Pomeron is played (as it should be) by the component $F_{0},$ while the r\^ole of the ``soft'' Pomeron is played by the other components, principally the component with unit conformal spin $F_{1}.$ Here this mechanism is realized in a range $\left( Y,\ln Q^{2}/Q_{0}^{2}\right)$ where perturbative QCD (with resummation) is valid. Extrapolation to
the non-perturbative domain will be discussed in the next section.
\section{Physical expectations}
\quad It is now worth discussing our results, obtained from QCD and conformal symmetry,
in the context of the
phenomenological analysis of paper \cite{donnadrie}. Our goal is not to identify the two approaches since the theoretical conformal spin expansion (2) is only valid in the perturbative QCD region at large $Y$ and $Q^{2},$ while the approach of paper \cite{donnadrie} takes into account data
in the whole range of $Q^{2}.$ Nevertheless it is interesting to confront our resulting effective parameters with those obtained from the description of paper \cite{donnadrie}.
In Fig.4 we show a plot comparing our results with those obtained from the two
Pomeron components of \cite{donnadrie} in terms of the effective parameters defined previously. In the case of the parametrization of paper \cite{donnadrie}, the effective intercept and anomalous dimension are easily identified as, respectively, $ \epsilon _{i}$ and $d\ln f_{i}\left( Q^{2}\right) / d\ln Q^{2},$ see expression (1). In order to make contact with phenomenology, we have fixed $\bar \alpha = 0.15$ and $Q_0 = 135 \ \mbox{MeV}.$ This last value is somewhat arbitrary but corresponds to rather high values of $\ln \left( Q/Q_{0}\right)
^{2}$ in the physical range, justifying the existence of a significant saddle-point. In practice, in Fig.4, we have considered $Y=10$ and $\ln \left( Q/Q_{0}\right)^2=\left(4,6,8,10\right).$ The crosses in Fig.4 correspond to the effective parameters extracted from the parametrization \cite{donnadrie} and the black dots to our numerical results of the integrals (2) for the same values of the kinematical variables. We performed the calculation with $f_{p}\left( \gamma \right) \propto 1/\cos \frac{\pi \gamma }{4},$ but checked the validity of the results for other weights (with similar analyticity properties, cf.\ Section 3).
The main thing to be noticed is the reasonable agreement between the two results for large values of $Q^{2},$ corresponding to the direction of the arrows in the figure. A few remarks are in order:
i) The leading ``hard Pomeron'' singularity obtained in our approach is of the type used e.g. in the phenomenological description of proton structure functions in the dipole model of BFKL dynamics \cite {npr}. However the value of the coupling constant, chosen here to match with the determination of the hard component by \cite{donnadrie}, is larger than in one-Pomeron fits \cite {npr} and in better agreement with the original BFKL framework.
ii) The nonleading singularity is obtained in the correct range fixed by \cite{donnadrie} to be given by the ``soft'' Pomeron \cite {donland}. It is to be remarked that, while the ``hard'' Pomeron singularity is mainly fixed by the choice of $\bar \alpha,$ the nonleading one is a result of the sliding mechanism. We thus find this feature to be model independent and related to the asymptotic conformal invariance of the input amplitudes.
iii) As also seen in the figure, the agreement is not quantitative, especially at lower $Q^2,$ since the results obtained from our formula (2) appear as {\it moving} effective singularities while those from paper \cite{donnadrie} are, by definition, {\it fixed} Regge singularities.
Let us comment further on this important difference. In perturbative QCD, which is subject to renormalization-group constraints, one expects under rather general conditions a scale-dependent evolution, different from Regge-type singularities, at least for the singlet channel \cite {rujula} \footnote {Note, however, the different perturbative approach of \cite {indurain}.}. It is thus not surprising that the various components obtained from our approach show this characteristic feature, see Figs.1-4. On the contrary, pure Regge singularities correspond to fixed intercepts, as shown in Fig.4 by the horizontal lines.
We feel that moving effective singularities will remain a typical feature of the ``hard'' singularity at high $Q^2$, at least if perturbative QCD is relevant in this case. The situation is obviously different for the ``soft'' singularity, whose intercept is fixed at the known ``universal'' value for soft interactions \cite{donland}. The behaviour of the ``soft'' singularity when $Q^{2}$ becomes small is not determined in our perturbative approach. It only predicts that it will become dominant when $Q^2$ approaches and decreases below $Q_0^2,$ as indicated by the effective anomalous dimension. Non-perturbative QCD effects could thus be expected to stabilize the perturbative soft singularity at the known location of the phenomenological soft Pomeron\footnote {Another possibility \cite{bialas} could be a pole in the weight $f_p{(\gamma)}$ at a suitable position, but it would not be easily justified by a physical property like e.g. conformal invariance.}. Moreover, one would also have to consider the other higher conformal spin components.
Some qualitative arguments can be added in favour of specific non-perturbative effects for conformal spin components. Indeed, the same reason leading to the sliding mechanism, namely the
smallness of $\chi _{p}^{\prime \prime }\left( \gamma \right) $ in the
vicinity of $\gamma =1/2,$ implies a large ``$k_{T}$-diffusion'' phenomenon
\cite{bartelo}. One typically expects a range of ``$k_{T}$-diffusion''
for the gluon virtuality scales building the spin component $F_{p}$ depending on $p$ as $\left(\chi _{p}^{\prime \prime}\left(\frac 12\right)\right)^{-1}.$ Thus, while the contamination by non-perturbative unitarization effects
could be limited for $F_{0}, $ it is expected to be strong for $%
F_{1}$ and the higher spin components $F_{p>1}.$ All in all, it is a consistent picture that the softer components obtained in a perturbative QCD framework at high $Q^2$ are precisely those for which stronger ``$k_{T}$-diffusion'' corrections are expected. To go further would require a study of the low-$Q^{2}$ region, in particular of higher-twist contributions, which is outside the scope of our present paper \footnote {The known studies of higher-twist effects at low $x$ \cite {bartels1} seem to show a different behaviour from the one obtained from the sliding mechanism of higher conformal spin components. This feature certainly deserves further study.}.
Concerning the physical meaning of the analyticity constraints imposed on the integrand factors $%
f_{p}\left( \gamma \right) ,$ they amount to discussing the conformal
coupling of the BFKL components to, say, the virtual photon and the proton (or, more generally, other projectiles/targets). Leaving for future work the complete derivation of the conformal couplings to different conformal spins \cite {navelet2,appear}, let us assume that the coupling is spin independent. Interestingly enough, an eikonal coupling to a $q\bar{q}$ pair \cite {muellertang} then appears to be forbidden, since it has a pole at $\gamma =0,$ corresponding to the presence of the gluon coupling in the impact factor \cite {mp}. However, considering the direct coupling through the probability distribution of a virtual photon in terms of $q\bar{q}$ pair configurations \cite{nikolaev}, we remark, following the derivation of \cite {mp}, that the pole due to the gluon coupling is cancelled, with no other singularity at $\gamma=0.$ We explicitly
checked that we obtain very similar results to
those displayed in Figs.1--3 within this framework. Note that such a model ensures the positivity of the conformal spin contributions.
In our derivation, which follows from the conformal invariance of the BFKL equation, we have stuck to the case of a fixed coupling constant. It has been proposed \cite {lipatov,braun,zoller} that the solution of the BFKL equation, once modified in order to take into account a running coupling constant, leads to two, or more probably a series of, Regge poles instead of the $j$-plane cut obtained originally at
fixed $\bar{\alpha}.$ However, this solution with more than one Pomeron singularity does not ensure the specific $Q^2$ behaviour required by the analysis of \cite {donnadrie} and obtained by the sliding mechanism. The running of the coupling constant and, more generally, the next-to-leading BFKL corrections \cite{fadin} modify the singularity structure but could preserve the sliding mechanism. Further study is needed in this respect.
\section{Conclusion and outlook}
\quad To summarize our results, using the full content of solutions of the BFKL equation in a perturbative QCD framework, and in particular their conformal invariance, we have looked for the physical consequences of the higher
conformal spin components of the conformal expansion on the problem of the Pomeron singularities. We
have found, under rather general conditions, that the obtained pattern
of effective singularities leads to two Pomeron contributions,
one ``hard'', corresponding to the ordinary conformal spin 0 component, and one ``soft'', corresponding to higher spin contributions, mainly spin 1. This situation matches, at least in the
large $Q^{2}$ domain, the empirical observation of Ref.\cite{donnadrie}
leading to a ``hard'' Pomeron with leading-twist behaviour and a ``soft''
Pomeron with higher-twist behaviour. It is interesting to note that the
higher-twist behaviour we obtain corresponding to the $p=1$ component is of higher effective intercept than the one which may be associated with the
auxiliary branches of the ``hard'' component $p=0.$ Thus, there is no doubt that
the $p=1$ component behaviour emerges from among the other secondary BFKL contributions. However, its order of magnitude remains to be discussed \cite {appear}.
It is important to note that the higher spin components rely on the existence of an asymptotic global conformal invariance. This invariance has been proved to exist in the leading-log approximation. In next-to-leading-log BFKL calculations, it has recently been advocated \cite {brod} to be preserved, at least approximately. If this result is confirmed, and if the characteristics of the kernels are similar, the r\^ole of the modified higher conformal spin components is expected to be the same. Further tests of our conjecture also imply a study of the specific couplings of the higher spin components to the initial states and an extension of the predictions to non-forward diffractive scattering. Indeed, it has been recently shown \cite {levy} that the photoproduction of $J/\Psi$ gives evidence for no shrinkage of the Pomeron trajectory. Thus the two-Pomeron conjecture could also be borne out by considering non-forward processes.
If confirmed in the future, the two-Pomeron conjecture leads to further
interesting questions, for instance:
- Can we build an Operator Product Expansion for the structure
functions, and thus higher-twist contributions, incorporating the conformal invariance structure?
- Can we get some theoretical information on the physical ``soft'' Pomeron by
considering the high-$Q^{2}$ indications given by perturbative QCD?
- Can we see some remnants of the specific conformal spin structure associated with the two Pomerons?
- The sliding mechanism appears as a kind of a spontaneous violation of asymptotic conformal invariance: can we put this analogy in a more formal way?
One interesting conclusion to be drawn from our study is that the matching of hard and soft singularities could be very different from expectation. Usually, it is expected that a smooth evolution is obtained from the hard to the soft region thanks to the increase of the unitarity
corrections to some ``bare'' Pomeron. By contrast, in the empirical approach of \cite{donnadrie} and
in the theoretical sliding mechanism discussed in the present paper, the
``hard'' and ``soft'' regions are essentially dominated by distinct
singularities, with only small overlap. Clearly, this alternative deserves further phenomenological and theoretical studies. In particular, it has been suggested \cite {bialas} to extend the study to (virtual) photon-photon reactions where the perturbative singularities and their specific coupling are
expected to be theoretically well-defined. For instance, if the eikonal coupling is confirmed as a characteristic feature of the (virtual) photon
coupling to the BFKL kernel, the sliding mechanism should not work for the spin 1 component and thus the would-be ``soft'' Pomeron is expected to be absent from these reactions. Another case study is the Pomeron in hard diffractive reactions, where the sliding mechanism, if present, could be different from that for total structure functions, thus leading to a different balance of hard and soft singularities.
\bigskip
{\bf ACKNOWLEDGEMENTS}
We want to thank the participants of the Zeuthen Workshop on DIS at small $x$ (``Royon meeting'', June 1998) for fruitful discussions, among
them Jeff Forshaw and Douglas Ross for stimulating remarks, and particularly Peter Landshoff for provoking us with his and Donnachie's conjecture. We are also indebted to Andrzej Bialas and Henri Navelet for interesting suggestions and comments.
\newpage
\section{Introduction}
A QCD-based operator-product-expansion (OPE) formulation for
treatment of inclusive heavy hadron decays has been developed in
past years \cite{BIGI}. The optical theorem tells us that the
inclusive decay rates are related to the imaginary part of certain
forward scattering amplitudes along the physical cut. Since, by
the hypothesis of quark-hadron duality, the final-state effects,
which are nonperturbative in nature, are eliminated after summing
over all of the states, the OPE approach can be employed for
such smeared or averaged physical quantities. In order to test
the validity of (local) quark-hadron duality, it is very important
to have a reliable estimate of the heavy hadron lifetimes within
the OPE framework and compare them with experiment.
In the heavy quark limit, all bottom hadrons have the same lifetimes
in the parton picture. With the advent of heavy quark effective
theory, which provides a systematic expansion for the initial
heavy hadron state, and of the OPE approach for the analysis of inclusive
weak decays, it is realized that the first nonperturbative
correction to bottom hadron lifetimes starts at order $1/m_b^2$.
However, the $1/m_b^2$ corrections are small and essentially
negligible in the lifetime ratios. The nonspectator effects such
as $W$-exchange and Pauli interference due to four-quark
interactions are of order $1/m_Q^3$, but their contributions can
be potentially significant due to a phase-space enhancement by a
factor of $16\pi^2$. As a result, the lifetime differences of
heavy hadrons come mainly from the above-mentioned nonspectator
effects.
\section{Difficulties of the OPE approach}
The world average lifetime ratios of bottom hadrons are
\cite{LEP}:
\begin{eqnarray}\label{taudata}
{\tau(B^-)\over\tau(B^0_d)} &=& 1.07\pm 0.03 \,, \nonumber\\
{\tau(B^0_s)\over\tau(B^0_d)} &=& 0.94\pm 0.04 \,, \nonumber\\
\frac{\tau(\Lambda_b)}{\tau(B^0_d)} &=& 0.79\pm 0.05 \,.
\end{eqnarray}
Since, to order $1/m_b^2$, the OPE results for all of the above
ratios are very close to unity [see Eq.~(\ref{taucrude}) below],
the conflict between theory and experiment for the $\tau(\Lambda_b)/\tau(B_d)$ lifetime ratio
is quite striking \cite{Uraltsev,NS,Cheng,BLLS}. One possible
reason for the discrepancy is that (local) quark-hadron duality
may not work in the study of nonleptonic inclusive decay widths.
Another possibility is that some hadronic matrix elements of
four-quark operators are probably larger than naively
expected, so that the nonspectator effects of order $16\pi^2/m_b^3$
may be large enough to explain the observed lifetime ratios.
Therefore, one cannot conclude that (local) duality truly fails
before a reliable calculation of the four-quark matrix elements is
obtained~\cite{NS}.
Conventionally, the hadronic matrix elements of four-quark
operators are evaluated using the factorization approximation for
mesons and the quark model for baryons. However, as we shall see,
nonfactorizable effects absent in the factorization hypothesis can
affect the $B$ meson lifetime ratios significantly. To have a
reliable estimate of the hadronic parameters $B_1$,
$B_2,~\epsilon_1$ and $\epsilon_2$ in the meson sector, to be
introduced below, we will apply QCD sum rules to calculate
these unknown parameters.
\section{Theoretical review}
In this talk we will focus on the study of the four-quark matrix
elements of the $B$ meson. Before proceeding, let us briefly
review the theory. Applying the optical theorem, the inclusive
decay width of the hadron $H_b$ containing a $b$ quark can be
expressed as
\begin{equation}\label{imt}
\Gamma(H_b\to X) = \frac{1}{m_{H_b}}\,\mbox{Im}\,
i\!\int{\rm d}^4x\, \langle H_b|\,T\{\,
{\cal L}_{\rm eff}(x),{\cal L}_{\rm eff}(0)\,\}
\,|H_b\rangle \,,
\end{equation}
where ${\cal L}_{\rm eff}$ is the relevant effective weak
Lagrangian that contributes to the particular final state $X$.
When the energy release in a $b$ quark decay is sufficiently
large, it is possible to express the nonlocal operator product in
Eq.~(\ref{imt}) as a series of local operators in powers of $1/
m_b$ by using the OPE technique. In the OPE series, the only
locally gauge invariant operator with dimension four, $\bar b
i\!\! \not\!\! D b$, can be reduced to $m_b \bar bb$ by using the
equation of motion. Therefore, the first nonperturbative
correction to the inclusive $B$ hadron decay width starts at order
$1/m_b^2$. As a result, the inclusive decay width of a hadron
$H_b$ can be expressed as~\cite{BIGI}
\begin{eqnarray}\label{gener}
\Gamma(H_b\to X) &=& \frac{G_F^2 m_b^5 |V_{\rm CKM}|^2}{192\pi^3}\,
\frac{1}{2m_{H_b}}
\left\{ c_3^X\,\langle H_b|\bar b b|H_b\rangle
+ c_5^X\, \frac{\langle H_b|\bar b\,{1\over 2} g_s\sigma\cdot G b
|H_b\rangle} {m_b^2}\right. \nonumber\\
&& \left.+ \sum_n c_6^{X(n)}\,\frac{\langle H_b| O_6^{(n)}|H_b
\rangle}{m_b^3} + O(1/m_b^4) \right\} \,,
\end{eqnarray}
where $\sigma\cdot G=\sigma_{\mu\nu}G^{\mu\nu}$, $V_{\rm CKM}$
denotes some combination of the Cabibbo-Kobayashi-Maskawa
parameters and $c_i^X$ reflect short-distance dynamics and
phase-space corrections. The matrix elements in Eq.~(\ref{gener})
can be systematically expanded in powers of $1/m_b$ in heavy quark
effective theory (HQET), in which the $b$-quark field is
represented by a four-velocity-dependent field denoted by
$h^{(b)}_v(x)$. To first order in $1/m_b$, the $b$-quark field
$b(x)$ in QCD
and the HQET-field $h^{(b)}_v(x)$ are related via
\begin{equation}\label{hqetb}
b(x) = e^{-im_b v\cdot x} \left[ 1 + i\frac{\not\!\! D}{2m_b} \right]
h^{(b)}_v(x).
\end{equation}
Applying this relation, one can replace $b$ by the effective field
$h^{(b)}_v$ in Eq.~(\ref{gener}) to obtain
\begin{eqnarray}
\frac{\langle H_b|\bar b b|H_b \rangle}{2m_{H_b}} &=& 1
- \frac{K_{H_b}}{2m_b^2}+\frac{G_{H_b}}{2 m_b^2} + O(1/m_b^3) \,,
\nonumber\\
\frac{\langle H_b|\bar b{1\over 2}g_s\sigma\cdot G b|H_b
\rangle}{2m_{H_b}} &=& {G_{H_b}} + O(1/m_b) \,,
\end{eqnarray}
where
\begin{eqnarray}
K_{H_b} \equiv -\frac{\langle H_b|\bar h^{(b)}_v\, (iD_\perp)^2
h^{(b)}_v |H_b \rangle}{2m_{H_b}} \,,\ \ G_{H_b} \equiv
\frac{\langle H_b|\bar h^{(b)}_v\,{1\over 2}g_s\sigma\cdot G
h^{(b)}_v |H_b \rangle}{2m_{H_b}} \,.
\end{eqnarray}
Note that here we adopt the convention
$D^\alpha=\partial^\alpha-ig_s A^\alpha$. The inclusive
nonleptonic and semileptonic decay rates of a bottom hadron to
order $1/m_b^2$ are given by \cite{BIGI} \begin{eqnarray} \label{nl}
\Gamma_{\rm NL}(H_b) &=& {G_F^2m_b^5\over
192\pi^3}N_c\,|V_{cb}|^2\, {1\over 2m_{H_b}} \Bigg\{
\left(c_1^2+c_2^2+{2c_1c_2\over N_c}\right)\times \nonumber \\ && \Big[
\big(\alpha I_0(x,0,0)+\beta I_0 (x,x,0)\big)\langle H_b|\bar
bb|H_b\rangle \nonumber \\ && -{1\over
m_b^2}\big(I_1(x,0,0)+I_1(x,x,0)\big) \langle H_b|\bar bg_s\sigma
\cdot G b|H_b\rangle \Big] \nonumber \\ && -{4\over m_b^2}\,{2c_1c_2\over
N_c}\,\big(I_2(x,0,0)+I_2(x,x,0)\big) \langle H_b|\bar bg_s\sigma\cdot
G b|H_b\rangle\Bigg\}, \end{eqnarray} where $N_c$ is the number of colors, the
parameters $\alpha$ and $\beta$ denote QCD radiative corrections
to the processes $b\to c\bar ud$ and $b\to c\bar cs$, respectively
\cite{Bagan}, and \begin{eqnarray} \label{sl} \Gamma_{\rm SL}(H_b) &=&
{G_F^2m_b^5\over 192\pi^3}|V_{cb}|^2\,{ \eta(x,x_\ell,0)\over
2m_{H_b}} \nonumber \\ &\times& \Big[ I_0(x,0,0)\langle H_b|\bar
bb|H_b\rangle-{1\over m_b^2}\,I_1(x,0,0) \langle H_b|\bar bg_s\sigma\cdot
G b|H_b\rangle \Big] \,, \end{eqnarray} where $\eta(x,x_\ell,0)$ with
$x_\ell=(m_\ell/m_Q)^2$ is the QCD radiative correction to the
semileptonic decay rate and its general analytic expression is
given in \cite{Hokim}. In Eqs.~(\ref{nl}) and (\ref{sl}),
$I_{0,1,2}$ are phase-space factors (see e.g. \cite{Cheng} for
their explicit expressions): $I_i(x,0,0)$ for $b\to c\bar ud$
transition and $I_i(x,x,0)$ for $b\to c\bar cs$ transition. Note
that the CKM parameter $V_{ud}$ does not occur in $\Gamma_{\rm
NL}(H_b)$ and $\Gamma_{\rm SL}(H_b)$ when summing over the
Cabibbo-allowed and Cabibbo-suppressed contributions.
In Eq.~(\ref{nl}) $c_1$ and $c_2$ are the Wilson coefficients in the
effective Hamiltonian
\begin{eqnarray}
{\cal H}^{\Delta B=1}_{\rm eff} &=& {G_F\over\sqrt{2}}\Big[ V_{cb}V_{uq}^*(c_1
(\mu)O_1^u(\mu)+c_2(\mu)O_2^u(\mu)) \nonumber \\
&+& V_{cb}V_{cq}^*(c_1(\mu)O_1^c(\mu)+c_2(\mu)
O_2^c(\mu))+\cdots \Big]+{\rm h.c.},
\end{eqnarray}
where $q=d,s$, and
\begin{eqnarray} \label{O12}
&& O_1^u= \bar c\gamma_\mu(1-\gamma_5)b\,\,\bar q\gamma^\mu(1-\gamma_5)u,
\qquad\quad
O_2^u =
\bar q\gamma_\mu(1-\gamma_5)b\,\,\bar c\gamma^\mu(1-\gamma_5)u \,.
\end{eqnarray}
The scale and scheme dependence of the Wilson coefficients $c_{1,2}(\mu)$ are
canceled out by the corresponding dependence in the matrix element of the
four-quark operators $O_{1,2}$. That is, the four-quark operators
in the effective theory have to
be renormalized at the same scale $\mu$ and evaluated using the same
renormalization scheme as that for the Wilson coefficients.
Here we use the effective Wilson coefficients $c_i$ which are
scheme-independent~\cite{Cheng}. \begin{eqnarray} \label{c12} c_1=1.149\,,
\qquad \quad c_2=-0.325\,. \end{eqnarray} Using $m_b=4.85$ GeV, $m_c=1.45$
GeV, $|V_{cb}|=0.039$, $G_B=0.36\,{\rm GeV}^2$, $G_{\Lambda_b}=0$,
$K_B\approx K_{\Lambda_b}\approx 0.4\,{\rm GeV}^2$ together with
$\alpha=1.063$ and $\beta=1.32$ to the next-to-leading order
\cite{Bagan}, we find numerically
\begin{eqnarray}\label{taucrude}
\frac{\tau(B^-)}{\tau(B_d)} = 1 + O(1/m_b^3)\,,\ \
\frac{\tau(B_s)}{\tau(B_d)} = 1 + O(1/m_b^3)\,, \ \
\frac{\tau(\Lambda_b)}{\tau(B_d)} =
0.99 + O(1/m_b^3) \,.
\end{eqnarray}
It is evident that the $1/m_b^2$ corrections are too small to
explain the shorter lifetime of the $\Lambda_b$ relative to that
of the $B_d$. To the order of $1/m_b^3$, the nonspectator effects
due to Pauli interference and $W$-exchange parametrized in terms
of the hadronic parameters~\cite{NS}: $B_1$, $B_2$, $\epsilon_1$,
$\epsilon_2$, $\tilde B$, and $r$ (see below), may contribute
significantly to lifetime ratios due to a phase-space enhancement
by a factor of $16\pi^2$. The four-quark operators relevant to
inclusive nonleptonic $B$ decays are
\begin{eqnarray}\label{4qops}
O_{V-A}^q &=& \bar b_L\gamma_\mu q_L\,\bar q_L\gamma^\mu b_L
\,, \nonumber\\
O_{S-P}^q &=& \bar b_R\,q_L\,\bar q_L\,b_R \,, \nonumber\\
T_{V-A}^q &=& \bar b_L\gamma_\mu t^a q_L\,
\bar q_L\gamma^\mu t^a b_L \,, \nonumber\\
T_{S-P}^q &=& \bar b_R\,t^a q_L\,\bar q_L\, t^ab_R \,,
\end{eqnarray}
where $q_{R,L}={1\pm\gamma_5\over 2}q$. For the matrix elements of
these four-quark operators between $B$ hadron states, following
\cite{NS}, we adopt the definitions: \begin{eqnarray} \label{parameters}
{1\over 2m_{ B_q}}\langle \bar B_q|O^q_{V-A}|\bar B_q\rangle &&\equiv
{f^2_{B_q} m_{B_q} \over 8}B_1\,, \nonumber\\ {1\over 2m_{B_q}}\langle
\bar B_q|O^q_{S-P}|\bar B_q\rangle &&\equiv {f^2_{B_q} m_{B_q}\over
8}B_2\,,\nonumber\\ {1\over 2m_{B_q}}\langle \bar B_q|T^q_{V-A}|\bar B_q\rangle
&&\equiv {f^2_{B_q} m_{B_q}\over 8}\epsilon_1\,, \nonumber\\
{1\over 2m_{B_q}}\langle \bar B_q|T^q_{S-P}|\bar B_q\rangle &&\equiv {f^2_{B_q}
m_{B_q}\over 8}\epsilon_2\,,\nonumber\\ {1\over 2m_{\Lambda_b}}\langle
\Lambda_b |O^q_{V-A}|\Lambda_b \rangle &&\equiv -{f^2_{B_q} m_{B_q}\over
48}r\,,\nonumber\\ {1\over 2m_{\Lambda_b}}\langle \Lambda_b
|T^q_{V-A}|\Lambda_b \rangle &&\equiv -{1\over 2} (\tilde B+{1\over 3})
{1\over 2m_{\Lambda_b}}\langle \Lambda_b |O^q_{V-A}|\Lambda_b \rangle \,. \end{eqnarray}
Under the factorization approximation, $B_i=1$ and $\epsilon_i=0$,
and under the valence quark approximation $\tilde B=1$ \cite{NS}.
The destructive Pauli interference in inclusive nonleptonic $B^-$
decay and the $W$-exchange contributions to $B^0_d$ and $B^0_s$
are~\cite{NS} \footnote{The penguin-like nonspectator
contributions to $B_s$ are considered in \cite{Keum}, but they are
negligible compared to that from the current-current operators
$O_1$ and $O_2$ introduced in Eq.~(\ref{O12}).} \begin{eqnarray}
\label{bnonspec} \Gamma^{\rm ann}(B^0_d) &=& -\Gamma_0
|V_{ud}|^2\, \eta_{\rm nspec}(1-x)^2\Bigg\{ (1+{1 \over
2}x)\Big[({1\over N_c}c_1^2+2c_1c_2+N_cc_2^2)B_1+2c_1^2\epsilon_1
\Big]\nonumber \\ && -(1+2x)\Big[({1\over
N_c}c_1^2+2c_1c_2+N_cc_2^2)B_2 +2c_1^2\epsilon_2\Big] \Bigg\}
\nonumber \\ &&-\Gamma_0 |V_{cd}|^2\, \eta_{\rm
nspec}\sqrt{1-4x}\Bigg\{ (1+{1 \over 2}x) \Big[({1\over
N_c}c_1^2+2c_1c_2+N_cc_2^2)B_1+2c_1^2\epsilon_1\Big] \nonumber \\ &&
-(1+2x)\Big[({1\over N_c}c_1^2+2c_1c_2+N_cc_2^2)B_2
+2c_1^2\epsilon_2\Big] \Bigg\}, \nonumber \\ \Gamma^{\rm
int}_-(B^-) &=& \Gamma_0\,\eta_{\rm nspec}(1-x)^2\left
[(c_1^2+c_2^2)(B_1+6\epsilon_1)+6c_1c_2B_1\right],\nonumber\\
\Gamma^{\rm ann}(B^0_s) &=& -\Gamma_0 |V_{cs}|^2\, \eta_{\rm
nspec}\sqrt{1-4x}\Bigg\{ (1+{1 \over 2}x) \Big[({1\over
N_c}c_1^2+2c_1c_2+N_cc_2^2)B_1+2c_1^2\epsilon_1\Big] \nonumber \\ &&
-(1+2x)\Big[({1\over N_c}c_1^2+2c_1c_2+N_cc_2^2)B_2
+2c_1^2\epsilon_2\Big] \Bigg\} \nonumber \\ && -\Gamma_0
|V_{us}|^2\, \eta_{\rm nspec}(1-x)^2\Bigg\{ (1+{1 \over
2}x)\Big[({1\over N_c}
c_1^2+2c_1c_2+N_cc_2^2)B_1+2c_1^2\epsilon_1\Big]\nonumber\\ &&
-(1+2x)\Big[({1\over N_c}c_1^2+2c_1c_2+N_cc_2^2)B_2
+2c_1^2\epsilon_2\Big] \Bigg\} \,, \end{eqnarray} with \begin{eqnarray}
\Gamma_0={G_F^2m_b^5\over 192\pi^3}|V_{cb}|^2,~~~\eta_{\rm
nspec}=16 \pi^2{f_{B_q}^2m_{B_q}\over m_b^3}\,, \end{eqnarray} where
$f_{B_q}$ is the $B_q$ meson decay constant defined by
\begin{equation}
\langle 0|\bar q\gamma_\mu \gamma_5 b|\bar B_q(p)\rangle =if_{B_q} p_\mu \,.
\end{equation}
Likewise, the nonspectator effects in inclusive nonleptonic decays
of the $\Lambda_b$ baryon are given by \cite{NS} \begin{eqnarray}
\label{lnonspec} \Gamma^{\rm ann}(\Lambda_b) &=& {1\over
2}\Gamma_0\,\eta_{\rm nspec} \,r(1-x)^2\Big (\tilde
B(c_1^2+c_2^2)-2c_1c_2\Big), \\ \Gamma^{\rm int}_-(\Lambda_b)
&=& -{1\over 4}\Gamma_0 \, \eta_{\rm nspec}\,r
\left[|V_{ud}|^2(1-x)^2(1+x)+\left|{V_{cd}}\right|^2\sqrt{1-4x}\,\right]
\Big(\tilde Bc_1^2-2c_1c_2-N_cc_2^2\Big)\,. \nonumber \end{eqnarray}
Using
the values of $c_i$ in Eqs.~(\ref{c12}), we obtain
\begin{eqnarray} \label{numnspec} \Gamma^{\rm ann}(B_d) &=&
\Gamma_0\, \eta_{\rm nspec} (-0.0087 B_1+0.0098 B_2 -2.28
\epsilon_1 +2.58\epsilon_2)\,, \nonumber\\ \Gamma_-^{\rm int}(B^-)
&=& \Gamma_0 \, \eta_{\rm nspec}(-0.68B_1 +7.10 \epsilon_1)\,,
\nonumber\\ \Gamma^{\rm ann}(B_s) &=& \Gamma_0 \, \eta_{\rm
nspec}(-0.0085 B_1 +0.0096 B_2 -2.22 \epsilon_1 +2.50 \epsilon_2)
\,, \nonumber\\ \Gamma^{\rm ann}(\Lambda_b) &=& \Gamma_0 \,
\eta_{\rm nspec}r (0.59 \tilde B +0.31)\,, \nonumber\\ \Gamma^{\rm
int}(\Lambda_b) &=& \Gamma_0 \, \eta_{\rm nspec}r (-0.30 \tilde B
-0.097) \,. \end{eqnarray} Therefore, to the order of $1/m_b^3$, the
$B$-hadron lifetime ratios are given by
\begin{eqnarray}\label{ratios}
\frac{\tau(B^-)}{\tau(B^0_d)} &=& 1 +
\Big( {f_B\over 185~{\rm MeV}} \Big)^2 \Big( 0.043 B_1 + 0.0006 B_2
- 0.61 \epsilon_1 + 0.17 \epsilon_2 \Big) \,, \nonumber \\
\frac{\tau (B^0_s)}{\tau(B^0_d)} &=& 1+ \Big(\frac{f_B}{185~{\rm
MeV}}\Big)^2 (-1.7\times 10^{-5}\,B_1+1.9\times 10^{-5}\, B_2
\,-0.0044 \epsilon_1\, +0.0050\, \epsilon_2) \,, \nonumber\\
\frac{\tau(\Lambda_b)}{\tau(B^0_d)} &=& 0.99
+ \Big( {f_B\over 185~{\rm MeV}} \Big)^2 \Big[ -0.0006 B_1
+ 0.0006 B_2 \nonumber \\
&& - 0.15 \epsilon_1 + 0.17 \epsilon_2
- (0.014 + 0.019 \tilde B) r \Big] \,.
\end{eqnarray}
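The numerical coefficients quoted in Eq.~(\ref{numnspec}) follow by direct arithmetic from Eqs.~(\ref{bnonspec}) and (\ref{c12}). As an illustrative cross-check (a sketch, not part of the original analysis), the CKM-independent coefficients of $B_1$ and $\epsilon_1$ in $\Gamma^{\rm int}_-(B^-)$ can be reproduced with $x=(m_c/m_b)^2$:

```python
# Cross-check of the B_1 and epsilon_1 coefficients of Gamma_int(B^-) in
# Eq. (numnspec), obtained from Eq. (bnonspec):
#   Gamma_int(B^-) = Gamma_0 eta (1-x)^2 [(c1^2+c2^2)(B_1 + 6 eps_1) + 6 c1 c2 B_1]
# with the Wilson coefficients and quark masses quoted in the text.
c1, c2 = 1.149, -0.325
mb, mc = 4.85, 1.45
x = (mc / mb) ** 2                                   # (m_c/m_b)^2 ~ 0.089
phase = (1 - x) ** 2
coeff_B1 = phase * ((c1**2 + c2**2) + 6 * c1 * c2)   # close to -0.68 in Eq. (numnspec)
coeff_eps1 = phase * 6 * (c1**2 + c2**2)             # close to 7.1 in Eq. (numnspec)
```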
We see that the coefficients of the color singlet--singlet
operators are one to two orders of magnitude smaller than those of
the color octet--octet operators. This implies that even a small
deviation from the factorization approximation $\epsilon_i=0$ can
have a sizable impact on the lifetime ratios. It was argued in
\cite{NS} that the unknown nonfactorizable contributions render it
impossible to make reliable estimates on the magnitude of the
lifetime ratios and even the sign of corrections. That is, the
theoretical prediction for $\tau(B^-)/\tau(B_d)$ is not
necessarily larger than unity. In the next section we will apply
the QCD sum rule method to estimate the aforementioned hadronic
parameters, especially $\epsilon_i$.
\section{The QCD sum rule calculation}
In HQET, where the $b$ quark is
treated as a static quark,
we can use the renormalization group equation to express
the four-quark operators in terms of operators renormalized at a scale
$\Lambda_{\rm QCD}\ll \mu\ll m_b$. Their
renormalization-group evolution is determined by the ``hybrid''
anomalous dimensions~\cite{SV1} in HQET. The operators $O_{V-A}^q$
and $T_{V-A}^q$, and similarly $O_{S-P}^q$ and $T_{S-P}^q$, mix
under renormalization. In the leading logarithmic approximation,
the renormalization-group equation of the operator pair $(O,T)$
reads
\begin{equation}\label{rge}
\frac{d}{dt}
\left(
\begin{array} {cc} \phantom{ \bigg[ } O \\
T\end{array} \right)
= {3\alpha_s\over 2\pi}\,\left(
\begin{array} {cc} \phantom{ \bigg[ } C_F & -1 \\
\displaystyle -{C_F\over 2 N_c}~ & \displaystyle ~{1\over 2 N_c}
\end{array} \right)
\left(
\begin{array} {cc} \phantom{ \bigg[ } O \\
T\end{array} \right)
\,,
\end{equation}
where $t = \frac{1}{2} \ln(Q^2/\mu^2)$,
$C_F = (N_c^2-1)/2 N_c$, and effects of penguin operators induced
from evolution have been neglected.
The solution to the evolution equation
Eq.~(\ref{rge}) has the form
\begin{equation}\label{diag}
\left(
\begin{array} {cc} O \\
T\end{array} \right)_{Q}
= \left(
\begin{array} {cc} \phantom{ \bigg[ } \frac{8}{9}~ & \frac{2}{3} \\
-\frac{4}{27}~ &
\frac{8}{9}
\end{array} \right)
\left(
\begin{array} {cc} \phantom{ \bigg[ } L_{Q}^{9/(2\beta_0)}~ & 0 \\
0~ & 1
\end{array} \right)
{\bf D_\mu}\,,
\end{equation}
where
\begin{equation}\label{d}
{\bf D_\mu}=
\left(
\begin{array} {cc} D_1 \\
D_2\end{array} \right)_\mu
= \left(
\begin{array} {cc} \phantom{ \bigg[ } O-\frac{3}{4}T \\
\frac{1}{6}O+T\end{array} \right)_\mu
\,,
\end{equation}
$L_Q = {\alpha_s(\mu)/\alpha_s(Q)}$ and
$\beta_0=\frac{11}{3}\,N_c-\frac{2}{3}\,n_f$ is the leading-order
expression of the $\beta$-function with $n_f$ being the number of
light quark flavors. The subscript $\mu$ in Eq.~(\ref{d}) and in
what follows denotes the renormalization point of the operators.
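The matrix appearing in Eq.~(\ref{diag}) is simply the (left-)eigenvector decomposition of the anomalous dimension matrix in Eq.~(\ref{rge}): the combinations $D_1=O-\frac{3}{4}T$ and $D_2=\frac{1}{6}O+T$ evolve multiplicatively with eigenvalues $\frac{3}{2}$ and $0$ (in units of $3\alpha_s/2\pi$), which is the origin of the exponents $9/(2\beta_0)$ and $0$. A small numerical sketch of this diagonalization (an illustration, not from the paper):

```python
import numpy as np

# Anomalous dimension matrix of the pair (O, T) from Eq. (rge),
# in units of 3*alpha_s/(2*pi), for N_c = 3 (C_F = 4/3):
Nc = 3.0
CF = (Nc**2 - 1) / (2 * Nc)
A = np.array([[CF, -1.0],
              [-CF / (2 * Nc), 1.0 / (2 * Nc)]])

# The combinations D1 = O - (3/4) T and D2 = (1/6) O + T of Eq. (d)
# are left eigenvectors of A:
d1 = np.array([1.0, -0.75])
d2 = np.array([1.0 / 6.0, 1.0])

# D1 carries eigenvalue 3/2 (-> exponent 9/(2*beta_0) after RG integration),
# D2 carries eigenvalue 0 (-> no leading-log evolution), as used in Eq. (diag).
assert np.allclose(d1 @ A, 1.5 * d1)
assert np.allclose(d2 @ A, 0.0 * d2)
```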
Given the evolution equation (\ref{diag}) for the four-quark
operators, we see that the hadronic parameters $B_i$ and
$\epsilon_i$ normalized at the scale $m_b$ are related to that at
$\mu=1$~GeV by
\begin{eqnarray}\label{lowpara}
B_i(m_b) &\simeq& 1.54 B_i(\mu)
- 0.41\epsilon_i(\mu) \,, \nonumber\\
\epsilon_i(m_b) &\simeq& - 0.090 B_i(\mu)+1.07\epsilon_i(\mu) \,,
\end{eqnarray}
with $\mu=1$ GeV, where use has been made of
$\alpha_s(m_{\rm Z})=0.118$,
$\Lambda^{(4)}_{\overline{\rm MS}}=333~{\rm MeV}$, $m_b=4.85~{\rm GeV}$, and
$m_c=1.45$ GeV.
The above results (\ref{lowpara}) indicate that renormalization effects
are quite significant.
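The coefficients in Eq.~(\ref{lowpara}) can be reproduced from Eqs.~(\ref{diag}) and (\ref{d}) with one-loop running of $\alpha_s$. The sketch below assumes one-loop running with $n_f=4$ between $\mu=1$~GeV and $m_b$ and the $\Lambda^{(4)}=333$~MeV quoted above:

```python
import math

# One-loop alpha_s with nf = 4 and Lambda = 333 MeV (values quoted in the text).
def alpha_s(mu, Lam=0.333, nf=4):
    beta0 = 11.0 - 2.0 / 3.0 * nf
    return 4.0 * math.pi / (beta0 * math.log(mu**2 / Lam**2))

mu, mb = 1.0, 4.85
beta0 = 11.0 - 2.0 / 3.0 * 4            # = 25/3 for four flavors
L = alpha_s(mu) / alpha_s(mb)            # L_Q = alpha_s(mu)/alpha_s(m_b)
a = L ** (9.0 / (2.0 * beta0))           # multiplicative evolution of D_1

# Combining Eqs. (diag) and (d):
#   O(m_b) = (8a/9 + 1/9) O(mu) + (2/3)(1 - a) T(mu)
#   T(m_b) = (4/27)(1 - a) O(mu) + (a/9 + 8/9) T(mu)
# Since B_i multiplies <O> and epsilon_i multiplies <T> with the same
# normalization, the same coefficients appear in Eq. (lowpara).
cBB = 8 * a / 9 + 1.0 / 9.0              # ~ 1.54
cBe = 2.0 / 3.0 * (1 - a)                # ~ -0.41
ceB = 4.0 / 27.0 * (1 - a)               # ~ -0.090
cee = a / 9.0 + 8.0 / 9.0                # ~ 1.07
```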
It is easily seen from Eqs.~(\ref{diag}) and (\ref{d}) that the
normalized operator $D_1$ (or $D_2$) is simply multiplied by
$L_{Q}^{9/(2\beta_0)}$ (or 1) when it evolves from a
renormalization point $\mu$ to another point $Q$. In what
follows\footnote{In the sum rule calculation, the factorization
scale $\mu$ cannot be chosen too small, otherwise the strong
coupling constant $\alpha_s$ would be so large that Wilson
coefficients cannot be perturbatively calculated.}, we will apply
this property to derive the renormalization-group improved QCD
sum rules for $D_j$ at the typical scale $\mu=1$~GeV. We define
the new four-quark matrix elements as follows \begin{eqnarray} {1\over 2m_{
B_q}}\langle \bar B_q|D_j^{(i)}(\mu)|\bar B_q\rangle \equiv {f^2_{B_q}
m_{B_q}\over 8}\, d_j^{(i)}(\mu), \end{eqnarray} where the superscript $(i)$
denotes $(V-A)$ four-quark operators for $i=1$ and $(S-P)$
operators for $i=2$, and $d_j^{(i)}$ satisfy
\begin{equation}
\left(
\begin{array} {cc} d_1^{(i)} \\
d_2^{(i)}\end{array} \right)_\mu
= \left(
\begin{array} {cc} \phantom{ \bigg[ } B_i-\frac{3}{4}\epsilon_i \\
\frac{1}{6}B_i+\epsilon_i\end{array} \right)_\mu
\,.
\end{equation}
Since the terms linear in four-quark matrix elements are already
of order $1/m_b^3$, we only need the relation between the full
QCD field $b(x)$ and the HQET field $h^{(b)}_v(x)$ to the zeroth
order in $1/m_b$: $b(x) = e ^{-im_b v\cdot x}\, \{h^{(b)}_v(x) +
{\cal O}(1/m_b)\}$. In the following, within the framework of
HQET, we apply the method of QCD sum rules to obtain the value of
the matrix elements of four-quark operators. We consider the
three-point correlation function \begin{eqnarray} \label{corr}
\Pi^{D_j^{v(i)}}_{\alpha,\beta}(\omega,\omega')=i^2\int dx\, dy\,
e^{i\omega v\cdot x-i\omega' v\cdot y} \langle 0|T\{[\bar
q(x)\Gamma_\alpha h^{(b)}_v(x)]\, D_j^{v(i)}(0)\, [\bar
q(y)\Gamma_\beta h^{(b)}_v(y)]^\dagger\}|0\rangle \,, \end{eqnarray} where the
operator $D_j^{(i)}$ is defined in Eq.~(\ref{d}) but with $b\to
h_v^{(b)}$ and $\Gamma_\alpha$ is chosen to be $v_\alpha\gamma_5$
(some further discussions can be found in \cite{HY}).
The correlation function can be written in the double dispersion
relation form \begin{eqnarray}
\Pi^{D_j^{v(i)}}_{\alpha,\beta}(\omega,\omega')=\int\int {ds\over
s-\omega}\, {ds'\over s'-\omega'}\, \rho^{D_j^{v(i)}} \,. \end{eqnarray}
The results of the QCD sum rules are obtained in the following
way. On the phenomenological side, where the correlation function
is saturated by the relevant hadron states, it can be written
as \begin{eqnarray} \Pi^{PS}_{D_j^{v(i)}}(\omega,\omega')=
\frac{F^2(m_b)F^2(\mu)d_j^{(i)}} {16(\bar \Lambda -\omega)(\bar
\Lambda -\omega')}+\cdots \,, \end{eqnarray} where $\bar\Lambda$ is the
binding energy of the heavy meson in the heavy quark limit and
ellipses denote resonance contributions. The
heavy-flavor-independent decay constant $F$ defined in the heavy
quark limit is given by \begin{eqnarray} \langle 0|\bar q\gamma^\mu\gamma_5
h^{(b)}_v|\bar B(v)\rangle =iF(\mu) v^\mu\,. \end{eqnarray} The decay
constant $F(\mu)$ depends on the scale $\mu$ at which the
effective current operator is renormalized and it is related to
the scale-independent decay constant $f_B$ of the $B$ meson by
\begin{eqnarray} F(m_b)=f_B\,\sqrt{m_B}. \end{eqnarray}
On the theoretical side, the correlation function can be
alternatively calculated in terms of quarks and gluons using the
standard OPE technique. Then we equate the results on the
phenomenological side with that on the theoretical side. However,
since we are only interested in the properties of the ground state
at hand, e.g., the $B$ meson, we shall assume that contributions
from excited states (on the phenomenological side) are
approximated by the spectral density on the theoretical side of
the sum rule, which starts from some thresholds (say,
$\omega_{i,j}$ in this study). To further improve the final result
under consideration, we apply the Borel transform to both external
variables $\omega$ and $\omega'$. After the Borel
transform~\cite{yang1}, \begin{eqnarray} {\bf
B}[\Pi^{D_j^{v(i)}}_{\alpha,\beta}(\omega,\omega')]=
\lim_{{\scriptstyle m\to \infty \atop\scriptstyle -\omega'\to
\infty} \atop\scriptstyle {-\omega'\over mt'}\ {\rm fixed}}
\lim_{{\scriptstyle n\to \infty \atop\scriptstyle -\omega\to
\infty} \atop\scriptstyle {-\omega\over nt}\ {\rm fixed}} {1\over
n!m!}(-\omega')^{m+1} [{d\over d\omega'}]^m (-\omega)^{n+1}
[{d\over d\omega}]^n
\Pi^{D_j^{v(i)}}_{\alpha,\beta}(\omega,\omega')\,, \end{eqnarray} the sum
rule gives \begin{eqnarray} &&\frac{F^2({m_b}) F^2(\mu)}{16} e^{-
\bar\Lambda/t_1} e^{-\bar\Lambda/t_2} d_j^{(i)}
=\int_0^{\omega_{i,j}} ds\int_0^{\omega_{i,j}} ds' e^{-(s/t_1 +
s'/t_2)}\rho^{\rm QCD}\,, \end{eqnarray} where $\omega_{i,j}$ is the
threshold of the excited states and $\rho^{\rm QCD}$ is the
spectral density on the theoretical side of the sum rule. Because
the sum rule is symmetric in the Borel variables $t_1$ and $t_2$, it
is natural to choose $t_1=t_2$. However, unlike the case of the
normalization of the Isgur-Wise function at zero recoil, where the
Borel mass is approximately twice as large as that in the
corresponding two-point sum rule~\cite{Neubert2}, in the present
case of the three-point sum rule at hand, we find that the working
Borel windows can be chosen as the same as that in the two-point
sum rule since in our analysis the output results depend weakly on
the Borel mass. Therefore, we choose $t_1=t_2=t$.
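The defining limit of the Borel transform above can be checked numerically on a single pole term $1/(\bar\Lambda-\omega)$, for which ${\bf B}[1/(\bar\Lambda-\omega)]=e^{-\bar\Lambda/t}$: the $n$-th approximant reduces analytically to $(1+\bar\Lambda/(nt))^{-(n+1)}$. The sketch below (an illustration with arbitrary values $\bar\Lambda=0.6$~GeV and $t=0.8$~GeV, not fitted quantities) shows the convergence:

```python
import math

# Borel transform of the pole term 1/(Lambda_bar - omega): the n-th approximant
#   (1/n!) (-omega)^(n+1) (d/d omega)^n [1/(Lambda_bar - omega)],  -omega = n*t,
# reduces to (1 + Lambda_bar/(n*t))**(-(n+1)) -> exp(-Lambda_bar/t) as n -> oo.
Lam_bar, t = 0.6, 0.8   # illustrative values in GeV, not from the paper's fits

def approximant(n):
    return (1.0 + Lam_bar / (n * t)) ** (-(n + 1))

vals = [approximant(n) for n in (10, 100, 10000)]
target = math.exp(-Lam_bar / t)   # the exact Borel-transformed pole
```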
By the renormalization group technique, the logarithmic dependence
$\alpha_s\ln (2t/\mu)$ can be summed over to produce a factor like
$[\alpha_s(\mu)/\alpha_s(2t)]^\gamma$. After some manipulation we
obtain the sum rule results:
\begin{eqnarray}\label{rule1}
&&\frac{F^2({m_b}) F^2(\mu)}{16} e^{-2 \bar\Lambda/t} \left(
\begin{array} {cc} \phantom{ \bigg[ } d_1^{v(i)} \\
d_2^{v(i)}\end{array} \right)_\mu \nonumber\\
=&& \Biggl( {\alpha_s(2t)\over \alpha_s(\mu)}
\Biggr)^{4\over\beta_0} \Biggl(
{1-2\delta{\alpha_s(2t)\over\pi}\over 1-2\delta{\alpha_s(\mu)
\over\pi}}\Biggr)
\left(
\begin{array} {cc} L_{t}^{-9/(2\beta_0)}~ & 0 \\
0~ & 1
\end{array} \right)
\left(
\begin{array} {cc} \phantom{ \bigg[ } {\rm OPE}_{B_{i,1}}
-\frac{3}{4}\,{\rm OPE}_{\epsilon_{i,1}} \\
\frac{1}{6}\, {\rm OPE}_{B_{i,2}}+{\rm OPE}_{\epsilon_{i,2}}
\end{array}
\right)_t \,,
\end{eqnarray}
where
\begin{eqnarray}\label{rule2}
&&{\rm OPE}_{B_{i,j}}\simeq \frac{1}{4}({\rm OPE})^2_{2pt;i,j} \,,
\nonumber\\ && {\rm OPE}_{\epsilon_{1,j}} \simeq
-\frac{1}{16}\Biggl[- \frac{ \langle \bar q g_s\sigma\cdot G
q\rangle}{8\pi^2} t (1-e^{-\omega_{1,j} /t})+\frac {\langle
\alpha_s G^2\rangle} {16\pi^3} t^2 (1-e^{-\omega_{1,j} /t})^2
\Biggr] \,, \nonumber\\ && {\rm OPE}_{\epsilon_{2,j}} \simeq {\cal
O}(\alpha_s)\,,
\end{eqnarray}
with
\begin{eqnarray} \label{2pt}
({\rm OPE})_{2pt;i,j} =&& \frac{1}{2} \biggl\{
\int_0^{\omega_{i,j}} ds\ s^2 e^{-s/t}{3\over\pi^2}
\biggl[1+{\alpha_s\over\pi} \Bigl({17\over 3}+{4\pi^2\over 9}-2\ln
{s\over t}\Bigr) \biggr]\nonumber\\
&&-\Bigl(1+{2\alpha_s\over\pi}\Bigr)\langle\bar qq\rangle
+{\langle\bar qg_s \sigma\cdot Gq\rangle\over 16 t^2} \biggr\} \,.
\end{eqnarray}
For reasons of consistency, in the following numerical analysis we
will neglect the finite part of the radiative one-loop corrections in
OPE$_{B_{i,j}}$ and OPE$_{\epsilon_{i,j}}$ (and in Eq. (\ref{F})).
The parameter $\delta$ in (\ref{rule1}) is some combination of the
$\beta$ functions and anomalous dimensions (see Eq.~(4.2) of
\cite{BB}) and is numerically equal to $-0.23$. The relevant
parameters normalized at the scale $t$ are related to those at
$\mu$ by~\cite{BB,yang1} \begin{eqnarray}\label{RGevo} &&F(2t)=F(\mu)\Bigl(
{\alpha_s(2t)\over \alpha_s(\mu)} \Bigr)^{-2/\beta_0}
{1-\delta{\alpha_s(\mu)\over\pi} \over
1-\delta{\alpha_s(2t)\over\pi}}\,, \nonumber\\ &&\langle \bar
qq\rangle_{2t} =\langle \bar qq\rangle_\mu \cdot \Bigl(
{\alpha_s(2t)\over \alpha_s (\mu)}
\Bigr)^{-4/\beta_0}\,,\nonumber\\ &&\langle g_s\bar q\sigma\cdot
Gq\rangle_{2t}=\langle g_s\bar q\sigma\cdot Gq\rangle_\mu \cdot
\Bigl( {\alpha_s(2t)\over \alpha_s(\mu)} \Bigr)^{2/(3\beta_0)}
\,,\nonumber\\ &&\langle \alpha_s G^2 \rangle_{2t}= \langle
\alpha_s G^2 \rangle_\mu\,, \end{eqnarray} where $\langle\cdots \rangle$
stands for $\langle 0| \cdots |0\rangle$ and~\cite{yang1} \begin{eqnarray}
&&\langle \bar qq\rangle_{\mu=1~{\rm GeV}}=-(240~{\rm MeV})^3\,,
\nonumber\\ &&\langle \alpha_s G^2 \rangle_{\mu=1~{\rm GeV}}
=0.0377~{\rm GeV^4} \,,\nonumber\\ &&\langle \bar
qg_s\sigma_{\mu\nu} G^{\mu\nu} q\rangle_{\mu=1~{\rm GeV}}=
(0.8~{\rm GeV^2})\times \langle \bar qq\rangle_{\mu=1~{\rm GeV}}
\,. \end{eqnarray}
Some remarks are in order. First, in Eqs.~(\ref{rule1}) and
(\ref{rule2}), OPE$_{B_i}$ is obtained by substituting
$D_j^{v(i)}$ by $O^v$ and it can be approximately factorized as
the product of (OPE)$_{2pt;i,j}$ with itself, which is the same as
the theoretical part in the two-point $F(\mu)$ sum
rule~\cite{Neubert2,BB}. In the series of (OPE)$_{2pt;i,j}$, we
have neglected the contribution proportional to $\langle \bar
qq\rangle^2$. (More precisely, it is equal to $\alpha_s \langle
\bar qq\rangle^2 \pi/324$; see Ref.~\cite{Neubert2}.)
Nevertheless, the result of (OPE)$_{B_i}$ in Eq.~(\ref{rule2}) is
reliable up to dimension six, as the contributions from the
$\langle \bar qq\rangle^2$ terms in (OPE)$_{2pt;i,j}$ are much
smaller than the term $(1+\alpha_s/\pi)^2 \langle \bar qq
\rangle^2/16$ that we have kept [see Eq.~(\ref{2pt})]. Second, in
(OPE)$_{B_i}$ the contribution involving the gluon condensate is
proportional to the light quark mass and hence can be neglected.
Third, OPE$_{\epsilon_i}$ is the theoretical side of the sum rule,
and it is obtained by substituting $D_j^{v(i)}$ by $T^v$. Here we
have neglected the dimension-6 four-quark condensate of the type
$\langle \bar q\Gamma\lambda^a q\ \bar q\Gamma\lambda^a q
\rangle$. Its contribution is much less than that from
dimension-five or dimension-four condensates and hence unimportant
(see~\cite{Chern} for similar discussions). It should be
emphasized that nonfactorizable contributions to the parameters
$B_i$ arise mainly from the $O^v-T^v$ operator mixing.
In the following, we compare our analysis with the similar QCD sum
rule studies in \cite{Chern} and \cite{BLLS}. First, Chernyak
\cite{Chern} used the chiral interpolating current for the $B$
meson, so that all light quark fields in his correlators are
purely left-handed. As a result, there are no quark-gluon mixed
condensates as these require the presence of both left- and
right-handed light quark fields. Instead, the gluon condensate
contribution enters into the $\epsilon_1$ sum rule with an
additional factor of 4 in comparison with ours; thus their
OPE$_{\epsilon_1}$ is in rough agreement with ours. Second, our
results for OPE$_{\epsilon_i}$ are very different from that
obtained by Baek {\it et al}.~\cite{BLLS}. The reason is that
their results are mixed with the $1^+$ to $1^+$ transitions. Also
a subtraction of the contribution from excited states is not
carried out in \cite{BLLS} for the three-point correlation
function, though it is justified to do so for two-point
correlation functions. Indeed, in the following analysis, one will
find that after subtracting the contribution from excited states,
the contributions of OPE$_{\epsilon_i}$ are largely suppressed.
Furthermore, as in the study of the $B$ meson decay
constant~\cite{Neubert2}, we find that the renormalization-group
effects are very important in the sum rule analysis. Moreover,
$\epsilon_i$ at $\mu=m_b$ are largely enhanced by
renormalization-group effects.
The value of $F$ in Eq.~(\ref{rule1}) can be substituted by
\begin{eqnarray}\label{F}
F^2(\mu)e^{-\bar\Lambda/t}=&& \biggl[{\alpha_s(2t)\over
\alpha_s(\mu)}\biggr]^{4\over\beta_0}
\biggl[{1-2\delta{\alpha_s(2t)\over\pi}\over
1-2\delta{\alpha_s(\mu)\over\pi}} \biggr] \biggl\{
\int_0^{\omega_0} ds\ s^2 e^{-s/ t}{3\over\pi^2}
\biggl[1+{\alpha_s(2t)\over\pi} \Bigl({17\over 3}+{4\pi^2\over
9}-2\ln {s\over t}\Bigr) \biggr]\nonumber\\
&&-\Bigl(1+{2\alpha_s(2t)\over\pi}\Bigr)\langle\bar qq\rangle_{2t}
+{\langle\bar qg_s \sigma\cdot Gq\rangle_{2t}\over 16 t^2}
\biggr\} \,,
\end{eqnarray}
which is from the two-point sum rule approach~\cite{BB}. Next, to
determine the thresholds $\omega_{i,j}$ we employ the $B$ meson
decay constant $f_B=(185\pm 25\pm 17)~{\rm MeV}$ obtained from a
recent lattice-QCD calculation~\cite{lattice} and the
relation~\cite{Ball} \begin{eqnarray} f_B ={F(m_b)\over \sqrt {m_B}}\Bigl(
1-{2\over 3} {\alpha_s(m_b)\over \pi }\Bigr) \Bigl(1-{(0.8\sim 1.1
)~{\rm GeV}\over m_b} \Bigr)\,, \end{eqnarray} that takes into account QCD
and $1/m_b$ corrections. Using the relation between $F(m_b)$ and
$F(\mu)$ given by Eq.~(\ref{RGevo}) and $m_b=(4.85\pm 0.25)$~GeV,
we obtain \begin{eqnarray} \label{Fresult} F(\mu=1~{\rm GeV}) \cong (0.34\sim
0.52)~{\rm GeV^{3/2}} \,. \end{eqnarray} Since the $\bar\Lambda$ parameter
in Eq.~(\ref{F}) can be replaced by the $\bar\Lambda$ sum rule
obtained by applying the differential operator $t^2 \partial
\ln/\partial t$ to both sides of Eq.~(\ref{F}), the $F(\mu)$ sum
rule can be rewritten as \begin{eqnarray} \label{newF} F^2(\mu)={\rm (right\
hand\ side\ of\ Eq.~(\ref{F}))} \times {\rm exp} [t\,
{\partial\over \partial t} {\rm ln}{\rm (right\ hand\ side\ of\
Eq.~(\ref{F})) }]\,, \end{eqnarray} which is $\bar\Lambda$-free. Then using
the result (\ref{Fresult}) as input, the threshold $\omega_0$ in
the $F(\mu)$ sum rule, Eq.~(\ref{newF}), is determined. The result
for $\omega_0$ is $1.25-1.65~{\rm GeV}$. A larger
$F(\mu=1~\rm{GeV})$ corresponds to a larger $\omega_0$. The
working Borel window lies in the region $0.6~{\rm GeV} <t < 1~{\rm
GeV}$, which turns out to be a reasonable choice. Substituting the
value of $\omega_0$ back into the $\bar\Lambda$ sum rule, we
obtain
$\bar \Lambda=0.48-0.76~{\rm GeV}$ in the Borel window
$0.6~{\rm GeV} <t < 1~{\rm GeV}$. This result is consistent with
the choice $m_b=(4.85\pm 0.25)$~GeV, recalling that in the heavy
quark limit, $\bar\Lambda=m_B-m_b$. To extract the $d_j^{v(i)}$
sum rules, one can take the ratio of Eq.~(\ref{F}) and
Eq.~(\ref{rule1}) to eliminate the contribution of $F^2/ {\rm
exp}(\bar \Lambda /t)$. This means one has chosen the same $\bar
\Lambda$ both in Eq.~(\ref{F}) and Eq.~(\ref{rule1}). Since
quark-hadron duality is the basic assumption in the QCD sum rule
approach, we expect that the same result of $\bar \Lambda$ also
can be obtained using the $\bar\Lambda$ sum rules derived from
Eq.~(\ref{rule1}) (see \cite{yang1} for a further discussion).
This property can help us to determine consistently the threshold
in the three-point sum rule, Eq.~(\ref{rule1}). Therefore, we can apply
the differential operator $t^2 \partial \ln/\partial t$ to both
sides of Eq.~(\ref{rule1}), the $d^{v(i)}$ sum rule, to obtain new
$\bar \Lambda$ sum rules. The requirement of producing a
reasonable value for $\bar \Lambda$, say $0.48-0.76~{\rm GeV}$,
provides severe constraints on the choices of $\omega_{i,j}$. With
a careful study, we find that the best choice in our analysis is
\begin{eqnarray} \label{omega3pt} \omega_{i,1}=-0.02~{\rm GeV} +\omega_0\,,
\quad \omega_{1,2}=-0.5~{\rm GeV}+\omega_0 \,, \quad
\omega_{2,2}=-0.22~{\rm GeV}+\omega_0 \,. \end{eqnarray} Applying the above
relations with $\omega_0=(1.25 \sim 1.65)~{\rm GeV}$ and
substituting $F(\mu)$ in Eq.~(\ref{rule1}) by (\ref{F}), we study
numerically the $d_j^{v(i)}$ sum rules. In Fig.~1, we plot $B_i^v$
and $\epsilon_i^v$ as functions of $t$, where $B_i^v=8d_1^{v(i)}/9 +
2d_2^{v(i)}/3,\ $ and $\epsilon_i^v=-4d_1^{v(i)}/27+
8d_2^{v(i)}/9$. The dashed and solid curves stand for $B_i^{v}$
and $\epsilon_i^{v}$, respectively, where we have used
$\omega_0=1.4~ {\rm GeV}$ (the corresponding decay constant is
$f_B=175\sim 195~ {\rm MeV}$ or $F(\mu)=0.405\pm 0.005~ {\rm
GeV}^{3/2}$). The final results for the hadronic parameters $B_i$
and $\epsilon_i$ are (see Fig.~2) \begin{eqnarray}
B_1^{v}(\mu=1~{\rm GeV})=0.60\pm 0.02, \qquad &&
B_2^{v}(\mu=1~{\rm GeV})=0.61\pm 0.01, \nonumber \\
\epsilon_1^{v}(\mu=1~{\rm GeV})=-0.08\pm 0.01, \qquad &&
\epsilon_2^{v}(\mu=1~{\rm GeV})=-0.024\pm 0.006. \end{eqnarray} The
numerical errors come mainly from the uncertainty of
$\omega_0=1.25\sim 1.65$~GeV. Some intrinsic errors of the sum
rule approach, say quark-hadron duality or $\alpha_s$ corrections,
will not be considered here.
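As a rough central-value cross-check of Eq.~(\ref{Fresult}), one can combine the lattice $f_B$ with the relation of \cite{Ball} and the evolution of Eq.~(\ref{RGevo}). The sketch below assumes one-loop $\alpha_s$ running ($n_f=4$, $\Lambda=333$~MeV), central input values, and the midpoint $0.95$~GeV of the $(0.8\sim 1.1)$~GeV range; it is an illustration, not the paper's full error analysis:

```python
import math

# One-loop alpha_s with nf = 4 and Lambda = 333 MeV (as quoted in the text).
def alpha_s(mu, Lam=0.333, nf=4):
    beta0 = 11.0 - 2.0 / 3.0 * nf
    return 4.0 * math.pi / (beta0 * math.log(mu**2 / Lam**2))

fB, mB, mb, mu = 0.185, 5.279, 4.85, 1.0   # GeV; central input values
delta, beta0 = -0.23, 25.0 / 3.0           # delta as quoted below Eq. (rule2)

# F(m_b) from f_B = F(m_b)/sqrt(m_B) * (1 - 2 alpha_s/(3 pi)) * (1 - ~0.95/m_b),
# taking the midpoint of the (0.8 ~ 1.1) GeV power correction (an assumption):
F_mb = fB * math.sqrt(mB) / ((1 - 2 * alpha_s(mb) / (3 * math.pi))
                             * (1 - 0.95 / mb))

# Run down to mu = 1 GeV with Eq. (RGevo):
evol = ((alpha_s(mb) / alpha_s(mu)) ** (-2.0 / beta0)
        * (1 - delta * alpha_s(mu) / math.pi)
        / (1 - delta * alpha_s(mb) / math.pi))
F_mu = F_mb / evol   # central value ~ 0.44 GeV^{3/2}, inside the quoted 0.34-0.52
```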
\begin{figure}[ht]
\vspace{1cm}
\leftline{\epsfig{figure=fig21.eps,width=7.7cm,height=5.2cm}
\ \epsfig{figure=fig22.eps,width=7.7cm,height=5.2cm}}
\vspace{0.2cm}
\caption{{\small $B_i^v(\mu)$ and
$\epsilon_i^v(\mu)$ as functions of $t$, where $B_i^v=8d_1^{v(i)}/9
+ 2d_2^{v(i)}/3,\ $ and $\epsilon_i^v=-4d_1^{v(i)}/27+
8d_2^{v(i)}/9$. The dashed and solid curves stand for $B_i^{v}$
and $\epsilon_i^{v}$, respectively. Here we have used
$\omega_0=1.2~ {\rm GeV}$ and Eq.~(\ref{omega3pt}).}}
\vspace{0.5cm}
\end{figure}
Substituting the above results into Eq.~(\ref{lowpara}) yields
\begin{eqnarray} \label{biei} B_1(m_b)=0.96\pm 0.04 + { O}(1/m_b)\,, &&
\qquad B_2(m_b)=0.95\pm 0.02 + { O}(1/m_b)\,, \nonumber\\
\epsilon_1(m_b)=-0.14\pm 0.01 +{ O}(1/m_b)\,, && \qquad
\epsilon_2(m_b)=-0.08\pm 0.01 +{ O}(1/m_b)\,. \end{eqnarray} It follows from
Eq.~(\ref{ratios}) that \begin{eqnarray} && \frac{\tau (B^-)}{\tau(B_d)} =
1.11 \pm 0.02\,, \nonumber \\ && \frac{\tau
(B_s)}{\tau(B_d)}\approx 1\,, \nonumber\\ && \frac{\tau
(\Lambda_b)}{\tau (B_d)} = 0.99 - \Big(\frac{f_B}{185~{\rm
MeV}}\Big)^2 (0.007+0.020\, \tilde B)\, r \,, \end{eqnarray} to the order
of $1/m_b^3$. Note that we have neglected SU(3)
symmetry-breaking corrections to the nonspectator effects in $\tau
(B_s)/\tau(B_d)$. We see that the prediction for $\tau(B^-)/\tau
(B_d)$ is in agreement with the current world average:
$\tau(B^-)/\tau(B_d)$~=1.07$\pm$ 0.03 \cite{LEP}, whereas the
heavy-quark-expansion-based result for $\tau(B_s)/\tau(B_d)$
deviates somewhat from the central value of the world average:
$0.94\pm 0.04$. Thus it is urgent to carry out more precise
measurements of the $B_s$ lifetime. Using the existing sum rule
estimate for the parameter $r$ \cite{Col} together with $\tilde
B=1$ gives $\tau(\Lambda_b)/\tau(B_d)\geq 0.98$. Therefore, the
$1/m_b^3$ nonspectator corrections are not responsible for the
observed lifetime difference between the $\Lambda_b$ and $B_d$.
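The quoted value $\tau(B^-)/\tau(B_d)=1.11$ follows from inserting the central values of Eq.~(\ref{biei}) into Eq.~(\ref{ratios}) with $f_B=185$~MeV. A minimal arithmetic check (an illustration of the central value only, without the error propagation):

```python
# tau(B^-)/tau(B_d) from Eq. (ratios) with the central sum-rule values of
# Eq. (biei) and f_B = 185 MeV, so that (f_B/185 MeV)^2 = 1:
B1, B2 = 0.96, 0.95
eps1, eps2 = -0.14, -0.08
ratio = 1 + 0.043 * B1 + 0.0006 * B2 - 0.61 * eps1 + 0.17 * eps2
# ratio ~ 1.11, to be compared with the world average 1.07 +- 0.03
```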
\section{Conclusions}
The nonspectator effects can be parametrized in terms of the
hadronic parameters $B_1$, $B_2$, $\epsilon_1$ and
$\epsilon_2$~\cite{recent}, where $B_1$ and $B_2$ characterize the
matrix elements of color singlet-singlet four-quark operators and
$\epsilon_1$ and $\epsilon_2$ the matrix elements of color
octet-octet operators. In OPE language, the prediction of $B$
meson lifetime ratios depends on the nonspectator effects of order
$16\pi^2/m_b^3$ in the heavy quark expansion. Obviously, the
shorter lifetime of the $\Lambda_b$ relative to that of the $B_d$
meson and/or the lifetime ratio $\tau(B_s)/\tau(B_d)$ cannot be
explained by the theory so far. It is very likely that local
quark-hadron duality is violated in nonleptonic decays.
As emphasized in \cite{Cheng}, one should not be content with the
agreement between theory and experiment for the lifetime ratio
$\tau(B^-)/ \tau(B_d)$. In order to test the OPE approach for
inclusive nonleptonic decay, it is even more important to
calculate the absolute decay widths of the $B$ mesons and compare
them with the data. From (\ref{nl}), (\ref{sl}), (\ref{biei}) and
considering the contributions of the nonspectator effects, we
obtain \begin{eqnarray} \label{width} \Gamma_{\rm tot}(B_d) &=&
\,(3.61^{+1.04}_{-0.84}) \times 10^{-13}\,{\rm GeV}, \nonumber\\
\Gamma_{\rm tot}(B^-) &=& \,(3.34^{+1.04}_{-0.84}) \times
10^{-13}\,{\rm GeV}, \end{eqnarray} noting that the next-to-leading QCD
radiative correction to the inclusive decay width has been
included. The absolute decay widths strongly depend on the value
of the $b$ quark mass. The problem with the absolute
decay width $\Gamma(B)$ is intimately related to the $B$ meson
semileptonic branching ratio ${\cal B}_{\rm SL}$. Unlike the
semileptonic decays, the heavy quark expansion in inclusive
nonleptonic decay is {\it a priori} not justified due to the
absence of an analytic continuation into the complex plane and
hence local duality has to be invoked in order to apply the OPE
directly in the physical region.
To conclude, we have derived in heavy quark effective theory the
renormalization-group improved sum rules for the hadronic
parameters $B_1$, $B_2$, $\epsilon_1$, and $\epsilon_2$ appearing
in the matrix element of four-quark operators. The results are
$B_1(m_b)=0.96\pm 0.04$, $B_2(m_b)=0.95\pm 0.02$,
$\epsilon_1(m_b)=-0.14\pm 0.01$ and $\epsilon_2(m_b)=-0.08\pm
0.01$ to the zeroth order in $1/m_b$. The resultant $B$-meson
lifetime ratios are $\tau(B^-)/\tau(B_d)=1.11\pm 0.02$ and
$\tau(B_s)/\tau(B_d)\approx 1$.
\acknowledgments The author thanks V. L. Chernyak for helpful
discussions and comments. This work was supported in part by the
National Science Council of R.O.C. under Grant No.
NSC87-2112-M-001-048.
\thebibliography{99}
\bibitem {BIGI}
I.I. Bigi, N.G. Uraltsev, and A.I. Vainshtein, Phys.\ Lett.\ B
{\bf 293}, 430 (1992) [{\bf 297}, 477(E) (1993)]; I.I. Bigi, M.A.
Shifman, N.G. Uraltsev, and A.I. Vainshtein, Phys.~
Rev.~Lett.~{\bf 71}, 496 (1993); A. Manohar and M.B. Wise, Phys.
Rev. {\bf D49}, 1310 (1994); B. Blok, L. Koyrakh, M. Shifman, and
A. Vainshtein, {\sl ibid.}, 3356 (1994); B. Blok and M. Shifman,
Nucl. Phys. {\bf B399}, 441 (1993); {\sl ibid.}, {\bf B399}, 459
(1993).
\bibitem{LEP} For updated world averages of $B$ hadron lifetimes,
see J. Alcaraz {\it et al.} (LEP $B$ Lifetime Group),
http://wwwcn.cern.ch/\~\,claires/lepblife.html.
\bibitem{Uraltsev} N.G. Uraltsev, \pl {\bf B376}, 303 (1996); J.L.
Rosner, \pl {\bf B379}, 267 (1996).
\bibitem{NS} M.~Neubert and C.T.~Sachrajda, Nucl.~Phys.~{\bf B483},
339 (1997); M.~Neubert, CERN-TH/97-148 [hep-ph/9707217].
\bibitem{Cheng} H.Y.~Cheng, Phys.~Rev.~D~{\bf 56}, 2783 (1997).
\bibitem{BLLS} M. S. Baek, J. Lee, C. Liu, and H.S. Song, Phys. Rev.
{\bf D57}, 4091 (1998).
\bibitem{Hokim} Q. Hokim and X.Y. Pham, \pl {\bf B122}, 297 (1983).
\bibitem{Bagan} E. Bagan, P. Ball, V.M. Braun, and P. Gosdzinsky, \pl {\bf B342},
362 (1995); {\sl ibid.} {\bf B374}, 363(E) (1996); E. Bagan, P. Ball, B.
Fiol, and P. Gosdzinsky, {\sl ibid.} {\bf B351}, 546 (1995); M. Lu, M. Luke,
M.J. Savage, and B.H. Smith, Phys. Rev. {\bf D55}, 2827 (1997).
\bibitem{CT} H.Y. Cheng and B. Tseng, hep-ph/9803457.
\bibitem{Keum} Y.Y. Keum and U. Nierste, Phys.\ Rev.\ {\bf D57}, 4282 (1998).
\bibitem {SV1}
M.A. Shifman and M.B. Voloshin, Sov.\ J.\ Nucl.\ Phys.\ {\bf 41},
120 (1985).
\bibitem{HY} H.-Y. Cheng and K.-C. Yang, IP-ASTP-03-98
[hep-ph/9805222], to appear in Phys. Rev. D.
\bibitem{yang1} K.-C. Yang, Phys.~Rev.~D~{\bf 57}, 2983 (1998);
K.-C. Yang, W-Y. P. Hwang, E.M. Henley, and L.S. Kisslinger, {\sl
ibid.}~D~{\bf 47}, 3001 (1993); K.-C. Yang and W-Y. P. Hwang, {\sl
ibid.}~D~{\bf 49}, 460 (1994).
\bibitem{Neubert2} M. Neubert, Phys.~Rev.~D {\bf 45}, 2451 (1992);
{\bf 46}, 1076 (1992).
\bibitem{BB} P. Ball and V.M. Braun, Phys.~Rev.~D~{\bf 49}, 2472 (1994).
\bibitem {Chern}
V. Chernyak, Nucl. Phys. {\bf B457}, 96 (1995).
\bibitem{lattice} C. Bernard {\it et al.,} hep-ph/9709328.
\bibitem{Ball} P. Ball, Nucl.~Phys.~{\bf B421}, 593 (1994).
\bibitem{Abe} CDF Collaboration, F. Abe {\it et al.,} Phys. Rev. {\bf D57}, 5382
(1998).
\bibitem{Col} P. Colangelo and F. De Fazio, \pl {\bf B387}, 371 (1996).
\bibitem{recent} For recent works, see M. Di Pierro and C.T. Sachrajda,
Nucl. Phys. B534, 373, (1998), and D. Pirjol and N. Uraltsev,
UND-HEP-98-BIG-03 [hep-ph/9805488].
\end{document}
\section{Introduction}
Measurements of the low and medium $Q^2$\footnote{The negative of the square of the four-momentum transfer between the positron and the proton.}
neutral current (NC) deep inelastic scattering (DIS) cross sections at
HERA have revealed the rapid rise of the proton
structure function $F_2$ as Bjorken-$x$ decreases below $10^{-2}$.
At low $Q^2$, down to $0.1\;\rm GeV^2$, the ZEUS data allow a study of the
`transition region' as $Q^2 \rightarrow 0$, in which perturbative QCD
(pQCD) must break down.
At high $Q^2$, NC DIS measurements are sensitive to details of the QCD
evolution of parton densities, electroweak couplings and the
propagator mass of the $Z^0$ gauge boson. Furthermore, such
measurements allow searches for physics beyond the Standard Model,
such as new resonances or contact interactions.
\section{Phenomenology of $F_2$ at low $x$ and low $Q^2$}
\subsection{Phenomenology of the low $Q^2$ region}
The primary purpose is to use NLO DGLAP QCD on the one hand and the
simplest non-perturbative models on the other to explore the $Q^2$
transition region and through probing their limitations to shed light
on how the pQCD description of $F_2$ breaks down. One way to
understand the rise in $F_2$ at low $x$ is advocated by Gl\"uck, Reya
and Vogt (GRV94) who argue that the starting scale for the evolution
of the parton densities should be very low $(\sim 0.3\;\rm GeV^2)$ and
at the starting scale the parton density functions should be
non-singular. The observed rise in $F_2$, with a parameterisation
valid above $Q^2 \approx 1\;\rm GeV^2$, is then generated
dynamically. On the other hand, at low $x$ one might expect that the
standard NLO $Q^2$ evolution given by the DGLAP equations breaks down
because of the large $\ln (1/x)$ terms that are not included. Such
terms are taken into account by the BFKL formalism, which in leading
order predicts a rising $F_2$ at low $x$. The rise comes from a
singular gluon density, $x g \sim x^\lambda$, with $\lambda$ in the
range $-0.3$ to $-0.5$. Clearly, accurate experimental results on
$F_2$ at low $x$ and the implied value of $\lambda$ are of great
interest.
At some low value of $Q^2$ pQCD will break down and non-perturbative
models must be used to describe the data. At low $x$ and large
$\gamma^*p$ centre-of-mass energy, $W\approx \sqrt{Q^2/x}$, the total
$\gamma^*p$ cross-section is given by
\begin{eqnarray}
\sigma_{tot}^{\gamma^*p}(W^2,Q^2) = \sigma_T + \sigma_L = \frac{4
\pi^2 \alpha}{Q^2} F_2(x, Q^2)
\label{eqn:sigma_tot}
\end{eqnarray}
where $\sigma_T$ and $\sigma_L$ are the cross-sections for
transversely and longitudinally polarised virtual photons
respectively. Two non-perturbative approaches are considered: the
generalised vector meson dominance model (GVMD) and a Regge-type
two-component Pomeron+Reggeon approach \`a la Donnachie and Landshoff
(DL), which gives a good description of hadron-hadron and
photoproduction total cross-section data.
\subsection{Measurement of $F_2$ with Shifted Vertex Data}
The shifted vertex data correspond to an integrated luminosity of
$236\;\rm nb^{-1}$ taken in a special running period, in which the
nominal interaction point was offset in the proton beam direction by
$+70\;\rm cm$, away from the detecting calorimeter. Compared to the
earlier shifted vertex analysis, for the 1995 data taking period the
calorimeter modules above and below the beam were moved closer to the
beam, thus extending the shifted vertex $Q^2$ range down to $0.6\;\rm
GeV^2$.
The double differential cross-section for single virtual-boson
exchange in DIS is given by
\begin{eqnarray}
\frac{d^2 \sigma}{dx\, dQ^2} & = & \frac{2 \pi \alpha^2}{x\, Q^4}
\left[ Y_+ F_2 - y^2 F_L - Y_- x F_3 \right] \cdot
\left( 1 + \delta_r \right) \\
& \simeq & \frac{2 \pi \alpha^2}{x\, Q^4}
\left[ 2(1-y) + \frac{y^2}{1 + R} \right] F_2 \cdot (1 + \delta_r),
\label{eqn:doublediff}
\end{eqnarray}
where $R$ is related to the longitudinal structure function $F_L$ by $R
= F_L / (F_2 - F_L)$ and $\delta_r$ denotes the radiative corrections to
the Born cross-section, which in this kinematic region amount to at most
$10\%$.
$10\%$. The parity violating term $x F_3$ arising from the $Z^0$
exchange is negligible in the $Q^2$ range of this analysis. Further
details about the data analysis can be found in ref. \cite{low_pheno}.
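The equivalence of the two forms of the cross-section above can be
checked explicitly. From Eq.~\ref{eqn:sigma_tot}, $F_2 \propto \sigma_T
+ \sigma_L$ while $F_L \propto \sigma_L$, so that $F_L = F_2\,
R/(1+R)$; with $Y_\pm = 1 \pm (1-y)^2$ and the $xF_3$ term dropped one
indeed finds
\begin{eqnarray}
Y_+ F_2 - y^2 F_L = \left[ 2(1-y) + \frac{y^2}{1+R} \right] F_2. \nonumber
\end{eqnarray}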
\begin{figure}
\center{
\hfill
\psfig{figure=figure1.ps,height=8cm}
\hfill}
\caption{Low $Q^2$ $F_2$ data for different $Q^2$ bins, together with
the DL-style Regge model fit to the ZEUS BPC95 data. At larger
$Q^2$ values the ZEUS NLO QCD fit is also shown.}
\label{fig:svx95}
\end{figure}
Fig.~\ref{fig:svx95} shows the results for $F_2$ as a function of $x$
in bins of $Q^2$ between 0.65 and $6\;\rm GeV^2$ (ZEUS SVX95) together
with ZEUS $F_2$ measurements at very low $Q^2 = 0.11 - 0.65\;\rm
GeV^2$ (ZEUS BPC95)
and, at larger $Q^2$, those from ZEUS94. There is good agreement
between the different ZEUS data sets in the region of overlap. Also
shown are data from the shifted vertex measurements by H1 (H1 SVX95)
and fixed target data from E665. The steep increase of $F_2$ at low
$x$ observed in the higher $Q^2$ bins softens at the lower $Q^2$
values of this analysis. The curves shown will be discussed later in
the text.
\subsection{The low $Q^2$ region}
We first give an overview of the low $Q^2$ region, $Q^2 < 5\;\rm
GeV^2$, taking ZEUS SVX95, BPC95 and ZEUS94 $F_2$ data. Using
Eq.~\ref{eqn:sigma_tot} we calculate $\sigma_{tot}^{\gamma^*p}$ values
from the $F_2$ data. The DL model predicts that the cross-section
rises slowly with energy $\propto W^{2\lambda}$, $\lambda = \alpha_P -
1 \approx 0.08$ and this behaviour seems to be followed by the data at
very low $Q^2$. Above $Q^2 = 0.65\;\rm GeV^2$, the DL model predicts a
shallower rise of the cross-section than the data exhibit. For $Q^2$
values of around $1\;\rm GeV^2$ and above, the GRV94 curves describe
the qualitative behaviour of the data, namely the increasing rise of
$\sigma_{tot}^{\gamma^*p}$ with $W^2$, as $Q^2$ increases. This
suggests that the perturbative QCD calculations can account for a
significant fraction of the cross-section at the larger $Q^2$ values.
For the remainder of this section we concentrate on non-perturbative
descriptions of the ZEUS BPC95 data.
Since BPC95 data are binned in $Q^2$ and $y$ we first
rewrite the double differential cross-section of
Eq.~\ref{eqn:doublediff} as $\frac{d^2 \sigma}{dy dQ^2} = \Gamma
\cdot (\sigma_T + \epsilon \sigma_L)$ where $\sigma_L = \frac{Q^2}{4 \pi^2
\alpha} F_L$ and $\sigma_T$ has been defined by
Eq.~\ref{eqn:sigma_tot}. The virtual photon has flux factor $\Gamma$
and polarisation $\epsilon$.
Keeping only the continuum states in the GVMD, at fixed $W$ the
longitudinal and transverse $\gamma^*p$ cross-sections are related to
the corresponding photoproduction cross-section $\sigma_0^{\gamma p}$
by
\begin{eqnarray}
\sigma_L(W^2, Q^2) & = & \xi \left[ \frac{M_0^2}{Q^2}\ln\frac{M_0^2 +
Q^2}{M_0^2} - \frac{M_0^2}{M_0^2 + Q^2} \right] \sigma_0^{\gamma
p}(W^2) \nonumber\\
\sigma_T(W^2, Q^2) & = & \frac{M_0^2}{M_0^2 + Q^2} \sigma_0^{\gamma p}
(W^2)
\end{eqnarray}
where the parameter $\xi$ is the ratio $\sigma_L^{Vp}/ \sigma_T^{Vp}$
for vector meson (V) proton scattering and $M_0$ is the effective
vector meson mass. Neither $\xi$ nor $M_0$ are given by the model and
they are either determined from a fit to data or by other
approaches. As we do not have much sensitivity to $\xi$, and it is
small ($0.2 - 0.4$), we set it here to zero. We thus have 9 parameters to
be determined by fitting the BPC data to the simplified GVMD
expression $F_2 =
\frac{Q^2 M_0^2}{M_0^2 + Q^2}\, \frac{\sigma_0^{\gamma p}}{4\pi^2 \alpha}$
in 8 bins of $W$ between 104 and 251 GeV. The fit is reasonable and
its quality might also be judged from the upper plot in
Fig.~\ref{fig:gvmd}. The value obtained for $M_0^2$ is $0.53 \pm
0.04\,({\rm stat}) \pm 0.09\,({\rm sys})\;{\rm GeV^2}$. The resulting
extrapolated values of
$\sigma_0^{\gamma p}$ are shown as a function of $W^2$ in the lower
plot of Fig.~\ref{fig:gvmd}, along with measurements from HERA and
lower energy experiments. The extrapolated BPC data lie somewhat above
the direct measurements from HERA. They are also above the cross
section prediction of the DL model.
It should be clearly
understood that the $\sigma_0^{\gamma p}$ data derived from the BPC95
data are not a measurement of the total photoproduction cross-section
but the result of a physically motivated ansatz.
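Note the limiting behaviour of this simple ansatz: neglecting
$\sigma_L$,
\begin{eqnarray}
F_2 \simeq \frac{Q^2}{4\pi^2\alpha}\,\sigma_0^{\gamma p} \quad (Q^2 \ll
M_0^2), \qquad
F_2 \to \frac{M_0^2}{4\pi^2\alpha}\,\sigma_0^{\gamma p} \quad (Q^2 \gg
M_0^2), \nonumber
\end{eqnarray}
so that $F_2$ vanishes linearly in $Q^2$ in the photoproduction limit,
as gauge invariance requires, and flattens at a $W$-dependent value at
large $Q^2$.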
\begin{figure}[ht]
\center{ \hfill
\psfig{figure=figure2.ps,height=7.2cm} \hfill}
\caption{Upper plot: ZEUS BPC95 measurements of the total cross-section
$\sigma_T + \epsilon \sigma_L$ in bins of $W$ and the GVMD fit to the
data. Lower plot: $\sigma_{tot}^{\gamma p}$ as a function of
$W^2$. The ZEUS BPC95 points are those from the GVMD extrapolation of
$\sigma_0^{\gamma p}$.}
\label{fig:gvmd}
\end{figure}
The simple GVMD approach just described gives a concise account of the
$Q^2$ dependence of the BPC data but it says nothing about the energy
dependence of $\sigma_0^{\gamma p}$. To explore this aspect of the
data we turn to a two component Regge model
\begin{eqnarray}
\sigma_{tot}^{\gamma p} (W^2) & = & A_R(W^2)^{\alpha_R-1} +
A_P(W^2)^{\alpha_P-1} \nonumber
\end{eqnarray}
where $P$ and $R$ denote the Pomeron and Reggeon contributions. The
Reggeon intercept $\alpha_R$ is fixed to the value 0.5, which is
compatible with the original DL value and with the re-evaluation by
Cudell et al. With such an intercept the Reggeon contribution is
negligible at HERA energies. Fitting the extrapolated BPC95 data alone
yields a value $1.141 \pm 0.020 (stat)$ for $\alpha_P$. Fitting both
terms to the real photoproduction data (with $W^2 > 3\;\rm GeV^2$) and
BPC95 data yields $\alpha_P = 1.101 \pm 0.002 (stat)$. Including in
addition the two original measurements from HERA as well gives
$\alpha_P = 1.100 \pm 0.002 (stat)$. All these values of $\alpha_P$ are
larger than the value of 1.08 used originally by DL, but we note that
the best estimate of Cudell et al. is
$1.0964^{+0.0115}_{-0.0094}$,
which within the errors is consistent with our result.
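To illustrate how slow a rise such an intercept implies, take $\lambda
= \alpha_P - 1 = 0.10$: between the lowest and highest BPC95 energies,
$W = 104$ and $251$ GeV, the cross-section grows by only a factor
\begin{eqnarray}
\left( \frac{251^2}{104^2} \right)^{0.10} \approx 1.19, \nonumber
\end{eqnarray}
i.e.\ by about $20\%$ over the full $W$ range of the fit.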
The final step in the analysis of the BPC data is to combine the GVMD
fitted $Q^2$ dependence with the Regge model energy dependence
\begin{eqnarray}
\sigma_{tot}^{\gamma^* p} & = & \left( \frac{M_0^2}{M_0^2 +
Q^2}\right) (A_R (W^2)^{\alpha_R - 1} + A_P(W^2)^{\alpha_P -1}).\nonumber
\end{eqnarray}
The parameters $M_0^2$ and $\alpha_R$ are fixed to their previous
values of 0.53 and 0.5, respectively. The 3 remaining parameters are
determined by fitting to real photoproduction data and the original
BPC data. The description of the low $Q^2$ $F_2$ data given by this DL
style model is shown in Fig.~\ref{fig:svx95}. Data in the BPC region
$Q^2 < 0.65\;\rm GeV^2$ are well described. At larger $Q^2$ values the
curves fall below the data. Also shown in Fig.~\ref{fig:svx95} for
$Q^2 > 6\;\rm GeV^2$ are the results of a NLO QCD fit (full line) as
described in Sec.~\ref{sec:qcdfit}.
\subsection{$F_2$ slopes: $d \ln F_2 / d \ln (1/x)$ and $dF_2 / d \ln Q^2$
\label{sec:slopes}}
To quantify the behaviour of $F_2$ as a function of $Q^2$ and $x$ at
low $x$ we calculate the two slopes $d \ln F_2 / d \ln (1/x)$ and $dF_2 /
d \ln Q^2$ from the ZEUS SVX95, BPC95 and ZEUS94 data sets.
At a fixed value of $Q^2$ and at small $x$ the behaviour of $F_2$ can
be characterised by $F_2 \propto x^{-\lambda}$, with $\lambda$ taking
rather different values in the Regge and BFKL
approaches. $\lambda_{eff}$ is calculated from horizontal slices of
ZEUS $F_2$ data between the $y = 1$ HERA kinematic limit and a fixed
cut of $x < 0.01$, here including E665 data. In a given $Q^2$ bin
$\langle\, x \,\rangle$ is calculated from the mean value of $\ln (1/x)$
weighted by the statistical errors of the corresponding $F_2$
values. The same procedure is applied to the theoretical curves shown
for comparison.
Figure \ref{fig:xq2_slopes} shows the measured values of $\lambda_{eff}$ as a
function of $Q^2$. From the Regge approach one would expect
$\lambda_{eff} \approx 0.1$, independent of $Q^2$. The data for $Q^2 <
1\;\rm GeV^2$ are consistent with this expectation. The linked points
labelled DL are calculated from the Donnachie-Landshoff fit
and as expected from the discussion of the previous section are
somewhat below the data. For $Q^2 > 1\;\rm GeV^2$, $\lambda_{eff}$
increases slowly to around 0.3 at $Q^2$ values of $40\;\rm
GeV^2$. Qualitatively the tendency of $\lambda_{eff}$ to increase with
$Q^2$ is described by a number of pQCD approaches. The linked points
labelled GRV94 are calculated from the NLO QCD GRV94 fit. Although the
GRV94 prediction follows the trend of the data it tends to lie above
the data, particularly in the $Q^2$ range $3 - 20\;\rm GeV^2$.
\begin{figure}[ht]
\vspace*{-1mm}
\center{
\hfill
\psfig{figure=figure3.ps,height=5cm}
\hfill
\psfig{figure=figure4.ps,height=5cm}
\hfill }
\vspace*{-2mm}
\caption{Left plot: $d \ln F_2 / d \ln (1/x)$ as a function of $Q^2$
calculated by fitting ZEUS and E665 $F_2$ data in bins of $Q^2$. Right
plot: $d F_2/ d \ln Q^2$ as a function of $x$ calculated by fitting
ZEUS $F_2$ data in bins of $x$.}
\label{fig:xq2_slopes}
\end{figure}
Within the framework of pQCD, at small $x$ the behaviour of $F_2$ is
largely determined by the behaviour of the sea quarks $F_2 \sim xS$,
whereas $d F_2/d\ln Q^2$ is determined by the convolution of the
splitting function $P_{qg}$ and the gluon density, $dF_2/d\ln Q^2
\propto
\alpha_s P_{qg} \otimes g$. In order to study the scaling violations
of $F_2$ in more detail the logarithmic slope $d F_2/d\ln Q^2$ is
derived from the data by fitting $F_2 = a + b\ln Q^2$ in bins of fixed
$x$. The statistical and systematic errors are determined as described
above. The results for $d F_2/d\ln Q^2$ as a function of $x$ are shown
in Fig.~\ref{fig:xq2_slopes}. For values of $x$ down to $3\times
10^{-4}$, the slopes are increasing as $x$ decreases. At lower values
of $x$ and $Q^2$, the slope decreases. Comparing the rapid increase in
$F_2$ at small $x$ with the behaviour of $d F_2/d \ln Q^2$, one is
tempted to draw the naive conclusion that the underlying behaviour of the
sea quark and gluon momentum distributions must be different at small
$x$, with the sea dominant and the gluon tending to zero. The failure
of DL is in line with the earlier discussion. GRV94 does not follow
the trend of the data where the slope turns over.
\subsection{NLO QCD fit to $F_2$ data\label{sec:qcdfit}}
In perturbative QCD the scaling violations of the $F_2$ structure
function are caused by gluon bremsstrahlung from quarks and quark pair
creation from gluons. In the low $x$ domain accessible at HERA the
latter process dominates the scaling violations. A QCD analysis of
$F_2$ structure functions measured at HERA therefore allows one to
extract the gluon momentum density in the proton down to low values of
$x$. In this section we present NLO QCD fits to the ZEUS 1994 nominal
vertex data and the SVX95 data of this paper. We are not attempting to
include all available information on parton densities, but
concentrate on what the ZEUS data and their errors allow us to conclude
about the gluon density at low $x$.
To constrain the fits at high $x$ proton and deuteron $F_2$ structure
function data from NMC and BCDMS are included. The kinematic range
covered in this analysis is $3 \times 10^{-5} < x < 0.7$ and $1 < Q^2
< 5000\;\rm GeV^2$.
The QCD predictions for the $F_2$ structure functions are obtained by
solving the DGLAP evolution equations at NLO. These equations yield
the quark and gluon momentum distributions at all values of $Q^2$
provided they are given at some input scale $Q_0^2$. In this analysis
we adopt the so-called fixed flavour number scheme where only three
light flavours $(u, d, s)$ contribute to the quark density in the
proton. The corresponding structure functions $F_2^c$ and $F_2^b$ are
calculated from the photon-gluon fusion process including massive NLO
corrections. The input valence distributions are taken from the parton
distribution set MRS(R2). As for MRS(R2) we assume that the strange
quark distribution is a given fraction $K_s = 0.2$ of the sea at the
scale $Q^2 = 1\;\rm GeV^2$. The gluon normalisation is fixed by the
momentum sum rule. The input value for the strong coupling constant is
set to $\alpha_s(M_Z^2) = 0.118$ and the charm mass is taken to be
$m_c = 1.5\;\rm GeV$. In the QCD evolutions and the evaluation of the
structure functions the renormalisation scale and mass factorisation
scale are both set equal to $Q^2$. In the definition of the $\chi^2$
only statistical errors are included and the relative normalisation of
the data sets is fixed at unity. The fit yields a good description of
the data as shown in Fig.~\ref{fig:svx95}. We have also checked that
the gluon obtained from this fit to scaling violations is in agreement
with the recent ZEUS measurements of charm production and $F_2^c$ in
deep inelastic scattering at HERA.
Two types of systematic uncertainties have been considered in this
analysis. `HERA standard errors' contain the statistical errors on the
data, experimental systematic uncertainties, relative normalisation
of the different data sets and uncertainties on $\alpha_s$, the
strange quark content of the proton and the charm
mass. `Parametrisation errors' contain uncertainties from a $\chi^2$
definition including statistical and experimental systematic errors,
variations of the starting scale $Q_0^2$ and an alternative, more
flexible parametrisation of the gluon density using Chebyshev
polynomials. The first type of error amounts to $\Delta g/g = 16\%$
at $x = 5 \times 10^{-5}$, $Q^2 = 7\;\rm GeV^2$; the second type
yields $\Delta g/g = 9.5\%$.
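If the two classes of uncertainty are treated as independent and added
in quadrature, the total gluon uncertainty at this point is
\begin{eqnarray}
\frac{\Delta g}{g} \simeq \sqrt{(16\%)^2 + (9.5\%)^2} \approx 19\%. \nonumber
\end{eqnarray}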
\begin{figure}[ht]
\vspace*{-3mm}
\center{
\hfill
\psfig{figure=figure5.ps,height=8cm}
\hfill }
\caption{The quark singlet momentum distribution, $x\Sigma$ (shaded),
and the gluon momentum distribution, $xg(x)$ (hatched), as a function
of $x$ at fixed values of $Q^2 = 1$, 7 and $20\;\rm GeV^2$. The error
bands correspond to the quadratic sum of all error sources considered
for each parton density.}
\label{fig:nlo_fit}
\end{figure}
The three plots of Fig.~\ref{fig:nlo_fit} show the distribution for
$x\Sigma$ and $xg$ as a function of $x$ for $Q^2$ at 1, 7 and $20\;\rm
GeV^2$. It can be seen that even at the smallest $Q^2$, $x\Sigma$ is
rising at small $x$, whereas the gluon distribution has become almost
flat. These results give support to the naive conclusion of
Sec.~\ref{sec:slopes}, that the sea distribution dominates at low $x$
and $Q^2$. At $Q^2 = 1\;\rm GeV^2$ the gluon distribution is poorly
determined and can, within errors, be negative at low $x$.
\section{Measurement of the Proton Structure Function $F_2$ from 1996
and 1997 data}
\subsection{Kinematics in Deep Inelastic Scattering}
Recalling the double differential NC
cross-section (\ref{eqn:doublediff}), but now including the corrections
($\delta_L$ and $\delta_3$) for $F_L$ and $xF_3$, yields
\begin{eqnarray}
\frac{d^2 \sigma}{dx\, dQ^2} & = & \frac{2 \pi \alpha^2 Y_+}{x Q^4} F_2
\left( 1 - \delta_L - \delta_3 \right)
\left( 1 + \delta_r \right)
\end{eqnarray}
Here the $F_2$ structure function contains contributions from virtual
photon and $Z^0$ exchange
\begin{eqnarray}
F_2 & = & F_2^{em} + \frac{Q^2}{\left( Q^2 + M_Z^2 \right) } F_2^{int} +
\frac{Q^4}{\left( Q^2 + M_Z^2 \right)^2} F_2^{wk}
\end{eqnarray}
where $M_Z$ is the mass of the $Z^0$ and $F_2^{em}$, $F_2^{wk}$ and
$F_2^{int}$ are the contributions to $F_2$ due to photon exchange,
$Z^0$ exchange and $\gamma Z^0$ interference respectively. In this
analysis we determined the structure function $F_2^{em}$ using 1996
and 1997 data with an integrated luminosity of $6.8\;\rm pb^{-1}$ and
$27.4\;\rm pb^{-1}$, respectively.
The selection and kinematic reconstruction of NC DIS events is based
on an observed positron and the hadronic final state. For further
details see ref. \cite{vanc_f2}.
\subsection{Results}
Monte Carlo samples are used to estimate the acceptance, migration,
radiative corrections, electroweak corrections and background
contributions. $F_2^{em}$ is then determined based on a bin-by-bin
unfolding. The resulting statistical error, including the Monte Carlo
statistics, ranges from 2\% below $Q^2 = 100\;\rm GeV^2$ to 5-6\% at
$Q^2 \approx 800\;\rm GeV^2$.
The systematic uncertainties have been estimated by varying the
selection cuts, efficiencies and reconstruction techniques and
redetermining the cross section including background
estimates. Potential error sources such as possible detector
misalignment, event vertex reconstruction, calorimeter energy scale,
positron identification efficiency, background contributions and
hadronic energy flow have been considered. The total systematic
uncertainty amounts to 3-4\% except at low and high $y$, where it
grows to 12\%. At the present preliminary stage of the analysis we
estimate an overall normalisation uncertainty of 3\%.
The resulting $F_2^{em}$ is shown as a function of $x$ for fixed $Q^2$
in Figure~\ref{fig:f2x_1}. Results from our previous analysis, and
from fixed target experiments are also shown for comparison. At low
$Q^2$ the rise in $F_2$ for $x
\rightarrow 0$ is measured with improved precision. The coverage in
$x$ has also been extended to higher $x$, yielding extended overlap
with the fixed target experiments; in the overlap region reasonable
agreement has been found. The $F_2$ scaling violation from this
analysis and the fixed target data are also shown in
Figure~\ref{fig:f2x_1}. For $Q^2 > 100\;\rm GeV^2$ the increase in
statistics allows a measurement of $F_2^{em}$ in smaller bins with
respect to our previous measurement. Above $Q^2 = 800\;\rm GeV^2$, the
statistical error grows typically to 5-15\% and dominates the total
error. Overall, our data are in agreement with our previously published results.
\begin{figure}[ht]
\center{\hbox{
\psfig{figure=figure6.ps,height=5.6cm}
\psfig{figure=figure7.ps,height=5.6cm}}}
\vskip 0.2cm
\center{\hbox{
\psfig{figure=figure8.ps,height=5.6cm}
\hspace*{8mm}
\psfig{figure=figure9.ps,height=5.7cm}}}
\caption{Top and bottom left plots: $F_2^{em}$ versus $x$ for fixed
$Q^2$. Bottom right plot: $F_2^{em}$ as a function of $Q^2$ for fixed
$x$.}
\label{fig:f2x_1}
\end{figure}
\section*{References}
\section*{Appendix \thesection\protect\indent \parbox[t]{11.715cm} {#1}}
\addcontentsline{toc}{section}{Appendix \thesection\ \ \ #1}
}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\newcommand{\newsection}{
\setcounter{equation}{0}
\section}
\newcommand{\tr}[1]{\:{\rm tr}\,#1}
\newcommand{\Tr}[1]{\:{\rm Tr}\,#1}
\def{\rm const}{{\rm const}}
\def{\,\rm e}\,{{\,\rm e}\,}
\def|0\rangle{|0\rangle}
\def|\Psi\rangle{|\Psi\rangle}
\def\langle\Psi |{\langle\Psi |}
\def\Phi^\dagger{\Phi^\dagger}
\def\Pi^\dagger{\Pi^\dagger}
\def\partial{\partial}
\def\delta{\delta}
\def\varphi{\varphi}
\def\kappa{\kappa}
\def^{\dagger}{^{\dagger}}
\renewcommand{\em}[1]{\varepsilon_{-\, #1 }}
\newcommand{\ep}[1]{\varepsilon_{+\, #1 }}
\def\varepsilon{\varepsilon}
\newcommand{\epm}[1]{\varepsilon_{\pm\, #1 }}
\newcommand{\psm}[1]{\psi_{-\, #1 }}
\newcommand{\psp}[1]{\psi_{+\, #1 }}
\newcommand{\pspm}[1]{\psi_{\pm\, #1 }}
\def\phi_{+}{\phi_{+}}
\def\phi_{-}{\phi_{-}}
\def\phi_{\pm}{\phi_{\pm}}
\newcommand{\bpsm}[1]{\bar{\psi}_{-\, #1 }}
\newcommand{\bpsp}[1]{\bar{\psi}_{+\, #1 }}
\newcommand{\bpspm}[1]{\bar{\psi}_{\pm\, #1 }}
\newcommand{\am}[1]{a_{-\, #1 }}
\newcommand{\ap}[1]{a_{+\, #1 }}
\newcommand{\apm}[1]{a_{\pm\, #1 }}
\newcommand{\mn}[1]{\left\langle #1 \right\rangle}
\newcommand{\br}[1]{\left( #1 \right)}
\newcommand{\nor}[1]{\langle #1 | #1 \rangle}
\newcommand{\rf}[1]{(\ref{#1})}
\hyphenation{di-men-sion-al}
\hyphenation{di-men-sion-al-ly}
\begin{document}
\thispagestyle{empty}
\begin{flushright}
NORDITA--HEP--98/63\\
ITEP--TH--72/98\\
\end{flushright}
\vskip 2true cm
\begin{center}
{\Large\bf Screening of Fractional Charges}\\
\vskip 0.5cm
{\Large\bf in (2+1)-dimensional QED}
\vskip 1.5true cm
{\large\bf
Dmitri Diakonov$^{\diamond *}$ and Konstantin Zarembo$^{\dagger +}$}
\\
\vskip 1true cm
\noindent
{\it
$^\diamond $NORDITA, Blegdamsvej 17, 2100 Copenhagen \O, Denmark \\
\vskip .2true cm
$^* $Petersburg Nuclear Physics Institute, Gatchina,
St.Petersburg 188 350, Russia\\
\vskip .2true cm
$^\dagger $Department of Physics and Astronomy,
University of British Columbia,
6224 Agricultural Road, Vancouver, B.C. Canada V6T 1Z1
\vskip .2true cm
$^+ $Institute of Theoretical and Experimental Physics,
B. Cheremushkinskaya 25, 117259 Moscow, Russia}
\vskip 1.5cm
E-mails: {\tt diakonov@nordita.dk, zarembo@theory.physics.ubc.ca/@itep.ru}
\end{center}
\vskip 2true cm
\begin{abstract}
\noindent
We show that the logarithmically rising static potential between
oppositely charged sources in two dimensions is screened by dynamical
fields even if the probe charges are fractional in units of the
charge of the dynamical fields. The effect is due to quantum mechanics:
the wave functions of the screening charges
are superpositions of two bumps, localized
near both the oppositely and the like-charged sources, so that each of
them gets exactly screened.
\end{abstract}
\newpage
\newsection{Introduction}
The static potential between trial external charges, or the Wilson
loop expectation value, carries important information about the infrared
behavior of gauge theories. An infinite growth of the potential
provides the simplest criterion for confinement. However, in certain
theories the rising potential can be screened at large distances by
dynamical fields. The screening is inevitable if dynamical charges can
form neutral bound states with external sources, for example, when the
external and the dynamical charges belong to the same
representation of the gauge group (have the same magnitude in the
Abelian case). Since the potential between bare charges can be
arbitrarily large, at some point the creation of a pair from the
vacuum becomes energetically favorable. Each of the created charges
couples to the static charge of the opposite sign. The interaction
energy between the resulting bound states no longer grows with the
separation.
The question of whether {\it fractional} charges can be screened or not
is more involved. This problem has been studied in two dimensions,
both in Abelian \cite{CJS71,GKMS95,ada96} and in non-Abelian
\cite{GKMS95,YM2} models. It appears that massless matter fields can
screen any Abelian fractional charge \cite{CJS71,GKMS95}. In the
non-Abelian case, massless fields in any representation of the gauge
group screen sources in the fundamental representation
\cite{GKMS95,YM2}, as follows from comparison of the
models with massless
adjoint matter and with multiple flavors of fundamental matter
\cite{KS95}.
In this paper we consider the problem of charge screening in
three-dimensional scalar QED. This theory is confining when matter
decouples, as the Coulomb potential in two dimensions
grows logarithmically with distance. We
consider the bosonic theory to keep the discussion clean,
because fermions, at least massive ones,
screen any charge in 2+1 dimensions \cite{AB97}. Three-dimensional
fermions induce the topological mass for a photon
at one loop \cite{CS} thus changing the logarithmically rising Coulomb
potential to the exponentially decreasing Yukawa one.
This phenomenon does not take place in scalar
QED, in which the photon remains massless.
We argue that, nevertheless,
any fractional static charge in 3D scalar QED is
screened, at least in the weak coupling regime. The
coupling constant $e^2$ in three dimensions has the dimension of mass,
so the weak coupling means that the ratio $e^2/m$ is small.
We do not expect any abrupt changes to happen as this parameter
is increased. Therefore, the screening, most probably, persists in the
strongly coupled theory as well.
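The mass dimension of the coupling follows directly from the action:
in $d=3$ the term $\int d^3x\, F_{\mu\nu}F^{\mu\nu}/4$ is dimensionless
only if $[A_\mu]=m^{1/2}$, and since $eA_\mu$ must carry the dimension
of $\partial_\mu$,
\begin{eqnarray}
[e]=m^{1/2}, \qquad [e^2]=m, \nonumber
\end{eqnarray}
so $e^2/m$ is the natural dimensionless expansion parameter.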
We consider the static potential between external charges of the
magnitude $q$ separated by a distance $L$ (we assume that $0<q<1$
for simplicity
-- the screening of the integer part of the charge is obvious).
The screening mechanism is very simple and
is based on the consideration
of a two-particle state. If
$q^2e^2\ln(L\mu)>2m$, the interaction energy is sufficiently
large to create a pair
of dynamical charges from the vacuum.
The parameter $\mu$ is determined by the typical scale of the screening
charge distribution, actually $\mu\sim\sqrt{me^2}$.
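Inverting this condition gives the characteristic screening length,
\begin{eqnarray}
L_* \sim \mu^{-1} \exp\left( \frac{2m}{q^2 e^2} \right), \nonumber
\end{eqnarray}
which is exponentially large at weak coupling, $e^2/m \ll 1$: the
logarithmic confining potential persists over a parametrically wide
range of distances before screening sets in.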
At first sight, such a rearrangement
of the vacuum can only change $q$ to $q-1$,
but cannot stop the
logarithmic growth of the potential with $L$, since the dynamical and
the external charges form a bound state which is also charged. However,
the wave functions of dynamical particles need not be localized
near the opposite-charged sources only.
Suppose the wave function
has the form of a superposition of two states well localized near
each of the static sources. Let these
bumps be normalized to the probabilities $p$ and $1-p$,
respectively.
\begin{figure}[t]
\hspace*{5cm}
\epsfxsize=7cm
\epsfbox{wv.eps}
\caption[x]{The delta-peaked external charges
(dashed lines)
together with the distribution of screening charges
(solid lines).}
\label{wvfig}
\end{figure}
This situation is schematically illustrated in fig.~\ref{wvfig}.
It is clear that if $p-(1-p)=q$, that is, if $p=(1+q)/2$,
the net charge localized near each of the external sources sums up
to zero. At the same time the total probabilities to find the
positive- and the negative-charged particles are both unities.
Only short-range interactions are present in such a configuration
of charges; its energy does not grow with $L$, and,
thus, it becomes energetically favorable at very large separations. We
will argue that leaking of the charge to the region where it is
classically repulsed actually takes place for the Klein-Gordon particle
in the electric field of well-separated static charges. After that we
show that the configuration described above has smaller energy than
bare sources for sufficiently large $L$. The stability of this
configuration can be heuristically explained as follows. The
logarithmic Coulomb potential of the point charge is singular at short
distances, so it is quite natural that the negatively charged particle
forms a bound state with the positive external source. This bound state
is described by the larger bump of the double-bump wave function of the type
plotted in fig.~\ref{wvfig}. The charge of this bound state is
$-(1-q)/2$, so, as a whole, it attracts the positively charged
particle, which explains the
stability of the smaller part of the wave function.
The paper is organized as follows. In Sec.~2 we diagonalize the
Hamiltonian of the scalar QED in the presence of external charged
sources in the weak coupling approximation. In Sec.~3 we discuss
the Klein-Gordon equation for a charged particle in the electric
field of a dipole.
In Sec.~4 the two-particle state becoming energetically
favorable at large separation is considered in more detail.
In Sec.~5 and in Appendix we comment on the path integral for
scalar QED in the presence of external charges.
\newsection{External charges in scalar QED}\label{hsec}
We consider scalar QED in three dimensions. The Lagrangian density
of this theory is
\begin{equation}
{\cal L}=-\frac14\,F_{\mu\nu}F^{\mu\nu}+(D_\mu\Phi)^\dagger D^\mu\Phi
-m^2\Phi^\dagger\Phi,
\end{equation}
where $\Phi$ is a complex scalar field and
\begin{equation}
D_\mu=\partial_\mu+ieA_\mu.
\end{equation}
For our purposes the canonical formalism is more appropriate.
In the Schr\"odinger representation, $A_0=0$
and $E_i=F_{0i}=\dot{A}_i$, $A_i$, $\Pi=\dot{\Phi}^\dagger$,
$\Phi$, $\Pi^\dagger=\dot{\Phi}$,
$\Phi^\dagger$ are canonical variables:
\begin{equation}
[A_i(x),E_j(y)]=i\delta_{ij}\delta(x-y),
\end{equation}
\begin{equation}
[\Phi(x),\Pi(y)]=i\delta(x-y)=[\Phi^\dagger(x),\Pi^\dagger(y)].
\end{equation}
The Hamiltonian is
\begin{equation}\label{ham}
H=\int d^2x\,\left[\frac12\,E_i^2+\frac14\,F_{ij}^2+\Pi^\dagger\Pi
+(D_i\Phi)^\dagger D_i\Phi+m^2\Phi^\dagger\Phi\right].
\end{equation}
The physical states
in the presence of external charges are
subject to the Gauss' law constraint:
\begin{equation}\label{gl}
\partial_iE_i|\Psi_{\rm phys}\rangle=(J_0+\rho)|\Psi_{\rm phys}\rangle,
\end{equation}
where $J_0$ is the charge density operator,
\begin{equation}
J_0(x)=ie\Bigl(\Phi^\dagger(x)\Pi^\dagger(x)-\Phi(x)\Pi(x)\Bigr)
\end{equation}
and $\rho$ is the density of external sources. In our case of two
well-separated point-like charges
\begin{equation}\label{defrho}
\rho(x)=qe\Bigl(\delta\br{x+L/2}-\delta\br{x- L/2}\Bigr).
\end{equation}
The potential of interaction between charges is equal to the difference
of the ground state energies in the sectors of the Hilbert space
defined by the Gauss' law with and without external sources.
The qualitative picture of charge screening is based on purely
classical notion of electric field
which becomes strong enough to create
pairs. It is difficult to visualize this picture in the Hamiltonian
formalism, where electric fields are operators in the Hilbert space
and logarithmically rising electrostatic potentials are somehow
encoded in the dependence of the wave functional on $A_i$.
However, it is possible to introduce classical, c-number
fields which play the role of electric potentials in the Hamiltonian
formalism, even though $A_0=0$
by definition.
Consider the following Hamiltonian:
\begin{equation}\label{hamf}
H(\varphi)=H+\int d^2x\,\varphi\br{\partial_iE_i-J_0-\rho},
\end{equation}
which now acts in the unconstrained Hilbert space. The
eigenfunctions $|\Psi\rangle$ and the eigenvalues $E$ of this Hamiltonian
are the functionals of $\varphi$:
\begin{equation}\label{sef}
H(\varphi)|\Psi\rangle=E|\Psi\rangle.
\end{equation}
We want to show that, if $\varphi$ is determined by the
stationarity condition
\begin{equation}\label{sta}
\frac{\delta E}{\delta \varphi}=0,
\end{equation}
the state $|\Psi\rangle$ satisfies the Gauss' law \rf{gl} and is an eigenstate of
the Hamiltonian \rf{ham} with the eigenvalue $E$.
Both of the Hamiltonians \rf{ham} and
\rf{hamf} commute with the Gauss' law and with one another,
so they can be
simultaneously diagonalized. Therefore, it is sufficient to show that
the Gauss' law is satisfied on average. This immediately follows from
the stationarity condition \rf{sta} and
the Schr\"odinger equation \rf{sef}:
$$
0=
\frac{\delta}{\delta\varphi}\,\frac{\langle\Psi | H(\varphi)|\Psi\rangle}{\nor{\Psi}}
=2\,\frac{\langle\Psi | (H(\varphi)-E)\,\frac{\delta}{\delta \varphi}|\Psi\rangle}{\nor{\Psi}}
+\frac{\langle\Psi |\frac{\delta H(\varphi)}{\delta\varphi}|\Psi\rangle}{\nor{\Psi}}
=\frac{\langle\Psi | \partial_iE_i-J_0-\rho|\Psi\rangle}{\nor{\Psi}}.
$$
The interaction of the scalar fields with photons entering the Hamiltonian
through the covariant derivative squared term can be disregarded in the
weak coupling limit since it is
of order $e^2/m$ and is not enhanced by a $\ln(L\mu)$
factor. The Hamiltonian $H(\varphi)$ is quadratic in this approximation:
\begin{eqnarray}\label{hamq}
H(\varphi)&=&\int d^2x\,\left[\frac12\,E_i^2-E_i\partial_i\varphi+\frac14\,F_{ij}^2
\right.\nonumber \\*
&&\left.+\Pi^\dagger\Pi
+ie\varphi(\Phi\Pi-\Phi^\dagger\Pi^\dagger)+\partial_i\Phi^\dagger \partial_i\Phi+m^2\Phi^\dagger\Phi-\varphi\rho\right],
\end{eqnarray}
and can be explicitly diagonalized.
Solutions of the Schr\"odinger equation for the Hamiltonian
\rf{hamq} have factorized form
\begin{equation}
\Psi=\Psi_{\rm gauge}[A]\Psi_{\rm matter}[\Phi,\Phi^\dagger].
\end{equation}
We first consider the gauge-field part of the wave function. The ground
state is described by a Gaussian wave functional:
\begin{equation}
\Psi_{\rm gauge}[A]=\exp\left(-\frac12\int d^2xd^2y\,A_i(x)K_{ij}(x,y)A_j(y)
+i\int d^2x\,{\cal E}_i(x)A_i(x) \right).
\end{equation}
Substituting this expression in the Schr\"odinger equation (electric
fields act on the wave functional as variational derivatives:
$E_i=-i\delta/\delta A_i$), we obtain for $K_{ij}$ and ${\cal E}_i$:
\begin{eqnarray}
&&K_{ij}=\sqrt{-\partial^2}\br{\delta_{ij}+\frac{\partial_i\partial_j}{-\partial^2}},
\\*
&&{\cal E}_i=\partial_i\varphi.
\end{eqnarray}
The energy of this state is
\begin{equation}\label{ea}
E_{\rm gauge}=E_0-\frac12\int d^2x\,(\partial \varphi)^2,
\end{equation}
where $E_0$ is the divergent zero-point energy
$$
E_0=\frac12\Tr K=\int \frac{d^2p}{(2\pi)^2}\,p,
$$
which does not depend on $\varphi$ and is omitted below.
The Hamiltonian for matter fields can be diagonalized introducing
creation and annihilation operators:
\begin{equation}\label{cran}
[H(\varphi),a^{\dagger}]=\varepsilon a^{\dagger}.
\end{equation}
Since the Hamiltonian commutes with the electric charge
\begin{equation}
Q=\int d^2x\,J_0,
\end{equation}
operators satisfying eq.~\rf{cran} are linear combinations of
$\Pi$ and $\Phi^\dagger$ (or of $\Pi^\dagger$ and $\Phi$):
\begin{equation}
a^{\dagger}=\int d^2x\,\br{\Pi\psi+\Phi^\dagger\tilde{\psi}}.
\end{equation}
Substituting this operator in eq.~\rf{cran} we find that
$\tilde{\psi}=i(\varepsilon+e\varphi)\psi$, where $\psi$ and $\varepsilon$
are determined by the equation
\begin{equation}\label{kg}
\left[\br{\epm{n}+e\varphi}^2+\partial^2-m^2\right]\pspm{n}=0.
\end{equation}
Here the subscripts $\pm$ mark positive- and negative-energy states:
\begin{equation}
\ep{n}>0,~~~~\em{n}<0.
\end{equation}
The equality \rf{kg} is nothing but the
Klein-Gordon equation for eigenmodes
in the time-in\-de\-pen\-dent
external field $A_\mu=\delta_{\mu 0}\varphi$. Its solutions
form two complete sets of functions normalized by \cite{mig72}
\begin{eqnarray}\label{norma}
\int d^2x\,\bpspm{m}\br{\epm{m}+\epm{n}+2e\varphi}\pspm{n}&=&\pm\delta_{mn},
\\*
\int d^2x\,\bpspm{m}\br{\epm{m}+
\varepsilon_{\mp\,n}+2e\varphi}\psi_{\mp\,n}&=&0.
\end{eqnarray}
These eigenfunctions determine two sets of operators
\begin{eqnarray}
\ap{n}^{\dagger}&=&\int d^2x\,\left[\Pi(x)\psp{n}(x)
+i\Phi^\dagger(x)\br{\ep{n}+e\varphi(x)}\psp{n}(x)\right],
\nonumber \\*
\ap{n}&=&\int d^2x\,\left[\bpsp{n}(x)\Pi^\dagger(x)
-i\bpsp n(x)\br{\ep{n}+e\varphi(x)}\Phi(x)\right],
\nonumber \\*
\am{n}^{\dagger}&=&\int d^2x\,\left[\bpsm{n}(x)\Pi^\dagger(x)
-i\bpsm n(x)\br{\em{n}+e\varphi(x)}\Phi(x)\right],
\nonumber \\*
\am{n}&=&\int d^2x\,\left[\Pi(x)\psm{n}(x)
+i\Phi^\dagger(x)\br{\em{n}+e\varphi(x)}\psm{n}(x)\right],
\end{eqnarray}
which create and annihilate
particles of charge $\pm e$ and energy $|\epm{n}|$:
\begin{equation}
[Q,\apm{n}^{\dagger}]=\pm e\apm{n}^{\dagger},~~~~[Q,\apm{n}]=\mp e\apm{n}
\end{equation}
\begin{equation}
[H(\varphi),\apm{n}^{\dagger}]=\pm\epm{n}\apm{n}^{\dagger}=|\epm{n}|\apm{n}^{\dagger},
~~~~[H(\varphi),\apm{n}]=\mp\epm{n}\apm{n}=-|\epm{n}|\apm{n},
\end{equation}
\begin{equation}
[\apm n,\apm m^{\dagger}]=\delta_{nm},~~~~[\apm n,\apm m]=0=[\apm n,a_{\mp\, m}^{\dagger}].
\end{equation}
The field variables are expressed in terms of creation and annihilation
operators as
\begin{eqnarray}
\Phi(x)&=&i\sum_n\br{\psp{n}(x)\ap{n}-\psm{n}(x)\am{n}^{\dagger}},
\nonumber \\*
\Pi(x)&=&\sum_n\left[\ap{n}^{\dagger}\bpsp{n}(x)\br{\ep{n}+e\varphi(x)}
-\am{n}\bpsm{n}(x)\br{\em{n}+e\varphi(x)}\right].
\end{eqnarray}
The total energy is composed of $E_{\rm gauge}$ given by eq.~\rf{ea}, the
energy of the matter fields, $E_{\rm matter}$, and the source term:
\begin{equation}\label{ef}
E=-\int d^2x\,\left[\frac12\,(\partial\varphi)^2+\varphi\rho\right]+E_{\rm matter}.
\end{equation}
The energy of the state satisfying the Gauss' law corresponds to the
extremum of this functional, which is determined
by the following equation:
\begin{equation}\label{pois}
-\partial^2\varphi+\rho+\mn{J_0}=0,
\end{equation}
where we used the fact that $\delta E_{\rm matter}/\delta\varphi
=\mn{\delta H_{\rm matter}(\varphi)/\delta\varphi}=
-\mn{J_0}$. Note that this extremum is a maximum of $E(\varphi)$.
When the separation of external charges is not very large,
an empty state,
\begin{equation}
\apm n|0\rangle=0,
\end{equation}
has the lowest energy. The vacuum charge density can be expanded in
powers of $1/m$:
\begin{equation}\label{avj0}
\langle 0|J_0|0\rangle={\rm const}\,\frac{e^2}{m}\,\partial^2\varphi+\ldots,
\end{equation}
and is small compared to the first term in eq.~\rf{pois}.
Therefore, we can safely omit vacuum
contributions to the energy and to the
charge density.
The equation \rf{pois} is then solved by
\begin{equation}\label{logfrac}
\varphi=\frac{eq}{2\pi}\,\ln \frac{|x+L/2|}{|x-L/2|}
\end{equation}
and \rf{ef} gives
the Coulomb law:
\begin{equation}\label{coul}
E=\frac{q^2e^2}{2\pi}\,\ln\frac{L}{2\zeta}.
\end{equation}
Here $\zeta$ is a UV cutoff necessary to regularize the infinite
Coulomb self-energy of the static charges (see Sec.~4 for the precise
definition).
However, the logarithmic rise of the interaction
energy cannot continue indefinitely without a substantial rearrangement of
the vacuum. To study this rearrangement, we consider next the spectrum of the
Klein-Gordon equation for the potential \rf{logfrac} of the electric
dipole.
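As a quick numerical sanity check (not part of the original derivation), one can verify that the dipole potential \rf{logfrac} is harmonic away from the point sources, i.e., that it solves the Poisson equation \rf{pois} with vanishing right-hand side off the charges. The sketch below, in Python with the overall factor $eq/2\pi$ set to one, applies a five-point finite-difference Laplacian at a few illustrative points:

```python
import math

# Dimensionless check (set eq/2pi = 1): phi = ln|x - s1| - ln|x - s2|
# with point sources at s1 = (-L/2, 0) and s2 = (+L/2, 0).
L = 2.0
s1, s2 = (-L / 2, 0.0), (L / 2, 0.0)

def phi(x, y):
    r1 = math.hypot(x - s1[0], y - s1[1])
    r2 = math.hypot(x - s2[0], y - s2[1])
    return math.log(r1 / r2)

def laplacian(x, y, h=1e-3):
    # Five-point finite-difference Laplacian.
    return (phi(x + h, y) + phi(x - h, y) + phi(x, y + h) + phi(x, y - h)
            - 4 * phi(x, y)) / h**2

# Away from both sources the potential is harmonic: Laplacian ~ 0.
for (x, y) in [(0.0, 0.5), (0.3, -0.7), (1.5, 1.0)]:
    assert abs(laplacian(x, y)) < 1e-4
```

The check uses only the 2D fact that $\partial^2\ln|x|=2\pi\delta^2(x)$, so the Laplacian vanishes everywhere except at the sources themselves.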
\newsection{Klein-Gordon equation in the dipole field}\label{kgsec}
First, when the distance
$L$ is small, the Klein-Gordon equation \rf{kg}
has no normalizable solutions and its
spectrum consists of two continua with $\ep{}>m$ and with
$\em{}<-m$. Near the boundaries of the spectrum, for $\epm{}=
\pm(m+\lambda)$, $\lambda\ll m$, the
non-relativistic approximation can be used: the
first term in eq.~\rf{kg} can be expanded in $e\varphi$, and
the eigenfunctions $\pspm{}$ satisfy the ordinary Schr\"odinger
equation with the potential $\mp e\varphi$:
\begin{equation}\label{nrel}
\br{-\frac{1}{2m}\,\partial^2\mp e\varphi-\lambda_\pm}
\pspm{}=0.
\end{equation}
The potential energy for positively charged (coming from the upper
\begin{figure}[t]
\hspace*{5cm}
\epsfxsize=7cm
\epsfbox{log.eps}
\caption[x]{The potential \rf{logfrac}.}
\label{logfig}
\end{figure}
continuum) particles, shown in fig.~\ref{logfig}, has the form
of a separated peak and well. For sufficiently
large $L$, the attraction by the well
becomes strong enough for a discrete level to appear.
Since a positive charge is
attracted to the source at $x=L/2$ and is repelled from the one at
$x=-L/2$, its wave function $\psp{0}$ is localized near $x=L/2$. The
wave function of the negative charge is localized near $x=-L/2$ and its
energy is $\em 0=-\ep 0$ by symmetry. The lowest positive- and
negative-energy levels converge with the increase of $L$ and collide at
zero for some critical
\begin{figure}[t] \hspace*{5cm} \epsfxsize=7cm
\epsfbox{spec.eps}
\caption[x]{Spectrum of the Klein-Gordon equation for the potential
\protect\rf{logfrac}.}
\label{specfig}
\end{figure}
value of $L=L_0$ (fig.~\ref{specfig}). After that they do not
disappear, but rather go off to the complex plane \cite{zp71}. This
behavior of the eigenvalues is generic for the Klein-Gordon equation in
strong electric fields \cite{mig72,zp71,GMR85}. For $L>L_0$ vacuum
polarization can no longer be neglected and the external electric field
creates a pair of charged particles in the vacuum. This effect is
analogous to pair creation in the field of a heavy ion
with nuclear charge $Z>137$ \cite{zp71}.
For large $L$ the ground state wave function is no longer localized
near the well of the potential. Rather, it has a two-bump shape,
like the one used in the qualitative
arguments in the introduction.
As $L$ grows, the charge leaks from the
well to the region where the potential is peaked. This, at first sight,
counterintuitive behavior reflects a generic property of
a charged Klein-Gordon particle: it can form a bound state in a sufficiently
strong repulsive electric potential \cite{zp71,mig72,GMR85}. The fact that
the charge is redistributed between the well and the
peak of the potential can be proved by the following arguments. For
$L=L_0$ both eigenvalues $\em 0$ and $\ep 0$ are equal to zero and the
Klein-Gordon equation formally has the form of the
Schr\"odinger one:
\begin{equation}
\left[-\partial^2-\br{\frac{e^2q}{2\pi}\,\ln \frac{|x+L/2|}{|x-L/2|}}^2
+m^2\right]\pspm 0=0~~~~(L=L_0).
\end{equation}
The potential here has the shape of a symmetric double well. The
ground state wave function is symmetrically distributed between the
two wells. By continuity, as $L$ is decreased, the charge
begins to leak from the region near the repulsive source to the
attractive one, and eventually all the positive charge is concentrated
near $+L/2$ and the negative one near $-L/2$.
\newsection{Screening of the logarithmic potential}
So far, only the vacuum sector has been considered.
However, when $e^2\ln(L\mu)\sim m$, the two-particle state
\begin{equation}\label{2}
| 2\rangle=\ap 0^{\dagger}\am 0^{\dagger}|0\rangle
\end{equation}
can become energetically more favorable, if the dynamical charges screen
the sources and reduce the Coulomb energy by an amount
sufficient to create a pair. The energy of the state \rf{2} is
\begin{equation}
E_{\rm matter}=\ep 0-\em 0
\end{equation}
and the induced charge density is
\begin{equation}\label{j02}
\langle 2|J_0| 2 \rangle
=2e\bpsp 0(\ep 0+e\varphi)\psp 0+2e\bpsm 0(\em 0+e\varphi)\psm 0.
\end{equation}
Vacuum contributions are neglected here.
The charged particles which cause the screening of the static charges
are non-relativistic, since the screened electric fields are small
everywhere, unlike the unscreened ones. Therefore, it is possible to
use the non-relativistic approximation \rf{nrel} to the Klein-Gordon
equation. In this approximation, the wave functions $\pspm {}$ -- we
omit the subscript $0$ for brevity -- are normalized to $1/(2m)$, as
follows from eq.~\rf{norma}. For the sake of clarity it is,
however, convenient to introduce new wave functions, $\psi_\pm^\prime=
\sqrt{2m}\psi_\pm$, normalized to unity,
as is customary in the non-relativistic limit:
\begin{equation}\label{nor}
\int d^2x\,|\psi_\pm^\prime|^2=1,
\end{equation}
whereas the induced charge density is
\begin{equation}\label{j}
\mn{J_0}=e\br{|\psi_+^\prime|^2-|\psi_-^\prime|^2}.
\end{equation}
The total energy of the two-particle state is
\begin{equation}\label{en}
E=2m+\frac{1}{2m}\!
\int\!\! d^2x\,\Bigl(|\partial\psi_+^\prime|^2+|\partial\psi_-^\prime|^2\Bigr)
-\!\frac{1}{4\pi}\!\int\!\!d^2x d^2y\left[\rho(x)+\mn{J_0(x)}\right]
\ln|x-y|\left[\rho(y)+\mn{J_0(y)}\right].
\end{equation}
This expression is obtained after $2m+\lambda_++\lambda_-$ is
substituted for $E_{{\rm matter}}$ in eq.~\rf{ef}, and the solution
of the Poisson equation \rf{pois} is substituted for $\varphi$.
The energy \rf{en} can be regarded as a functional of
$\psi_\pm^\prime$. It is straightforward to check
that the minimum of this functional is determined exactly by the
Schr\"odinger equation \rf{nrel}. The coupled set of equations
\rf{pois}, \rf{nrel}, \rf{nor} and \rf{j} constitutes a rather
complicated eigenvalue problem. The ground state corresponds to the
global minimum, and can be found numerically. Instead of solving
these equations directly we will suggest simple variational wave
functions $\psi_\pm^\prime$ corresponding to a two-particle state
whose energy does not grow with the separation of the external charges
$L$.
We take the variational wave functions in the form of a superposition of
states with charges $\pm e$ localized in the vicinity of the sources
with charges $\pm eq$, $q<1$; see fig.~\ref{wvfig}. For symmetry
reasons we take $\psi_-^\prime(x)=\psi_+^\prime(-x)$:
\[
\psi_+^\prime(x)=\phi_2\left(x-\frac{L}{2}\right)
+\phi_1\left(x+\frac{L}{2}\right), \]
\begin{equation}\label{an}
\psi_-^\prime(x)=\phi_1\left(x-\frac{L}{2}\right)
+\phi_2\left(x+\frac{L}{2}\right).
\end{equation}
The functions $\phi_{1,2}$ are supposed to be well-localized
and normalized to
\begin{equation}\label{sub}
\int d^2 x\, \phi_{1,2}^2=\frac{1\mp q}{2}.
\end{equation}
For large $L$ the overlap between $\phi_1$ and $\phi_2$ is exponentially
small and can be neglected; therefore, the wave functions \rf{an} are
normalized to unity. This ansatz corresponds to the distribution of
charges described in the introduction. The $(1+q)/2$ portion of the
dynamical charge is localized near the source of the opposite sign and
the $(1-q)/2$ portion is localized near the one with the same sign.
Neglecting the exponentially small overlap of $\phi_{1,2}$ we get
for the total charge density:
\[
\rho(x)+\mn{J_0(x)}
=e\left\{\left[q\,\delta\left(x+\frac{L}{2}\right)
+\phi_1^2\left(x+\frac{L}{2}\right)
-\phi_2^2\left(x+\frac{L}{2}\right)\right]\right.
\]
\begin{equation}\label{fullcharge}
\left.
-\left[q\,\delta\left(x-\frac{L}{2}\right)
+\phi_1^2\left(x-\frac{L}{2}\right)
-\phi_2^2\left(x-\frac{L}{2}\right)\right]\right\}.
\end{equation}
We see that the total charge density noticeably differs from zero only
in the vicinity of the points $x=-L/2$ and $x=+L/2$. The screening
of the delta-peaked external sources is achieved if the integral
of the total charge density over the region much smaller than $L$
is zero. This is guaranteed by our choice of the normalization
condition \rf{sub}.
To estimate the minimal energy of the two-particle
screening state and, hence, the critical separation between the
external charges at which the rising potential breaks up, we take
a simple Gaussian ansatz for the wave functions $\phi_{1,2}$:
\begin{equation}\label{Gauss}
\phi_{1,2}(x)=\sqrt{\frac{1\mp q}{4\pi a_{1,2}^2}}\exp\left(
-\frac{x^2}{4a_{1,2}^2}\right),
\end{equation}
where the widths of the wave functions $a_{1,2}$ are the variational
parameters. To make all integrals finite we shall temporarily introduce
a Gaussian smearing of the external sources replacing
\begin{equation}\label{smear}
q\delta(x)\rightarrow \frac{q}{2\pi\zeta^2}\exp\left(
-\frac{x^2}{2\zeta^2}\right),\;\;\;\;\zeta\rightarrow 0.
\end{equation}
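The consistency of the normalization \rf{sub} with the Gaussian ansatz \rf{Gauss} and the smearing \rf{smear} can be illustrated numerically: the total charge (in units of $e$) contained in a disk around the source at $x=-L/2$ vanishes once the disk radius exceeds all the widths. The sketch below uses illustrative parameter values (the widths are the table values for $q=1/2$):

```python
import math

q = 0.5              # external charge fraction (illustrative)
zeta = 0.05          # smearing width of the delta-peaked source
a1, a2 = 3.19, 1.26  # cloud widths; table values for q = 1/2

def disk_charge(weight, width, R):
    # Charge inside a disk of radius R for a 2D Gaussian cloud of
    # total charge `weight` and width `width` (closed-form integral).
    return weight * (1.0 - math.exp(-R**2 / (2.0 * width**2)))

# Region around the source at x = -L/2: the source carries +q, the
# same-sign cloud phi_1^2 carries +(1-q)/2, and the opposite-sign cloud
# phi_2^2 carries -(1+q)/2.  The net charge vanishes for R >> widths.
R = 25.0
net = (disk_charge(q, zeta, R)
       + disk_charge((1 - q) / 2, a1, R)
       - disk_charge((1 + q) / 2, a2, R))
assert abs(net) < 1e-9
```

The cancellation $q+(1-q)/2-(1+q)/2=0$ is exact; the residual comes only from the exponentially small Gaussian tails outside the disk.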
Now one has to substitute the trial wave functions \rf{Gauss}
into the energy functional \rf{en} and to find the best
widths $a_{1,2}$ from its minimum. The integrals are readily performed
by using Fourier transforms. Recalling that the Fourier
transform of $\ln(x^2)/4\pi$ is $-1/k^2$, we get for the
interaction (potential-energy) term in the total energy
(the last term in eq.\rf{en}):
\begin{equation}\label{poten}
E_{pot}(L)=-e^2\int\frac{d^2k}{(2\pi)^2}\frac{e^{ik\cdot L}-1}{k^2}
\left[q\,e^{-\frac{\zeta^2k^2}{2}}+\frac{1-q}{2}e^{-\frac{a_1^2k^2}{2}}
-\frac{1+q}{2}e^{-\frac{a_2^2k^2}{2}}\right]^2.
\end{equation}
The term proportional to $\exp(ik\cdot L)$ accounts for the interaction
between the regions near $x=-L/2$ and $x=+L/2$, while the
subtracted term proportional to unity takes into account the
self-interaction of the charge distributions inside these regions.
Eq.\rf{poten} should be compared to the interaction of two bare
external charges:
\begin{equation}\label{bare}
E_{bare}(L)=-e^2\int\frac{d^2k}{(2\pi)^2}\frac{e^{ik\cdot L}-1}{k^2}
\left[q\,e^{-\frac{\zeta^2k^2}{2}}\right]^2
=\frac{e^2q^2}{4\pi}\left(\ln\frac{L^2}{4\zeta^2}+\gamma_E\right).
\end{equation}
Terms exponentially small in $L/\zeta$ have been neglected here.
The integral \rf{poten} is immediately calculated using \rf{bare},
yielding
\[
E_{pot}=\frac{e^2}{4\pi}\left[q^2\ln\frac{L^2}{4\zeta^2}
+\left(\frac{1-q}{2}\right)^2\ln\frac{L^2}{4a_1^2}
+\left(\frac{1+q}{2}\right)^2\ln\frac{L^2}{4a_2^2}\right.\]
\begin{equation}\label{poten1}
\left.+2q\frac{1-q}{2}\ln\frac{L^2}{2(\zeta^2+a_1^2)}
-2q\frac{1+q}{2}\ln\frac{L^2}{2(\zeta^2+a_2^2)}
-2\frac{1-q}{2}\frac{1+q}{2}\ln\frac{L^2}{2(a_1^2+a_2^2)}\right].
\end{equation}
The coefficient in front of $\ln L^2$ is zero, so that the energy is
now independent of $L$, up to exponentially small corrections which
are neglected. This is exactly the screening effect we are after,
and it is due to the choice of normalization of the charge
distributions, eq.\rf{sub}. Neglecting also the spread of the external
charges $\zeta^2$ as compared to $a_{1,2}^2$ we get
\[
E_{pot}=\frac{e^2}{4\pi}\frac{1}{4}\left[2(1-q)(1+q)\ln(a_1^2+a_2^2)
-(1-q)(1+3q)\ln a_1^2- (1+q)(1-3q)\ln a_2^2\right.\]
\begin{equation}\label{poten2}
\left. -2(1+3q^2)\ln 2-4q^2\ln\zeta^2\right].
\end{equation}
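The stated cancellation of the $\ln L^2$ terms in eq.\rf{poten1} is easy to verify with exact rational arithmetic: the six coefficients of the logarithms sum to zero identically in $q$. A minimal check (illustrative, not part of the original text):

```python
from fractions import Fraction

def lnL2_coeff(q):
    # Sum of the coefficients multiplying ln L^2 in eq. (poten1):
    # q^2 + ((1-q)/2)^2 + ((1+q)/2)^2 + q(1-q) - q(1+q) - (1-q)(1+q)/2
    return (q**2 + ((1 - q) / 2)**2 + ((1 + q) / 2)**2
            + q * (1 - q) - q * (1 + q) - (1 - q) * (1 + q) / 2)

for q in (Fraction(1, 10), Fraction(1, 3), Fraction(1, 2), Fraction(9, 10)):
    assert lnL2_coeff(q) == 0  # exact cancellation for any q
```

Exact `Fraction` arithmetic avoids any floating-point ambiguity in the cancellation.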
The kinetic energy term (the second term in eq.\rf{en}) is
\begin{equation}\label{kin}
E_{kin}=\frac{1}{4m}\left(\frac{1-q}{a_1^2}+\frac{1+q}{a_2^2}\right).
\end{equation}
The sum, $E_{kin}+E_{pot}$, has a minimum at
\begin{equation}\label{min}
a_{1,2}^2=\frac{4\pi}{me^2}
\frac{1\pm q+4q^2+(1\pm q)\sqrt{1+8q^2}}{4q^2(1\mp q)},
\end{equation}
which should be substituted into \rf{kin} and \rf{poten2} to
get a variational estimate of the energy of the two-particle state
screening the external charges. Naturally, the rest energy, $2m$,
should be added, too.
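As an illustrative cross-check (not part of the original text), the optimal widths \rf{min} can be evaluated numerically in the dimensionless combination $a_{1,2}\sqrt{me^2/4\pi}$; the values reproduce the entries of the table below:

```python
import math

def widths(q):
    # Dimensionless widths a_{1,2} * sqrt(m e^2 / 4 pi) from eq. (min):
    # a_{1,2}^2 = (1 ± q + 4q^2 + (1 ± q) sqrt(1 + 8q^2)) / (4q^2 (1 ∓ q))
    s = math.sqrt(1 + 8 * q**2)
    a1sq = (1 + q + 4 * q**2 + (1 + q) * s) / (4 * q**2 * (1 - q))
    a2sq = (1 - q + 4 * q**2 + (1 - q) * s) / (4 * q**2 * (1 + q))
    return math.sqrt(a1sq), math.sqrt(a2sq)

a1, a2 = widths(0.5)
assert abs(a1 - 3.19) < 0.01 and abs(a2 - 1.26) < 0.01  # table, q = 1/2
a1, a2 = widths(1 / 3)
assert abs(a1 - 3.49) < 0.01 and abs(a2 - 1.85) < 0.01  # table, q = 1/3
```

In particular, for $q=1/2$ the ratio $a_1/a_2\approx 2.5$ quoted in the text follows directly.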
It follows from \rf{min} that the distribution of the dynamical charge
having the {\it same} sign as the external charge ($a_1$) is
{\it broader} than that having the opposite charge ($a_2$). For
example, if the external charge is one half of the dynamical charge
($q=1/2$) the same-charge cloud is about 2.5 times broader than
the opposite-charge cloud. See the table, where examples for other
values of $q$ are given.
Finally, the energy of the two-particle ground state can be written as
\begin{equation}\label{ground}
E=2m+\frac{e^2q^2}{4\pi}\left(\ln\frac{4\pi C_q^2}{me^2q^24\zeta^2}
+\gamma_E\right),
\end{equation}
where $C_q$ is a number of the order of unity coming from substituting
the best values of $a_{1,2}$ given by eq.\rf{min} into
$E_{kin}+E_{pot}$. Notice that the dependence on the spread of the
delta-peaked external charges, $\zeta$, is the same as in the case
of the bare charges, eq.\rf{bare}. This is because the extended
dynamical charge distribution cannot screen the logarithmic potential
at small separations.
\begin{center}{\bf Table}
\vskip .5true cm
\begin{tabular}{|c|c|c|c|}
\hline
$q$ & $a_1\sqrt{\frac{me^2}{4\pi}}$ & $a_2\sqrt{\frac{me^2}{4\pi}}$ &
$C_q$ \\
\hline
\hline
0.1 & 7.96 & 6.53 & 0.536 \\
\hline
$\frac{1}{3}$ & 3.49 & 1.85 & 0.589 \\
\hline
$\frac{1}{2}$ & 3.19 & 1.26 & 0.649 \\
\hline
$\frac{2}{3}$ & 3.44 & 0.976 & 0.717 \\
\hline
0.9 & 5.65 & 0.776 & 0.824 \\
\hline
\end{tabular}\end{center}
\vskip .1true cm
The logarithmically rising potential between external charges at
large separations breaks up when the bare energy \rf{bare} exceeds the
energy of the screening state, eq.\rf{ground}. It happens at the
critical separation between the external charges
\begin{equation}\label{crit}
L_c=C_q\sqrt{\frac{4\pi}{me^2q^2}}\exp\left(\frac{4\pi m}
{e^2q^2}\right).
\end{equation}
Since we assume the non-relativistic limit, $m\gg e^2/4\pi$, this
distance is exponentially large. Notice that the widths of the
screening distributions $a_{1,2}$ as given by eq.\rf{min} are
much less than the critical distance $L_c$, which justifies
neglecting the overlaps between the screening clouds belonging
to the two centers.
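To get a feel for the magnitude of \rf{crit}, one can evaluate $L_c$ numerically in units where $e^2=1$; the point $q=1/2$, $m/e^2=1$ below is purely illustrative, with $C_q$ taken from the table:

```python
import math

def L_c(q, m, C_q):
    # eq. (crit) in units where e^2 = 1
    return C_q * math.sqrt(4 * math.pi / (m * q**2)) \
               * math.exp(4 * math.pi * m / q**2)

# Illustrative point: q = 1/2, m/e^2 = 1, C_q = 0.649 from the table.
Lc = L_c(0.5, 1.0, 0.649)
a1 = 3.19 * math.sqrt(4 * math.pi)  # width a_1 in the same units
assert Lc > 1e20          # exponentially large critical separation
assert a1 / Lc < 1e-15    # screening clouds are far smaller than L_c
```

Even this modest mass gives $L_c\sim 10^{22}$ in units of $e^{-2}$, confirming both the exponential largeness of $L_c$ and the consistency of neglecting the overlap between the clouds.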
The numerical values of the coefficient $C_q$ are given in the table
for several values of $q$, together with the widths
$a_{1,2}$ measured in units of $\sqrt{4\pi/me^2}$. Since we
have used a variational estimate for the ground-state energy, the
true minimum can only be lower; that is to say, the numerical
coefficient $C_q$ of order unity in eqs.\rf{ground},\rf{crit}
can be somewhat smaller than given in the table. However, the
dependence on the parameters in these equations follows
from dimensional analysis and is of a general nature.
At $L\approx L_c$ the logarithmic growth of the potential stops;
more precisely, the potential slowly grows,
approaching its asymptotic value \rf{ground} at infinity, with the deviation
corresponding to the residual
forces between the neutral charge clouds at $x=\pm L/2$.
\newsection{Path-integral approach}
The arguments we used above are in essence variational ones.
For this reason, we preferred to use the Hamiltonian formalism.
Although the discussion of the screening from the path-integral
point of view is beyond the scope of the present paper, we would like
to outline how the main ingredients of our analysis can be derived
from the path integral. Here we consider only the vacuum sector.
The vacuum average of the
Wilson loop infinitely stretched in the time direction, which determines
the energy of two static charges, is given by the path integral
\begin{eqnarray}\label{zed}
Z&=&\int DAD\Phi^\dagger D\Phi\,\exp\left\{
i\int d^3x\,\left[-\frac14\,F_{\mu\nu}F^{\mu\nu}+
(D_\mu\Phi)^\dagger D^\mu\Phi-m^2\Phi^\dagger\Phi\right]
\right.\nonumber \\*
&&\left.+iqe\int dx^0\,\Bigl(A_0(x^0,-L/2)-A_0(x^0,+L/2)\Bigr)
\right\}.
\end{eqnarray}
Integration over the scalar fields induces an effective action
for the gauge potentials:
\begin{equation}\label{ind}
\Delta S=i\Tr\ln(-D^2-m^2+i0).
\end{equation}
The next step, justified by the smallness of the coupling, is to
calculate the remaining integral over $A_\mu$ in the saddle-point
approximation. This amounts to solving classical equations of motion
taking into account the induced action \rf{ind} and the source
term in \rf{zed}. We are going to show that these saddle-point
equations are nothing but the ones derived in
Secs.~\ref{hsec},~\ref{kgsec} in the Hamiltonian formalism.
Owing to the symmetries of the problem the classical fields
are time-independent, and $A_i^{\rm cl}=0$.
For $A_0^{\rm cl}$ we get the Poisson equation,
\begin{equation}\label{sp}
-\partial_i\partial_iA_0^{\rm cl}+\rho+\mn{J_0}=0,
\end{equation}
where $\rho$ is the same as in \rf{defrho} and the induced charge
density is
\begin{equation}\label{chad}
\mn{J_0}=i\,\frac{\delta}{\delta A_0^{\rm cl}}\,\Tr\ln(-D^2-m^2+i0)
=e\left.\Bigl(\overrightarrow{D}_0G(x,y)
+G(x,y)\overleftarrow{D}_0\Bigr)\right|_{x=y},
\end{equation}
where
\begin{equation}\label{green}
G(x,y)=\mn{x\left|\frac{1}{-D^2-m^2+i0}\right|y}.
\end{equation}
We leave the calculation of the induced charge density to the Appendix,
where we show that it reduces to solving the Klein-Gordon
equation \rf{kg} with $\varphi$ replaced by $A_0^{\rm cl}$
and that the saddle-point equation \rf{sp} coincides with \rf{pois}.
\newsection{Conclusions}
To summarize, the infinitely rising potential between fractionally
charged external (probe) sources is screened in (2+1)-dimensional
scalar QED.
Of course, if the mass of the
dynamical fields is large, the rising potential persists at
intermediate scales. The critical distance is exponentially large in
$m/e^2$, in contrast to 3D spinor QED where the screening length is of
the order of $1/e^2$ \cite{AB97}.
The screening is a typical quantum-mechanical effect: the wave
functions of the screening particles are superpositions of two
distinct bumps localized near the external sources of both signs and
carrying fractional charge. In the case of a half-integer charge of the
probe ($q=\frac{1}{2}$) the bumps carry charges $\frac{3}{4}$ and
$\frac{1}{4}$, so that the total probability is unity but the external
sources are completely screened, see fig.~\ref{wvfig}.
It is interesting that the screening effect would probably not be
easy to observe in Euclidean lattice simulations of the theory
(given by the partition function \rf{zed}). Indeed, the essence
of the mechanism is the formation of two bumps in the screening
wave functions, which is a kind of tunneling effect. The larger the
separation $L$ between the sources, the longer the computer time one would need
for this effect to come into action, with the time growing
exponentially with $L$.
Nevertheless, it would be very useful to check the screening of
fractional charges by lattice simulations, in view of apparent
analogies with a more difficult case of non-Abelian gauge theories.
\subsection*{Acknowledgments}
D.D. would like to thank Victor Petrov for many discussions in the past
that stimulated this investigation.
K.Z. is grateful to NORDITA for hospitality while this work was
in progress. The work of K.Z. was supported in part by NATO Science
Fellowship, CRDF grant 96-RP1-253,
INTAS grant 96-0524,
RFFI grant 97-02-17927
and grant 96-15-96455 for the promotion of scientific schools.
\vskip 2true cm
\setcounter{section}{0}
\section{Introduction}
Magnetic fields have been observed on various astrophysical scales
\cite{Obs}. Their origin is one of the important problems in cosmology
\cite{PMF}.
Although an attractive mechanism operating in protogalaxies was proposed by
Kulsrud et al.\cite{Cen} for the galactic magnetic field, it cannot
explain how magnetic fields are generated in intergalactic and
intercluster regions\cite{PMF}. Thus, it is worth investigating the
generation and the evolution of the primordial magnetic field.
Since the strength of these fields is too small, we expect
that their amplification
occurs due to the dynamo mechanism. It is well known that
the mean magnetic field can be amplified enough to explain
the present observation in the kinetic dynamo theory\cite{Dynamo}.
However, as Kulsrud and Anderson showed\cite{KA}, the growth rate of
the fluctuations around the mean magnetic field is much larger than that of
the mean field in interstellar media. This means that
the kinetic dynamo theory breaks down. Hence, one
must investigate the effect of the back-reaction on the kinetic
theory.
So far, the back-reaction on the mean field has been considered\cite{MFT}.
Setting aside the problem of the kinetic dynamo theory in interstellar
media, it is obvious that
the kinetic theory cannot hold near the equipartition state in
general.
In this paper, we consider the back-reaction on the fluctuations
and derive the evolution equation for the energy of the magnetic
field, that is, a modified Kulsrud-Anderson equation.
Then we apply it to some examples.
The rest of the paper is organized as follows. In Sec. II, we
derive the modified Kulsrud-Anderson equation with
the lowest-order back-reaction under a phenomenological
assumption. In Secs. III and IV, we give applications
and remarks, respectively.
\section{Modified Kulsrud and Anderson Equation}
In the Fourier space the basic equations of the
incompressible MHD are
\begin{eqnarray}
\partial_t v_i({\bf k},t)=-i P_{ijk}({\bf k})
\int \frac{d^3q}{(2 \pi)^3}\Bigl[v_j({\bf k}-{\bf q},t)v_k({\bf q},t)
-b_j({\bf k}-{\bf q},t)b_k({\bf q},t) \Bigr]
\end{eqnarray}
and
\begin{eqnarray}
\partial_t b_i({\bf k},t)=i k_j \int \frac{d^3q}{(2\pi)^3}
\Bigl[v_i({\bf k}-{\bf q},t)b_j({\bf q},t)-v_j({\bf k}-{\bf q},t)
b_i({\bf q},t)\Bigr],
\end{eqnarray}
where $b_i({\bf k},t):=B_i({\bf k},t)/{\sqrt {4\pi \rho}}$,
$\rho$ is the mass density of the fluid and
$P_{ijk}({\bf k})=k_jP_{ik}({\bf k})
=k_j(\delta_{ik}-k_ik_k/|{\bf k}|^2)$. For simplicity, we have neglected
the diffusion terms in the above equations.
This simplification is justified by the fact that almost all
astrophysical systems have high magnetic Reynolds numbers.
Following Kulsrud and Anderson\cite{KA} and using
a small time step as the expansion parameter, we evaluate the
time evolution of the magnetic field by iteration:
\begin{eqnarray}
b_i({\bf k},t)& = & b_i({\bf k},0)+b_i^{(1)}({\bf k},t)
+b_i^{(2)}({\bf k},t)+\cdots \nonumber \\
& = & b_i^{(0)}({\bf k})+b_i^{(1)}({\bf k},t)
+b_i^{(2)}({\bf k},t)+\cdots,
\end{eqnarray}
where $b_i^{(0)}({\bf k})$ is the initial field. For the fluid
velocity, we take into account the back-reaction from the magnetic
field (the Lorentz force) as follows:
\begin{eqnarray}
v_i({\bf k},t)=v_i^{(1)}({\bf k},t)+\delta v_i({\bf k},t),
\end{eqnarray}
where $v_i^{(1)}({\bf k},t)$ is the statistically homogeneous and
isotropic component and satisfies
\begin{eqnarray}
\langle v_i^{(1)*}({\bf k},t') v_j^{(1)}({\bf q},t) \rangle
& = & (2\pi)^3\Bigl[J_1(k)P_{ij}({\bf k})+iJ_2(k)\epsilon_{ikj}k_k\Bigr]
\delta^3({\bf k} -{\bf q}) \delta (t-t') \nonumber \\
& = & (2 \pi)^3V_{ij}({\bf k}) \delta^3 ( {\bf k}-{\bf q})\delta (t-t').
\end{eqnarray}
These statistics hold in regions far from the boundary.
$J_1(k)$ and $J_2(k)$ denote the velocity dispersion and
the mean helicity of the fluid, respectively:
$$
\langle {\bf v}^{(1)}({\bf x},t) \cdot {\bf v}^{(1)}({\bf x},t) \rangle =
2 \int \frac{d^3k}{(2\pi)^3}J_1(k)\delta (0)
$$
and
$$
\langle {\bf v}^{(1)}({\bf x},t) \cdot \nabla \times {\bf v}^{(1)}({\bf
x},t) \rangle =-2 \int \frac{d^3k}{(2\pi)^3}k^2J_2(k)\delta (0).
$$
The second term on the right-hand side of eq. (4), $\delta v_i({\bf
k},t)$, is determined by eq. (1) and corresponds to
the back-reaction term from the magnetic field.
At each order, the MHD equations become
\begin{eqnarray}
\partial_t b_i^{(1)}({\bf k},t)= 2ik_j
\int \frac{d^3q}{(2\pi)^3}v_{[i}({\bf k}-{\bf q},t)b^{(0)}_{j]}
({\bf q})
\end{eqnarray}
\begin{eqnarray}
\partial_t b_i^{(2)}({\bf k},t)= 2ik_j
\int \frac{d^3q}{(2\pi)^3}v_{[i}({\bf k}-{\bf q},t)b^{(1)}_{j]}
({\bf q},t)
\end{eqnarray}
and
\begin{eqnarray}
\partial_t v_i ({\bf k},t) = 2i P_{ijk}({\bf k}) \int
\frac{d^3q}{(2\pi)^3}
b_{(j}^{(0)}({\bf k}-{\bf q})b^{(1)}_{k)} ({\bf q},t).
\end{eqnarray}
The last equation contains the effect of the lowest-order back-reaction
on the fluid and
gives an explicit expression for $\delta v_i({\bf k},t)$:
\begin{eqnarray}
\delta v_i ({\bf k},t) \simeq 2i P_{ijk}({\bf k}) \int^t_0dt' \int
\frac{d^3q}{(2\pi)^3}
b_{(j}^{(0)}({\bf k}-{\bf q})b^{(1)}_{k)} ({\bf q},t').
\end{eqnarray}
From eqs. (6)--(9), the time derivative of the energy becomes
\begin{eqnarray}
\partial_t \langle |{\bf b}({\bf k},t)|^2\rangle
& = & \langle b_i^{(1)*}({\bf k},t){\dot b}_i^{(1)}({\bf k},t) \rangle
+b_i^{(0)*}\langle {\dot b}_i^{(2)}({\bf k},t) \rangle +{\rm c.c.}
\nonumber \\
& = & 4\int^t_0 dt' \int \frac{d^3qd^3p}{(2\pi)^6}k_j k_k
b_{[j}^{(0)*}({\bf q})
\langle v_{i]}^*({\bf k}-{\bf q},t')v_{[i}({\bf k}-{\bf p}, t)\rangle
b_{k]}^{(0)}({\bf p}) \nonumber \\
& & -4 \int^t_0 dt' \int \frac{d^3qd^3p}{(2\pi)^6}k_j q_\ell
b^{(0)*}_i({\bf k})\langle v_{[i}^*({\bf q}-{\bf k},t)
v_{[j]}({\bf q}- {\bf p},t') \rangle b_{\ell ]}^{(0)}({\bf p}) \nonumber \\
& & ~~~~~~+{\rm c.c.},
\end{eqnarray}
where
\begin{eqnarray}
\langle v_i^*({\bf k},t')v_j({\bf q},t)\rangle
& \simeq & \langle v_i^{(1)*}({\bf k},t')v_j^{(1)}({\bf q},t)\rangle
\nonumber \\
& & +\langle v_i^{(1)*}({\bf k},t')\delta v_j ({\bf q},t)\rangle
+\langle \delta v_i^{*}({\bf k},t') v_j^{(1)}({\bf q},t)\rangle
\nonumber \\
& = & (2\pi)^3V_{ij}({\bf k})\delta^3 ({\bf k}-{\bf q}) \delta (t-t')
\nonumber \\
& & -4 \int^t_0dt'' \int\frac{d^3 p}{(2\pi)^3}P_{jk \ell}({\bf q})
p_m b^{(0)}_{[m}({\bf p}-{\bf k})V_{i( \ell ]}
({\bf k}) b^{(0)}_{k)}({\bf q}-{\bf p}) \nonumber \\
& & -4 \int^{t'}_0dt'' \int \frac{d^3 p}{(2\pi)^3}P_{i k \ell}
({\bf k})p_m
b^{(0)*}_{[m}({\bf p}-{\bf q})V_{(\ell ] |j|}({\bf q})
b^{(0)*}_{k)}({\bf k}-{\bf p}) \nonumber \\
& =: & (2\pi)^3V_{ij}({\bf k})\delta^3({\bf k}-{\bf q})\delta (t-t')
+\delta \langle v_i^*({\bf k},t')v_j({\bf q},t)\rangle.
\end{eqnarray}
Eq. (10) together with eq. (11) is the formal equation including the
effect of the back-reaction.
Let us consider a simple example with the following
initial condition:
\begin{eqnarray}
b^{(0)}_i({\bf x})=b_0 \delta_{i z}~~~{\rm or}~~~
b^{(0)}_i({\bf k})=b_0 (2\pi)^3\delta^3({\bf k})\delta_{i z}.
\end{eqnarray}
This condition holds approximately as long as the spatial scale of the
magnetic field is much larger than the typical scale of eddies.
In this case, eq. (10) becomes
\begin{eqnarray}
\partial_t \langle |{\bf b}({\bf k},t)|^2 \rangle
& = & 2(2\pi)^3\delta({\bf 0}) k_z^2V_{ii}({\bf k})
+2 \int^t_0dt'k_z^2 \delta \langle v_i^*({\bf k},t') v_i({\bf k},t)
\rangle b_0^2 \nonumber \\
& = & 4(2\pi)^3k_z^2J_1(k)b_0^2\delta^3({\bf 0})
-6(2\pi)^3 k_z^4b_0^4(\Delta t)_k^2J_1(k)\delta^3({\bf 0}),
\end{eqnarray}
where $(\Delta t)_k$ is the time scale of the eddy turnover; its expression
will be given below. We estimated
the time integral as
$\int^t_0 dt' [\cdots] \sim (\Delta t)_k [\cdots] $ in the second line
of the right-hand side of eq. (13), because the back-reaction works
only during the time scale of the eddy turnover.
Here we assume the Kolmogorov spectrum for the inertial range
\footnote{The inertial range is defined as the range of scales
smaller than the largest eddy ($\sim k_0^{-1}$)
and larger than the small scale ($\sim R^{-3/4}k_0^{-1}$)
below which the viscosity term is dominant. In this
range, energy is transferred from large eddies to small ones
without dissipation. This leads to a sort of
`equilibrium state' with the Kolmogorov spectrum\cite{LL} (Kolmogorov
theory).}, $k_0 < k < k_{\rm max} \sim R^{3/4}k_0$\cite{LL}, where
$k_0$ is the wave number of the largest eddy and $R$ is the Reynolds
number. From the definition of the velocity dispersion,
\begin{eqnarray}
\langle v^2 \rangle =2 \int\frac{d^3k}{(2\pi)^3}J_1(k)\delta (0)
=: \int^{k_{\rm max}}_{k_0} dk I(k),
\end{eqnarray}
we obtain the relation
\begin{eqnarray}
I(k) = \frac{1}{\pi^2} k^2 J_1(k) (\Delta t)_k^{-1} \simeq
\frac{2}{3}v_0^2 \frac{k_0^{2/3}}{k^{5/3}},
\end{eqnarray}
where $v_0$ is the typical velocity ($v_0 \sim {\sqrt {\langle
v^2\rangle}}$) and we used $\delta (0)
\sim (\Delta t)_k^{-1}$. The expression for $(\Delta t)_k$ is
obtained from an order-of-magnitude estimate in
eq. (14), namely $ (1/k(\Delta t)_k)^2 \sim k I(k)$.
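Up to factors of order unity, this relation gives the familiar eddy
turnover time explicitly,
$$
(\Delta t)_k \sim \frac{1}{k\,[kI(k)]^{1/2}} \sim
\frac{1}{v_0 k_0^{1/3} k^{2/3}} \sim \frac{1}{k v_k},
~~~~ v_k \sim v_0 \left(\frac{k_0}{k}\right)^{1/3},
$$
where $v_k$ is the velocity of eddies with wave number $k$ in the
Kolmogorov scaling.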
Integrating the above equation (13)
over ${\bf k}$, we obtain the modified
Kulsrud and Anderson equation
\begin{eqnarray}
\partial_t \rho_M = 2 \gamma \rho_M -2 \zeta \rho_M^2,
\end{eqnarray}
where
\begin{eqnarray}
\rho_M :=\frac{{\cal E}_M}{4\pi \rho}:=
\frac{1}{V}\int \frac{d^3k}{(2\pi)^3}\langle |{\bf b}({\bf k},t)|^2 \rangle
\end{eqnarray}
\begin{eqnarray}
\gamma := 2 \int \frac{d^3k}{(2\pi)^3}k_z^2J_1(k)
\end{eqnarray}
and
\begin{eqnarray}
\zeta := 3 \int \frac{d^3k}{(2\pi)^3}k_z^4J_1(k)(\Delta t)_k^2.
\end{eqnarray}
In the above derivation, we used $\delta^3({\bf 0}) \sim V$, where
$V$ is the typical volume of the system.
Now we evaluate the coefficients $\gamma$ and $\zeta$. Results are
given by
\begin{eqnarray}
\gamma \simeq \int^{k_{\rm max}}_{k_0} dk
k^2I(k)(\Delta t)_k \simeq \int^{k_{\rm max}}_{k_0}dk [kI(k)]^{1/2}
\sim v_0k_0^{1/3}k_{\rm max}^{2/3} \sim R^{1/2}v_0k_0
\end{eqnarray}
and
\begin{eqnarray}
\zeta \simeq \int^{k_{\rm max}}_{k_0} dk k^4 I(k) (\Delta t)_k^3 \simeq
\int^{k_{\rm max}}_{k_0} dk [kI(k)]^{-1/2} \sim
\frac{k_{\rm max}^{4/3}}{v_0k_0^{1/3}}
\sim R\frac{k_0}{v_0},
\end{eqnarray}
respectively. Defining the dimensionless quantity $ \mu_M:=
\rho_M/v_0^2$, we see that eq. (16) becomes
\begin{eqnarray}
\partial_t \mu_M=2\gamma \mu_M-2 \zeta' \mu_M^2,
\end{eqnarray}
where $\zeta'=\zeta v_0^2 \sim R k_0 v_0$. The second term on the
right-hand side of eq. (22) comes from the
back-reaction. One can easily see from the
above equation that the back-reaction opposes the
original kinetic term and
makes the energy of the magnetic field balance with the energy
of the fluid.
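The order-of-magnitude estimates (20) and (21) can be checked by direct
numerical integration; the snippet below is only a sketch, using
illustrative units $v_0 = k_0 = 1$ and a Reynolds number chosen for the
example, and it recovers the quoted scalings with order-unity prefactors.

```python
import numpy as np

# Numerical check of the scaling estimates in eqs. (20) and (21),
# using the Kolmogorov form I(k) = (2/3) v0^2 k0^(2/3) / k^(5/3)
# and k_max = R^(3/4) k0.  Units v0 = k0 = 1 are illustrative only.
v0, k0, R = 1.0, 1.0, 1.0e4
kmax = R**0.75 * k0
k = np.logspace(np.log10(k0), np.log10(kmax), 20000)
I = (2.0 / 3.0) * v0**2 * k0**(2.0 / 3.0) / k**(5.0 / 3.0)

gamma = np.trapz(np.sqrt(k * I), k)       # eq. (20): ~ R^(1/2) v0 k0
zeta = np.trapz(1.0 / np.sqrt(k * I), k)  # eq. (21): ~ R k0 / v0

print(gamma / (R**0.5 * v0 * k0), zeta / (R * k0 / v0))  # O(1) ratios
```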
Although the procedure used here guarantees eq. (22) only for
small time steps with $\mu_M \ll 1$, we try to
extrapolate it. As a result we find the solution
\begin{eqnarray}
\mu_M=\frac{\gamma}{\zeta'}\frac{1}{1-\Bigl(1-
\frac{1}{\mu_M(0)}\frac{\gamma}{\zeta'} \Bigr)e^{-2 \gamma t}}.
\end{eqnarray}
One can easily see that the magnetic `energy' approaches the terminal
value $\mu_M^*=\gamma /\zeta' \sim R^{-1/2}$ on a time scale
$\sim \gamma^{-1}$.
This corresponds to the saturation value estimated
naively under the assumption that the drain by the magnetic field is
comparable to the turbulent power\cite{Cen,KA}.
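As a sanity check on the extrapolation, eq. (22) can be integrated
numerically and compared with the closed form (23); the sketch below does
this with a fourth-order Runge--Kutta step, where the values of $\gamma$,
$\zeta'$, and $\mu_M(0)$ are arbitrary illustrative numbers.

```python
import numpy as np

# Integrate eq. (22), d(mu)/dt = 2*gamma*mu - 2*zp*mu^2, with a simple
# RK4 scheme and compare with the closed-form solution (23).
# gamma, zp (= zeta') and mu0 are arbitrary illustrative values.
def mu_exact(t, mu0, gamma, zp):
    A = 1.0 - (gamma / zp) / mu0
    return (gamma / zp) / (1.0 - A * np.exp(-2.0 * gamma * t))

def mu_rk4(t_end, mu0, gamma, zp, n=10000):
    f = lambda m: 2.0 * gamma * m - 2.0 * zp * m * m
    h, m = t_end / n, mu0
    for _ in range(n):
        k1 = f(m); k2 = f(m + 0.5*h*k1); k3 = f(m + 0.5*h*k2); k4 = f(m + h*k3)
        m += (h / 6.0) * (k1 + 2.0*k2 + 2.0*k3 + k4)
    return m

gamma, zp, mu0 = 1.0, 10.0, 0.01
# the integrated solution approaches the terminal value gamma/zp = 0.1
```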
\section{Applications}
In this section we apply eq. (22) to two examples in which
the magnetic field is amplified by the dynamo mechanism. First, we treat
the time evolution of the magnetic field during
a first-order phase transition in the very early universe.
We also briefly consider the
amplification of the magnetic field in interstellar media.
\subsection{Electroweak Plasma}
There are attractive mechanisms for the generation of the
primordial magnetic field in the course of cosmological
phase transitions\cite{PT}. In these scenarios the strong magnetic
field is expected to be amplified by MHD turbulence during a first-order
phase transition. The details of the amplification
have been discussed using the Kulsrud and
Anderson equation in ref. \cite{Olinto}.
We reconsider the amplification of the magnetic field during
the phase transition using the modified Kulsrud and Anderson
equation. The time scale for equipartition is
$t_{\rm equi} \sim \gamma^{-1} \sim
R^{-1/2}v_0^{-1}k_0^{-1}$. Since the Reynolds number is $R\sim 10^2$
\cite{Olinto}, we see that this is of the same order
as the time scale of the phase transition. Thus, the
magnetic field can be amplified sufficiently, and the final energy is
given by
\begin{eqnarray}
{\cal E}_M^* \sim R^{-1/2} {\cal E}_v \sim 0.1 \times {\cal E}_v,
\end{eqnarray}
where ${\cal E}_v$ is the energy of the plasma fluid.
\subsection{Interstellar Mediums}
As we stated in Introduction, the kinetic dynamo theory breaks
down in interstellar mediums\cite{KA}. For interstellar mediums,
typical values of key quantities are $2\pi/k_0 \sim 100{\rm pc}$, $v_0
\sim 10^6{\rm cm/s}$ and $R \sim v_0/k_0 \nu \sim 10^8$, where
$\nu$ denotes the kinetic ion viscosity;
$\nu \sim 10^{18}{\rm cm}^2{\rm s}^{-1}$\cite{Cen}.
Then the typical time scale
is given by $t_{\rm ISM} \sim \gamma^{-1} \sim 10^2{\rm yr}$.
Since the time scale of the mean field is $\sim 10^{10}{\rm yr}$\cite{KA},
we realize again the mean field theory is meaningless in the present
perturbative approach. The final energy of the magnetic field is given by
${\cal E}_M^* \sim 10^{-4} \times {\cal E}_v$.
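The numbers quoted here follow from simple arithmetic with the fiducial
values above; the sketch below checks them, ignoring prefactors of order
unity throughout.

```python
import math

# Order-of-magnitude check of the interstellar-medium estimates above;
# prefactors of order unity are ignored throughout.
pc = 3.086e18          # cm
yr = 3.156e7           # s

k0 = 2.0 * math.pi / (100.0 * pc)   # 2*pi/k0 ~ 100 pc
v0 = 1.0e6                          # cm/s
nu = 1.0e18                         # cm^2/s, kinetic ion viscosity

R = v0 / (k0 * nu)                  # Reynolds number ~ 10^8
t_ism = R**-0.5 / (v0 * k0)         # gamma^{-1} ~ R^{-1/2}/(v0 k0), in s
mu_star = R**-0.5                   # terminal E_M / E_v ~ 10^{-4}
```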
\section{Concluding Remark}
In this paper, we considered the lowest-order back-reaction on the
kinetic dynamo theory and modified the equation for the
energy of the magnetic field. As a result we
obtained a successful time evolution of the energy of the
magnetic field.
That is, the terminal value of the magnetic energy obtained from
eq. (23) equals the previous qualitative estimate
of the saturation energy\cite{Cen,KA}.
We also presented the $k$-dependent expression (eq. (13)),
with which we can evaluate the evolution of the magnetic field
on various scales.
Since the present formalism is general, our equation is useful
in other situations, for example, the fireball model for
$\gamma$-ray bursts\cite{Gamma}.
Finally, we should comment on our assumption for the initial
condition (eq. (12))
and the extrapolation of eq. (22). We chose the initial condition
in order to obtain a simple result
like eq. (22). Although this assumption holds approximately in
some cases, it may not be correct in general.
We should also note that we considered only the effect of the lowest-order
back-reaction. Properly
speaking, if one wishes to analyse the vicinity of equipartition,
one must take into account higher-order back-reaction effects.
The behaviour near equipartition might be clarified by using
something like the renormalization group approach.
The study of more general initial
conditions and higher-order back-reaction should be done in the
future. At the same time, the spatial structure, such as the typical
coherence length of the magnetic field, should also be discussed.
\section*{Acknowledgements}
We would like to thank Katsuhiko Sato for his continuous
encouragement and Masahiro Morikawa for his comments. TS
is grateful to Gary Gibbons and the DAMTP relativity group for their
hospitality. We also thank T. Uesugi for a careful reading of the
manuscript of this paper. This work was partially supported by the Japanese
Grant-in-Aid for Scientific Research on Priority Areas (No. 10147105)
of the Ministry of Education, Science, Sports, and Culture
and by a Grant-in-Aid for Scientific Research from the Ministry of
Education, Science, Sports, and Culture, No. 08740170 (RN).
\section{Introduction}
It is well established that galaxy populations vary with the
density of neighbouring galaxies in clusters of galaxies (Dressler 1980) and
depend on the distance from the center of clusters of galaxies
(Whitmore et al. 1993).
The increase in the fraction of blue, star-forming cluster galaxies
with redshift (Butcher \& Oemler 1978, 1984a, b) has also been well
established.
Several physical processes have been proposed to explain these effects,
including shocks induced by ram pressure from the
intracluster medium (Bothun \& Dressler 1986; Gavazzi \& Jaffe 1987),
effects of the cluster tidal field (Byrd \& Valtonen 1990),
galaxy-galaxy interactions (Barnes \& Hernquist 1991; Moore et al. 1996;
Moore, Katz, \& Lake 1998), and mergers of individual galaxies in the
hierarchical clustering universe (Kauffmann et al. 1993; Kauffmann 1996;
Baugh et al. 1996).
The purpose of our study is to investigate the effects of galaxy-galaxy
and galaxy-cluster interactions on cluster member galaxies and to
investigate when these interactions become important during the cluster
evolution.
As a first step toward this purpose, we perform cosmological N-body
simulations and study how and when galactic dark halos are
affected by these interactions.
In particular, we pay attention to the evolution of large galactic
halos ($M_{\rm h} \ge 10^{11} M_{\odot}$).
Unfortunately, previous, almost dissipationless, numerical simulations have
failed to follow the evolution of galactic halos in dense environments
such as galaxy groups and clusters owing to their low resolution
(White 1976; van Kampen 1995; Summers et al. 1995; Moore, Katz, \& Lake 1996).
To avoid this apparent erasing of substructures in dense environments,
known as the ``overmerging problem'', many approaches have been tried.
For example, Couchman \& Carlberg (1992) tagged particles in galactic
halos before a cluster forms, and then applied a halo-finding algorithm only
to the tagged particles at the final epoch. Another idea is to introduce
``artificial cooling'' into a collisionless simulation by collecting
particles in their collapsing regions into more massive super-particles
(van Kampen 1995).
With these approaches, however, we cannot explore the strength of
galaxy-galaxy and galaxy-cluster interactions.
Therefore, we use high-resolution N-body simulations and an improved method of
tracing galaxies to investigate those interactions.
In principle, we should consider the hydrodynamic processes of the baryonic
component, because radiative cooling allows the baryonic component to sink
into the center
of a dark matter halo, where it forms a compact and tightly bound stellar
system which is hardly destroyed by the tidal force and helps its host halo
to survive to some degree (Summers et al. 1995).
However, hydrodynamic simulations, e.g., smoothed particle
hydrodynamics (SPH) simulations, need much more
CPU time than collisionless simulations.
It is thus difficult to perform wide-dynamic-range simulations by this
approach.
Therefore, we restrict ourselves to following the evolution of dark matter
halos.
We trace not only surviving halos but also strongly stripped halos,
which might survive as galaxies if hydrodynamic processes were considered,
because we find many strongly stripped halos in the cluster of galaxies
studied in this paper.
Recently, Ghigna et al. (1998) have independently reported results of
simulations similar to ours.
However, there are two large differences between our study and theirs.
The first is that the mass of their cluster is about half of ours;
therefore our galactic halos suffer the influence of a
denser environment, and our cluster forms at a lower redshift than theirs.
The second is that they investigated the evolution of the cluster
halos from $z = 0.5$ to $z = 0$, whereas
we investigate it from before the formation epoch of galactic halos to the
present time.
Clearly, our investigation gives more information
about the galaxy-galaxy and galaxy-cluster interactions which affect the
evolution of galaxies.
In Section 2, we present the method of the numerical simulations, our
halo-finding
algorithm, and the algorithm to create halo merging history trees.
The latter algorithm is improved
to handle galactic halos in very dense environments.
Our results are presented in Section 3 and discussed in Section 4.
\section{Simulation}
\subsection{The simulation dataset}
We first describe the two simulations used in this paper and their specific
purposes. The overall parameters and the mass of the most massive virialized
object at $z = 0$ in both simulations are listed in Table \ref{data}.
The background model for both simulations is the standard cold dark matter
(SCDM) universe with Hubble constant $H_0 = 100 h$ km/s/Mpc,
where $h = 0.5$.
This model is normalized with $\sigma_8 = 1/b$, where $b = 1.5$.
Simulation A represents an average piece of the universe
corresponding to the ``field'' environment within a sphere of radius 7 Mpc;
we use it to check our halo-finding algorithm and for comparison with the
other simulation.
In simulation B, we adopt the constrained random field method to generate
an initial density perturbation field in which a rich cluster forms at
the center of a simulation sphere of radius 30 Mpc (Hoffman \& Ribak 1991).
The constraint which we impose is a $3 \sigma$ peak of the 8 Mpc Gaussian
smoothed density field at the center of the simulation sphere.
To get enough resolution with a relatively small number of particles, we
use a multi-mass initial condition for simulation B
(Navarro, Frenk, \& White 1996; Huss et al. 1997).
This initial condition is made as follows.
First, only long-wavelength components are used for the realization
of the initial perturbation in the simulation sphere, using $\sim 10^5$
particles, and we perform a simulation with these low-resolution
particles.
After this procedure, we tag the particles which are inside a sphere of
radius 3 Mpc centered on the cluster center at $z = 0$.
Next, we divide the tagged particles according to the density perturbation
produced by additional shorter-wavelength components.
The mass of a high-resolution particle is 1/64 of that of a low-resolution
one.
As a result, the total number of particles becomes $\sim 10^6$.
Our analyses are performed only on the high-resolution particles.
The mass of a high-resolution particle is $m \simeq 10^9 M_{\odot}$, and the
softening length, $\epsilon$, is set to $5$ kpc.
\subsection{N-body calculation}
To follow the motion of the particles, we use a tree code
(Barnes \& Hut 1986) with angular accuracy parameter $\theta = 0.75$,
and we include quadrupole and octupole moments in the expansion of the
gravitational field.
The numerical calculation is started from redshift $z = 20$
and is integrated using individual time steps
(Hernquist \& Katz 1989).
The time step for particle $i$ is given as
\begin{equation}
\triangle t_i = C \left(\frac{\epsilon^2}{a_i}\right)^{1/2},
\end{equation}
where $C$ is a constant and $a_i$ is the acceleration of
particle $i$. The constant $C$ is set to 0.25.
In this case, the error in the total energy is less than 1 \% throughout
our simulations.
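As an illustration, the time-step criterion of eq. (1) can be written as a
short function; the units, the default value of $\epsilon$, and the sample
accelerations below are purely illustrative.

```python
import numpy as np

# A minimal sketch of the individual time-step criterion of eq. (1):
# dt_i = C * (eps^2 / a_i)^(1/2), with C = 0.25 and eps = 5 (kpc) as in
# the text.  The sample accelerations are made-up numbers.
def individual_timestep(acc, eps=5.0, C=0.25):
    """acc: (N, 3) array of particle accelerations."""
    a_mag = np.linalg.norm(acc, axis=1)
    return C * np.sqrt(eps**2 / a_mag)

acc = np.array([[1.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
dt = individual_timestep(acc)
# particles with larger acceleration get shorter time steps
```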
\subsection{Halo identification}
Finding galactic halos in dense environments is a challenging task. The most
widely used halo-finding algorithms, the friends-of-friends algorithm
(e.g., Davis et al. 1985) and the spherical overdensity algorithm
(Cole \& Lacey 1994; Navarro, Frenk, \& White 1996), are not
acceptable here (Bertschinger \& Gelb 1991),
because they cannot separate substructures within large halos.
The DENMAX algorithm (Bertschinger \& Gelb 1991; Gelb \& Bertschinger 1994)
makes
significant progress, but requires a substantial amount of CPU time in
actual calculations. Since we search for halos many times through our
simulations, we adopt a lighter numerical procedure with good performance.
Therefore, we use the adaptive friends-of-friends algorithm (Suto et al. 1992;
Suginohara \& Suto 1992; van Kampen 1995), which enables us to avoid the
problem of the friends-of-friends algorithm by using local densities to
determine
local linking lengths. Moreover, we remove unbound particles from the halos
found
by this algorithm. This procedure is important for galactic halos in groups
and clusters.
In our adaptive friends-of-friends algorithm, a local linking length,
$b_{ij}$, is calculated as follows,
\begin{equation}
b_{ij} = \beta \times \min \left[L_p,
\frac{\rho_i(r_s)^{-1/3} + \rho_j(r_s)^{-1/3}}{2} \right],
\end{equation}
where
\begin{equation}
\rho_i(r_s) = \frac{1}{(2 \pi r_s^2)^{3/2}} \sum^N_{j=1}
\exp \left(-\frac{|\mbox{\boldmath $r$}_i -
\mbox{\boldmath $r$}_j|^2} {2 r_s^2} \right),
\end{equation}
$L_p$ is the mean particle separation,
$r_s$ is the filtering length to
obtain a smoothed density field and $\mbox{\boldmath $r$}_i$ is
the position of the particle $i$.
We specify the values of the two parameters, $\beta$ and $r_s$,
in our algorithm as follows.
For $\beta$, we require that our algorithm be equivalent to
the conventional friends-of-friends algorithm in the field region, so $\beta$
is set to 0.2, which corresponds to the mean separation of particles in
a virialized object.
The filtering length, $r_s$, should be determined according to the size of
the objects in which we are interested.
Thus, it must be larger than the size of galactic halos and smaller than the
size of clusters. In this paper, we set it to $1$ Mpc after several tests.
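The local linking length of eqs. (2) and (3) can be sketched directly; the
brute-force $O(N^2)$ density sum, the particle positions, and the value of
$L_p$ below are illustrative only, not the simulation's actual data.

```python
import numpy as np

# Sketch of eqs. (2)-(3): a Gaussian-filtered local density and the
# pairwise linking length b_ij, capped at beta * L_p.  Brute-force
# O(N^2) version for illustration; positions and L_p are made up.
def gaussian_density(pos, r_s):
    # rho_i(r_s) = (2 pi r_s^2)^(-3/2) sum_j exp(-|r_i - r_j|^2 / (2 r_s^2))
    d2 = ((pos[:, None, :] - pos[None, :, :])**2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * r_s**2)).sum(axis=1) / (2.0*np.pi*r_s**2)**1.5

def linking_length(rho_i, rho_j, L_p, beta=0.2):
    # b_ij = beta * min(L_p, (rho_i^(-1/3) + rho_j^(-1/3)) / 2)
    return beta * min(L_p, 0.5 * (rho_i**(-1.0/3.0) + rho_j**(-1.0/3.0)))

pos = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [5.0, 5, 5]])
rho = gaussian_density(pos, r_s=1.0)
# dense pairs get shorter linking lengths than isolated ones
b_dense = linking_length(rho[0], rho[1], L_p=10.0)
b_sparse = linking_length(rho[3], rho[3], L_p=10.0)
```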
After identifying galactic halos, we remove unbound particles.
First, we compute the potential, $\phi_i$, for each particle $i$
due to all members of the halo:
\begin{equation}
\phi_i = \sum^{N_{\rm h}}_{j \ne i}\ \phi(r_{ij}),
\end{equation}
where $N_{\rm h}$ is the number of particles belonging to the halo.
We then iteratively remove unbound particles as follows. We compute the
energy $E_i = (1/2) m \ |\mbox{\boldmath $v$}_i - \mbox{\boldmath $v$}_{\rm h}
|^2 + \phi_i$ for each particle in the halo,
where $\mbox{\boldmath $v$}_{\rm h}$ is the mean velocity of the member
particles.
We then remove all particles with $E_i > 0$.
This procedure is repeated until no more particles are removed.
Finally, an object is identified as a galactic halo when it contains more
particles
than a threshold number, $n_{\rm th}$, which is set to 15 in this
paper. We show some tests of our halo-finding algorithm in Section 3.1.
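The iterative unbinding just described can be sketched as follows; the
softened $1/r$ potential, the units ($G = m = 1$), and the three-particle
example are hypothetical choices for illustration.

```python
import numpy as np

G = 1.0  # illustrative units; equal particle masses m

# Sketch of the iterative unbinding: compute phi_i from all other
# members (eq. (4), here with a softened 1/r potential), form
# E_i = (1/2) m |v_i - v_h|^2 + phi_i, drop particles with E_i > 0,
# and repeat until the membership converges.
def remove_unbound(pos, vel, m=1.0, eps=0.005):
    idx = np.arange(len(pos))
    while len(idx) > 1:
        d = np.linalg.norm(pos[idx][:, None] - pos[idx][None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        phi = -(G * m * m / np.sqrt(d**2 + eps**2)).sum(axis=1)
        v_h = vel[idx].mean(axis=0)          # mean velocity of the members
        E = 0.5 * m * ((vel[idx] - v_h)**2).sum(axis=1) + phi
        bound = E <= 0.0
        if bound.all():
            break
        idx = idx[bound]
    return idx

# a tightly bound pair plus one escaping particle
pos = np.array([[0.00, 0, 0], [0.01, 0, 0], [0.02, 0, 0]])
vel = np.array([[0.0, 0, 0], [0.0, 0, 0], [30.0, 0, 0]])
members = remove_unbound(pos, vel)
```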
\subsection{Creation of merging history trees of galaxies}
Our method to create galaxy merging history trees resembles the method
used by Summers et al. (1995). The main improvement is that we
trace ``halo-stripped galaxies'' as well as galactic halos, because halo
disruption is probably due to insufficient resolution
(Moore, Katz, \& Lake 1996) and the lack of dissipative processes
(Summers et al. 1995).
To follow the evolution of galaxies in our simulation, we identify the
galactic halos at 26 time stages with a 0.5 Gyr time interval.
The three most bound particles in each galactic halo are tagged as tracers.
We consider three cases in following their merging histories.
First, for a galactic halo at a time stage $t_{i+1}$,
where $i$ is the number of the time stage: if the halo contains
at least two tracers which were contained in the same halo at the previous
time stage, $t_i$, then the halo at $t_{i+1}$ is the ``next halo'' of the halo
at $t_i$. In this case, the halo at $t_i$ is an ``ancestor'' of the halo at
$t_{i+1}$.
Next, we consider the case in which several halos at $t_{i+1}$ each contain
one of the three tracers of a halo at $t_i$; then the halo containing the
tracer that was most bound in the halo at
$t_i$ is defined as the ``next halo'' of the halo at $t_i$.
Finally, we consider the last case.
When none of the three tracers of a halo at $t_i$ is contained in any halo at
the next time stage ($t_{i+1}$),
we define the most bound particle of this halo at $t_i$ as a ``stripped
tracer''.
We call both the halos and the stripped tracers ``galaxies''
throughout this paper.
In order to estimate mass of stellar component of a galaxy,
we assume that the mass of the stellar component is proportional to
the sum of the masses of its all ``ancestors"
(hereafter, we call this mass the "summed-up-mass").
Except for the case in which a large fraction of the stellar component of the
galaxy was stripped during the halo stripping, this assumption may be valid.
To consider mass increase due to accretion of dark matter to the halo
after its first identification,
we replace the summed-up-mass with the halo mass
when the summed-up-mass is smaller than the halo mass.
The reason using three tracers for each halo is to avoid possibility that
we select an irregular tracer which happens to appear near the density peak
of the halo.
However, for almost all halos we get the same result even if we use a single
tracer for each halo. Therefore, three tracers are enough to avoid this
possibility.
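The three linking cases above can be condensed into a toy rule; the halo IDs
and tracer lists below are invented for illustration, and ``at least two''
implements a majority reading of the first case.

```python
# Toy version of the tracer rules above.  tracers: the three particle
# IDs ordered from most to least bound at t_i; membership: particle
# ID -> halo ID at t_{i+1} (absent if in no halo).  IDs are made up.
def next_halo(tracers, membership):
    halos = [membership.get(p) for p in tracers]
    # case 1: a halo holding at least two tracers is the "next halo"
    for h in set(h for h in halos if h is not None):
        if halos.count(h) >= 2:
            return ("halo", h)
    # case 2: tracers split one per halo -> the most bound tracer wins
    for h in halos:
        if h is not None:
            return ("halo", h)
    # case 3: no tracer survives in any halo -> "stripped tracer"
    return ("stripped_tracer", tracers[0])
```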
\section{Results}
\subsection{Galactic halos in N-body simulation}
In this subsection we show the results of some tests of our halo-finding
algorithm, to illustrate its features and to check its reliability.
First, we present the distribution of dark matter and halos in the simulated
cluster at $z = 0$ in Fig. \ref{halos}.
The upper panel is an $x$-$y$ projection of the density map in a cube with
sides
$2 \times r_{200}$ ($r_{200}$ is the radius of the sphere having overdensity
$\delta = 200$) centered on the cluster.
The gray scale represents the logarithmically scaled density given by an
SPH-like method
with 64 neighbouring particles (Hernquist \& Katz 1989).
The $x$-$y$ projection of the particles contained in the galactic halos
identified by our halo-finding algorithm is plotted in the lower panel
of Fig. \ref{halos}.
From Fig. \ref{halos} it is found that many galaxy-size density peaks survive
even
in the central part of the rich cluster, and our halo-finding algorithm
can pick up these peaks as halos.
Next, we compare the density profiles of the halos in simulation A
(hereafter referred to as field halos) with the density profile proposed by
Navarro, Frenk, \& White (1996) (hereafter NFW).
The NFW profile approximates well the profiles of virialized objects obtained
in cosmological N-body simulations, and is written as follows:
\begin{equation}
\rho(r) = \frac{\rho_{\rm c} \delta_{\rm c}}
{\left(\frac{r}{r_{\rm s}}\right)\left(\frac{r}{r_{\rm s}} + 1\right)^2},
\label{nfw}
\end{equation}
where $\rho_{\rm c}$ is the critical density of the universe,
\begin{equation}
\delta_{\rm c} = \frac{200}{3}\frac{c^3}{\ln(1+c)-c/(1+c)},
\end{equation}
and
\begin{equation}
r_{\rm s} = r_{200}/c.
\end{equation}
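Eqs. (5)--(7) translate directly into code; the concentration $c = 5$ and
the radii in the example below are illustrative values, not fitted ones.

```python
import numpy as np

# Direct transcription of the NFW profile, eqs. (5)-(7): given r200 and
# a concentration c, return rho(r) in units of the critical density.
# The numbers in the example are illustrative, not fitted values.
def nfw_density(r, r200, c):
    delta_c = (200.0 / 3.0) * c**3 / (np.log(1.0 + c) - c / (1.0 + c))
    r_s = r200 / c
    x = r / r_s
    return delta_c / (x * (1.0 + x)**2)    # rho(r) / rho_c

# by construction the mean overdensity within r200 is 200
r = np.linspace(1e-3, 1000.0, 200001)                  # kpc
mean_over = 3.0 * np.trapz(nfw_density(r, 1000.0, 5.0) * r**2, r) / 1000.0**3
```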
In Fig. \ref{field}, we plot the density profiles of the field halos
obtained by our halo-finding algorithm (plus signs), the profiles
based on the spherical overdensity algorithm with $\delta = 200$ (crosses),
and
the NFW fits to the latter profiles (solid lines).
We find that the halos identified by our halo-finding algorithm are well
fitted by the NFW model except for the very massive ones; these massive halos
have radii smaller than $r_{200}$ because the algorithm separates the
dominant halos from their companions.
It is also found that these halos have cores whose sizes
are comparable to the softening length, $\epsilon$.
These cores are numerical artifacts due to the softened potential,
and they make it easier for tidal forces to disrupt these halos.
As we mentioned above, our method can pick up galaxy-size density peaks
even in the cluster environment, and most selected halos without
substructures
in the field show NFW profiles.
However, since this method is designed only to avoid
the clouds-in-clouds problem, we should use an alternative, independent
method for cluster halos when we discuss their radii and outer density
profiles.
We can define the radius of a halo within the cluster
using the halo density profile $\rho(r)$, where $r$ is the distance
from the center of the halo, by measuring the radius at which
$\rho(r)$ flattens due to the dominance of the cluster background density
(Klypin et al. 1997; Ghigna et al. 1998).
At the radius where the density profile flattens, the circular velocity
profile,
$v_{\rm c} = (GM(r)/r)^{1/2}$, turns around and increases
(Ghigna et al. 1998).
The radius at which $\rho(r)$ flattens and that at which $v_{\rm c}$ takes its
minimum value are essentially equal (see Fig. \ref{gpv}).
Therefore, we take the radius at which $v_{\rm c}$ takes its minimum value
as the radius of a cluster halo.
It should be noted that this method allows overlap of halos;
that is, if we estimate the mass of a halo by this method, the
mass of the halo includes the mass of its satellites' halos.
Therefore, we cannot determine the mass of a halo by the $v_{\rm c}$ method.
Our halo-finding algorithm seems to underestimate the extent of cluster
halos compared with that obtained by the $v_{\rm c}$ method.
Does this feature cause serious problems in estimating the summed-up mass of
galaxies?
Before cluster-size objects form, we can estimate their sizes correctly,
because such an environment is similar to the field environment and our
halo-finding
algorithm gives reasonable halos in the field (Fig. \ref{field}).
After they fall into the cluster, there are three ways to increase their
summed-up mass: merging with other halos, merging with
stripped tracers, and accretion of dark matter particles.
Since our method identifies halos according to density peaks,
we can treat the merging of halos (i.e., peaks) properly, independent of their
sizes. Only when stripped tracers are close enough to the density peak of a
halo
should we regard this as merging; therefore, the underestimate of the extent
of the halos may not matter.
When cluster halos increase their mass by accretion of dark matter, we
cannot estimate the increase of their summed-up mass properly; however,
such cases may be rare, because the sizes of halos are diminished by tidal
interactions in the cluster, as we will show in Section 3.4.3.
Thus, we conclude that we can estimate the summed-up mass of the cluster
halos by our method.
\subsection{Evolution of the whole cluster}
We define a sphere having a mean overdensity of 200 as a virialized object,
and we show the mass, $M_{200}$, and radius, $r_{200}$, of
the most massive virialized object at each time stage of simulation B
in Table \ref{tbcl}.
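The overdensity-200 definition above translates directly into a particle-based measurement. The sketch below is an assumption-laden illustration, not the paper's code; in particular, whether the overdensity is taken relative to the critical or the mean background density is not stated in this passage (the critical density is used here).

```python
import numpy as np

def virial_mass_radius(pos, m_part, center, rho_crit, delta=200.0):
    """r_200 is the radius of the sphere around `center` whose mean enclosed
    density equals delta * rho_crit; M_200 is the mass inside it.
    pos : (N, 3) particle positions; m_part : mass per particle.
    Units are assumed consistent with rho_crit."""
    r = np.sort(np.linalg.norm(pos - center, axis=1))
    n = np.arange(1, len(r) + 1)
    mean_rho = n * m_part / (4.0 / 3.0 * np.pi * r**3)  # mean density inside r[k]
    inside = mean_rho >= delta * rho_crit
    if not inside.any():
        return 0.0, 0.0  # no virialized object around this center
    k = np.nonzero(inside)[0].max()  # outermost particle still above threshold
    return n[k] * m_part, r[k]
```

Taking the outermost radius above the threshold is one common convention; the mean-density curve is monotonically decreasing for a centrally concentrated object, so the choice rarely matters in practice.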
It is found that a cluster-size object begins to form from redshift
$z \simeq 1$; therefore, we call this object a ``cluster" after $z \simeq 1$.
Indeed, the main clump of the cluster has already formed at $z = 1$ and
it does not undergo major merging after $z = 1$ (Fig. \ref{snap}).
We define the formation redshift, $z_{\rm form}$, of the final cluster
(cluster at $z = 0$) as the redshift when it has accreted half of
its final mass (Lacey \& Cole 1993), thus its formation epoch is
$z_{\rm form} \sim 0.15$ (see Table \ref{tbcl}).
The density profiles and the velocity dispersion profiles of the cluster
are shown in Fig. \ref{cpr} and Fig. \ref{csig}, respectively.
The distribution of the dark matter inside $r_{200}$ changes
little with time (see thin lines in the upper panels of Fig. \ref{cpr}
and Fig. \ref{csig}).
This agrees with the fact that this cluster evolves mainly by the accretion
of dark matter and small clumps (Fig. \ref{snap}).
The density profile at $z = 0$ (thin solid line) is well fitted by the NFW
model (thick solid line) except for the central cusp ($r < 100$ kpc),
where its slope ($\rho_{\rm cusp}(r) \propto r^{-1.35}$)
is steeper than the NFW profile ($\rho_{\rm cusp}(r) \propto r^{-1}$) and
is consistent with that obtained by Moore et al. (1998),
who give $\rho_{\rm cusp}(r) \propto r^{-1.4}$.
In our case, the softening length, $\epsilon = 5$ kpc, is much smaller than
$r_{s} = 300$ kpc and, moreover, the number of particles inside
the virial radius of the cluster is about two orders of magnitude larger than
that of the NFW simulation. Thus, we conclude that we have enough resolution
to discuss the density profile of the central cusp ($20 < r < 100$ kpc).
The number density profiles of halos in the cluster are plotted in the lower
panel of Fig. \ref{cpr} (thin lines).
The number density of the halos decreases with time, especially in the central
part of the cluster.
The thick solid line and the thick dashed line denote the dark matter
density, $\rho_{\rm d}$, and the number density of galaxies
(which consist of both halos and stripped tracers), respectively;
both are normalized by their values at $r_{200}$.
We can see that the halo distribution is ``antibiased"
with respect to the dark matter distribution. This is because
a softened halo has a core with $r_{\rm core} \sim \epsilon$ and is therefore
rapidly disrupted by encounters with other halos and by the tidal field of
the cluster when $r_{\rm tidal} < 3-4 \times \epsilon$
(Moore, Lake \& Katz 1998). This scale is small enough for large halos
($M_{\rm h} > 10^{11} M_{\odot}$); thus, when such halos are disrupted,
we can say that they are stripped significantly.
Since such disruption is an artificial numerical effect due to a lack
of physics (i.e., lack of dissipational effects),
we expect that if we performed simulations with infinite resolution or with
a baryonic component, the number density of galaxies would be similar to that
of the galaxies obtained here, which we derive by assuming that no galaxy is
disrupted completely.
The number density of the galaxies at $z = 0$ (thick dashed)
shows no ``bias" with respect to the dark matter density except
for the central part of the cluster, where a central massive
halo dominates (Fig. \ref{halos}).
This result differs from that of van Kampen (1995), who suggested that
galaxies are more concentrated than dark matter.
We suspect that his result was an artifact produced by the artificial
cooling adopted in his model.
To show the effects of dynamical friction and the domination of the
central very massive halo, we plot the mass-weighted
velocity dispersion of galactic halos in the lower panel of Fig. \ref{csig}.
In the central part of the cluster, it has a smaller value than
that of the dark matter (upper panel).
The difference of these two velocity dispersions implies that the large halos
are slowed down by the dynamical friction and a central massive halo becomes
dominant in this region.
Except for the central region, the velocity dispersion of the cluster halos
is almost the same as that of the dark matter. Therefore, we do not find
the "velocity bias" which Carlberg (1994) found for simulated cluster
galaxies.
The dark matter velocity dispersion profile also decreases from $r \simeq 200$
kpc toward the center.
This is consistent with the fact that the density profile within this radius
is shallower than the isothermal profile, $\rho(r) \propto r^{-2}$.
We interpret the cold component in the central cusp of the cluster
($r < 200$ kpc) as the contribution of the low-velocity-dispersion
dark matter component confined in the potential well of the central dominant
halo, which always sits at the center of the cluster.
We will show some features of this halo in the next section.
\subsection{Evolution of the central dominant halo}
In simulation B, the most massive halo is always seen at the center of the
cluster; thus, we call this halo the "central dominant halo" (CDH).
There is no doubt about the existence of the CDH in our simulated cluster,
because 75 \% of the particles which were identified as members of the
CDH at $z \simeq 0.5$ also remain in the CDH at $z = 0$.
The remaining 25 \% of them probably belong to the cluster.
The mass evolution of the CDH identified by our halo-finding
algorithm is presented in Fig. \ref{frg}.
The mass of the CDH increases quickly from $z \simeq 0.4$.
At each time stage it absorbs 15-30 galaxies of the former stage.
Therefore, we can say that the CDH has evolved through merging
and accretion.
Arag\'{o}n-Salamanca et al. (1998) estimated that the stellar mass component
in the brightest cluster galaxies (BCGs) has grown by a factor of 4-5 since
$z \simeq 1$ for critical-density models, using the observed
magnitude-redshift relation of the BCGs and evolutionary population
synthesis models.
The trend of the increase of the mass of the CDH seems to be consistent with
their result and with the predictions of semi-analytic models
(Kauffmann et al. 1993; Cole et al. 1994; Arag\'{o}n-Salamanca et al. 1998).
However, since there is ambiguity in distinguishing the component of
the CDH from that of the cluster, and it is difficult to determine the extent
of the CDH in our dissipationless simulation, we should perform simulations
including hydrodynamic processes to investigate the evolution of the CDH and
the stellar component within it realistically; that is left for
further studies.
\subsection{Formation and evolution of the galactic halos in the cluster}
\subsubsection{Mass functions}
It is interesting to compare the mass function of galaxies in the region
which becomes the final cluster in simulation B
(hereafter the ``pre-cluster" region) to that of simulation A
(hereafter the ``field" region) before larger objects (groups and clusters)
have formed. In the field region, since the effects of tidal stripping are
negligible, stripped tracers are rare objects, and the summed-up-mass function
of galaxies and the mass function of halos are almost the same.
Fig. \ref{mf21} shows that the summed-up-mass functions in both regions
at $z = 2$ are very similar except for the existence of very massive galaxies
($m_{\rm sum} \gtrsim 10^{12} M_{\odot}$) in the pre-cluster region.
The absence of high-mass galaxies in the field region may be a
consequence of the small volume of simulation A.
However, it is also likely that this difference is naturally explained by the
peak formalism (Bardeen et al. 1986), which predicts that rare peaks at the
mass scale we selected for the massive halos should be highly correlated in
space; that is, they are likely to form in high-density regions on larger
mass scales.
It is also interesting to compare the above mass functions to the mass function
expected from the Press-Schechter (PS) formalism (Press \& Schechter 1974,
Lacey \& Cole 1993) and the conditional mass function (Lacey \& Cole 1993).
By the PS formula (here in the notation of Lacey \& Cole),
the number density of halos with mass between $M$ and $M + dM$ at $z$ is:
\begin{equation}
\frac{dn}{dM}(M,t)\ dM = \frac{\rho_0}{M}f(S,\omega)
\left|\frac{dS}{dM}\right|dM,
\label{ps}
\end{equation}
where
\begin{equation}
f(S,\omega)\ dS = \frac{\omega}{(2\pi)^{1/2}S^{3/2}}
\exp\left[-\frac{\omega^2}{2S}\right]dS,
\end{equation}
$S = \sigma (M)^2$ is the variance of the linear density field of mass scale
$M$, and $\omega = \delta_{\rm th} (1 + z)$ is the linearly extrapolated
threshold on the density contrast required for structure formation.
The conditional mass function, that is, the number of halos with mass between
$M_1$ and $M_1 + dM_1$ at $z_1$ that are in a halo with mass $M_0$ at $z_0$
($M_1 < M_0, z_0 < z_1$) is:
\begin{equation}
\frac{dN}{dM_1}(M_1, z_1 | M_0, z_0)\ dM_1
= \frac{M_0}{M_1}f(S_1, \omega_1 | S_0, \omega_0)
\left|\frac{dS}{dM}\right| dM_1,
\label{cond}
\end{equation}
where
\begin{equation}
f(S_1, \omega_1 | S_0, \omega_0)\ dS_1
= \frac{\omega_1 - \omega_0}{(2\pi)^{1/2}(S_1 - S_0)^{3/2}}
\exp\left[-\frac{(\omega_1 - \omega_0)^2}{2(S_1 - S_0)}\right]dS_1.
\end{equation}
In Fig. \ref{mf0} we plot equation (\ref{ps}) with $\delta_{\rm th} = 1.69$
assuming the spherical collapse for the density contrast (Lacey \& Cole 1993)
and equation (\ref{cond}) with $z_1 = 2$, $z_0 = 0$, and
$M_0 = M_{200}(z = 0)$.
In this mass range, there is little difference between the PS mass
function and the conditional mass function, and the summed-up-mass functions
in both regions show good agreement with the PS mass function at $z = 2$.
The reason why the summed-up-mass function in the pre-cluster region
agrees with the PS mass function better than with the conditional mass
function in the high-mass range may be that our halo-finding algorithm divides
a large halo into small halos according to density peaks; thus,
if we used the friends-of-friends or the spherical overdensity algorithm,
this mass function might be more similar to the conditional mass function.
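Equation (\ref{ps}) is straightforward to evaluate numerically once $\sigma(M)$ is specified. The sketch below is illustrative only: the power-law $\sigma(M)$ in the test is an assumption for checking normalization, not the CDM variance used in the paper.

```python
import numpy as np

def ps_mass_function(M, rho0, sigma_of_M, dlnsigma_dlnM, z, delta_th=1.69):
    """Press-Schechter dn/dM in the Lacey & Cole notation of eq. (ps):
    dn/dM = (rho0 / M) f(S, omega) |dS/dM|,
    with S = sigma(M)^2 and omega = delta_th * (1 + z)."""
    S = sigma_of_M(M) ** 2
    omega = delta_th * (1.0 + z)
    f = omega / (np.sqrt(2.0 * np.pi) * S**1.5) * np.exp(-omega**2 / (2.0 * S))
    dS_dM = np.abs(2.0 * S * dlnsigma_dlnM(M) / M)  # |dS/dM| from d ln(sigma)/d ln(M)
    return rho0 / M * f * dS_dM
```

A useful consistency check is that $\int M\,(dn/dM)\,dM = \rho_0$: the PS formula places all of the mass in halos of some size.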
To investigate effects of the cluster formation on the cluster galaxies,
we plot the summed-up-mass function in the cluster at $z = 0$
in Fig. \ref{mf0}.
Although the summed-up-mass function of field galaxies evolves similarly to
the PS theory, that of the cluster galaxies hardly evolves from $z = 2$
except for the existence of several very massive galaxies.
This result implies that most cluster galaxies have not increased
the mass of their stellar component very much by merging and accretion
since $z \simeq 2$.
These features seem to be consistent with the observed old population of
cluster ellipticals, that is, the bulk of their stellar population was
formed at $z > 2$ and then passively evolved until the present day
(Ellis et al. 1996), as estimated from the surprisingly
tight color-magnitude relation both at present (Bower et al. 1992) and at
higher $z$ (Ellis et al. 1997). However, the inclusion of star formation
processes and gas dynamics in our models is needed for a more detailed
investigation of the color-magnitude relation and the ages of cluster galaxies.
\subsubsection{Merging of galaxies}
A halo that has two or more ancestors at the former time stage
is defined as a ``merger remnant". In Fig. \ref{mrate}, we show
the merger remnant fraction of the large galaxies
($M_{\rm sum} \ge 10^{11} M_{\odot}$) as a function of redshift.
In counting the number of merger remnants, we include the galaxies which
have undergone minor mergers as well as major mergers, because minor mergers
can also lead to starbursts (Hernquist 1989).
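The bookkeeping behind this fraction can be sketched as follows (the dictionary layout is hypothetical; the paper's actual merging-tree data structure is not specified in this passage):

```python
def merger_remnant_fraction(ancestors, summed_mass, m_min=1e11):
    """Fraction of 'large' galaxies that are merger remnants, i.e. that have
    two or more ancestors at the former time stage (minor and major mergers
    both counted, as in the text).
    ancestors   : dict mapping halo id -> list of progenitor ids
    summed_mass : dict mapping halo id -> summed-up-mass in solar masses"""
    large = [h for h in ancestors if summed_mass[h] >= m_min]
    if not large:
        return 0.0
    remnants = sum(1 for h in large if len(ancestors[h]) >= 2)
    return remnants / len(large)
```

Evaluating this per output time stage, restricted to halos in a given region (field or cluster forming), yields the curves of Fig. \ref{mrate}.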
We find that this fraction in the region dominated by the high-resolution
particles in simulation B (hereafter the cluster forming region) is
larger than that in the field at high redshift ($z \gtrsim 3$), as expected
from analytic work (Bardeen et al. 1986; Kauffmann 1996); that is,
for a random Gaussian field, the collapse redshifts of galaxy-scale density
peaks are boosted by the presence of a surrounding large-scale overdensity.
Therefore, the presence of larger objects at $z \simeq 2$ in the cluster
forming region than in the field (see Fig. \ref{mf21}) is well explained by
the difference in merging efficiency between the cluster forming
environment and the field.
On the other hand, it is also found that after $z \sim 3$ this fraction
in the cluster forming region decreases rapidly and becomes smaller than that
in the field, and this fraction is always less than 10 \% inside the
cluster's virial radius.
This decline of the merger remnant fraction of cluster galaxies is due to
the high velocity dispersion of the larger objects; that is, if the relative
velocity of a pair of galaxies is larger than the internal velocity dispersion
of the halos of these galaxies, they cannot merge (Binney \& Tremaine 1987).
Moreover, the stripping of halos by the tidal fields of groups and
clusters also prevents merging of individual halos
(Funato \& Makino 1992; Bode et al. 1994).
Clearly, this decrease is the reason why the summed-up-mass of cluster
galaxies has not evolved since $z \simeq 2$.
Because the large halos preferentially merge, about 30 \% of the cluster
galaxies with $M_{\rm sum} > 10^{11} M_{\odot}$ at $z = 0$ have undergone
merging since $z \simeq 0.5$, while only 8 \% of all the cluster galaxies
have done so.
\subsubsection{Tidal stripping of halos}
To show the effect of the tidal stripping on the galactic halos, we
investigate whether large halos ($M_{\rm h} \ge 10^{11} M_{\odot}$)
at high redshift ($z \simeq 2$) are found as halos at lower redshift.
Unless their descendants become stripped tracers for $n_{\rm th} = 10$, we
call them "surviving halos".
If their descendants become stripped tracers, it means that they have lost a
large fraction of their original halo mass; in such cases, dissipative
effects, which are not included in our simulation, should become important.
In Fig. \ref{storsu}, we show the surviving fraction and the stripped fraction
of such galaxies in the 0.5 Mpc radius bins from the cluster center.
At $z \simeq 0.5$, a large fraction of the halos (60-100 \%) have survived
(upper panel). On the other hand, at $z = 0$ (lower panel),
more than 60 \% of the halos have been destroyed in the central part
of the cluster, and these fractions show a clear correlation with the
distance from the cluster center. Although we may have overestimated the
stripping effect due to the softened potential of each particle
(Moore, Katz, \& Lake 1996), the lack of dissipative effects
(Summers et al. 1995), and the features of our halo-finding algorithm
(see Sec. 3.1), we expect that these stripped galaxies are indeed stripped
significantly, because the former two effects matter only on very small scales
($r \lesssim 3 \epsilon$), our halo-finding algorithm can pick up very
small density peaks within the cluster, and we treat only large halos here.
Recently, Ghigna et al. (1998) have independently presented a result similar
to ours. However, they have shown the evolution of the cluster
halos only from $z \simeq 0.5$. Clearly, tidal stripping since the halo
formation epoch is more important for the evolution of the galaxies. Our
result shows that a number of cluster galaxies had already had their halos
strongly stripped by $z \simeq 0.5$.
Next we compare the radii of the halos, $r_{\rm h}$, determined by the
$v_{\rm c}$ method to the tidal radii of the halos estimated from the density
of the cluster at their pericentric positions, $r_{\rm peri}$, which we
calculate from the NFW fit of the cluster density profile at $z = 0$.
The mean ratio of pericentric to apocentric radii, $r_{\rm peri}/r_{\rm apo}$,
is 0.2, and 26 \% of the cluster halos are on very radial orbits,
$r_{\rm peri}/r_{\rm apo} < 0.1$. In spite of the difference in mass of the
clusters, these values agree completely with those of Ghigna et al. (1998).
The tidal radii of the halos, $r_{\rm est}$, are estimated by the following
approximation,
\begin{equation}
r_{\rm est} \simeq r_{\rm peri} \frac{v_{\rm max}}{V_{\rm c}},
\end{equation}
where $v_{\rm max}$ is the maximum value of circular velocity of a halo
and $V_{\rm c}$ is the circular velocity of the cluster.
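The approximation above is simple enough to state as one line of code; the sketch and the numbers in the usage example are purely illustrative (the specific $v_{\rm max}$, $V_{\rm c}$, and $r_{\rm peri}$ values are assumptions, not taken from the simulation):

```python
def tidal_radius_estimate(r_peri, v_max, V_c):
    """Tidal radius approximation from the text: r_est ~ r_peri * v_max / V_c,
    where v_max is the halo's peak circular velocity and V_c the circular
    velocity of the cluster (both velocities in the same units)."""
    return r_peri * v_max / V_c
```

For instance, a halo with $v_{\rm max} = 200$ km/s that passed pericenter at $r_{\rm peri} = 500$ kpc in a cluster with $V_{\rm c} = 1500$ km/s would have $r_{\rm est} \approx 67$ kpc, i.e. the estimate scales the pericentric distance down by the ratio of the halo's internal velocity scale to the cluster's.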
In Fig. \ref{rt} we plot $r_{\rm est}$ against $r_{\rm h}$
for our outgoing halos that must have passed pericenter recently.
We find that most of our halos have larger radii than $r_{\rm est}$.
Therefore, $r_{\rm est}$ seems to give roughly the minimum radius of a
cluster halo. This implies that the most dominant process leading to the
mass loss of the large cluster halos is not high-speed encounters with
other halos but the tidal stripping due to the global tidal field of the
cluster, because galaxies would have smaller $r_{\rm h}$ if
high-speed encounters were important for the mass loss of the cluster halos.
There is a difference between our result and that of Ghigna et al. (1998),
who show much better agreement, $r_{\rm h} \simeq r_{\rm est}$,
except for the halos with $r_{\rm peri} < 300$ kpc, which have tidal tails
due to impulsive collisions as they pass close to the cluster center.
In our result, a number of halos with $r_{\rm peri} > 300$ kpc also have
larger $r_{\rm h}$ than $r_{\rm est}$.
We note that halos are not stripped instantly.
The tidal stripping time scale, $t_{\rm st}$, is roughly estimated as follows:
\begin{equation}
\frac{r}{R} \sim \left|\frac{d\Omega(R)}{dR}\right| \ r \ t_{\rm st},
\end{equation}
thus,
\begin{equation}
t_{\rm st} \sim \frac{3}{2}\frac{R}{V_{\rm c}},
\end{equation}
where $r$ is the radius of a halo, $R$ is the distance from the center of the
cluster, and $\Omega(R) = \frac{V_{\rm c}(R)}{R}$ is the angular velocity at
$R$. Using this formula, $t_{\rm st}$ is about 1 Gyr at $R \simeq 1$ Mpc.
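The quoted 1 Gyr at $R \simeq 1$ Mpc follows directly from the formula once a cluster circular velocity is chosen; a minimal sketch, where $V_{\rm c} \approx 1500$ km/s is an assumed value typical of a rich cluster (the paper does not quote $V_{\rm c}$ in this passage):

```python
def stripping_timescale_gyr(R_mpc, V_c_kms):
    """Order-of-magnitude stripping time from the text, t_st ~ (3/2) R / V_c.
    R_mpc : cluster-centric distance in Mpc; V_c_kms : circular velocity in km/s."""
    KM_PER_MPC = 3.086e19
    SEC_PER_GYR = 3.156e16
    return 1.5 * R_mpc * KM_PER_MPC / V_c_kms / SEC_PER_GYR
```

With $R = 1$ Mpc and $V_{\rm c} = 1500$ km/s this gives $t_{\rm st} \approx 0.98$ Gyr, consistent with the estimate in the text.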
Our cluster formed very recently ($z_{\rm form} \sim 0.15$) owing to
its richness; that is, half of the cluster galaxies have accreted within the
last 3 Gyr.
Therefore, we conclude that the difference between our result and theirs
is due to the difference in the formation epochs of the clusters, and our
halos with $r_{\rm h} > r_{\rm est}$ have not been stripped completely yet.
It is interesting to compare the density profiles of the cluster halos
with the NFW profile. To fit the density profiles of the cluster halos by
eq. (\ref{nfw}), we also use $r_{200}$ as a fitting parameter, because we
cannot obtain their $r_{200}$ directly from the raw data.
The top row of Fig. \ref{dprg} shows the density profiles of two halos
(which are placed at (-1.6, 0.8) and (-0.8, -0.1) in Fig. \ref{halos},
respectively) with $r_{\rm h} > 2 \times r_{\rm est}$
and $r_{\rm peri} > 500$ kpc.
We expect that the effect of stripping may be small for such halos.
For both halos, the NFW model produces good fits.
However, most of the halos which have $r_{\rm h} \simeq r_{\rm est}$
and $r_{\rm peri} > 300$ kpc have steeper outer density profiles
than the NFW model, as shown in the middle and bottom rows of Fig. \ref{dprg}.
Therefore, we can say that most halos are stripped to some degree and have
steeper outer profiles, while some halos which have accreted onto the
cluster recently and have not been stripped very much retain their original
shapes.
According to the NFW argument, the concentration parameter $c$ in
eq. (\ref{nfw}) should be higher for the cluster halos than
for the field halos, because halos within denser environments
form at earlier epochs.
Since increasing the numerical resolution produces steeper inner profiles
(Moore et al. 1998), we choose halos having resolution similar to
those in the NFW simulation in both regions, and we plot the concentration
parameters as a function of the $M_{200}$ of the halos in Fig. \ref{c}.
It is found that the field halos have almost the same values of the
concentration parameter as the NFW theory (solid line), and the cluster halos
are more concentrated than the field halos.
The two cluster halos having almost the same values of $c$ as the NFW theory
are recently infallen halos (top row of Fig. \ref{dprg}); thus, it is expected
that they formed in lower-density regions than the other cluster halos.
We should note that there is some ambiguity in determining the concentration
parameters of cluster halos, because they have steeper outer profiles than
the NFW model due to tidal stripping, which may lead to higher values of $c$.
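For reference, the two-parameter fit described above can be set up with the standard NFW form; it is assumed here that eq. (\ref{nfw}) in the text is this standard parametrization (the equation itself appears in an earlier section):

```python
import numpy as np

def nfw_density(r, r200, c, rho_crit):
    """Standard NFW profile:
    rho(r) = rho_crit * delta_c / [(r/r_s) (1 + r/r_s)^2],
    with r_s = r200 / c and
    delta_c = (200/3) c^3 / [ln(1 + c) - c / (1 + c)],
    so that the mean density within r200 is 200 * rho_crit."""
    r_s = r200 / c
    delta_c = (200.0 / 3.0) * c**3 / (np.log(1.0 + c) - c / (1.0 + c))
    x = r / r_s
    return rho_crit * delta_c / (x * (1.0 + x) ** 2)
```

Fitting $r_{200}$ and $c$ jointly to a measured halo profile (e.g. with a least-squares routine such as scipy.optimize.curve_fit) is then straightforward; a tidally stripped halo shows up as data falling below this curve at large $r$, which is why stripping can bias the fitted $c$ high.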
\section{Discussion}
We have investigated the formation and evolution of galaxy-size dark halos
in a cluster environment based on high-resolution N-body simulations.
With our resolution (see Table \ref{data}), we find a number of
galaxy-size density peaks (about 300 with $n_{\rm th} = 15$) within the virial
radius of the cluster at $z = 0$.
This result suggests that the overmerging problem can be much reduced
by using high-resolution simulations.
However, even with our resolution, a large number of halos cannot survive,
even if they were massive at higher $z$.
This makes it difficult to trace their merging histories, which play important
roles when we investigate the evolution of cluster galaxies.
To avoid this problem we trace halo-stripped galaxies as well as galactic
halos by using the particles placed at local density peaks of the halos
as tracers.
This approach enables us to derive merging history trees of galaxies
directly from dissipationless N-body simulations in various kinds of
environments.
We find the following results, which seem relevant to the
evolution of the cluster galaxies, using this merging history tree:
\begin{itemize}
\item The galaxy distribution in the cluster shows
neither spatial nor velocity bias except in the
central part of the cluster, where the very massive halo
dominates.
\item There is a very massive galactic halo at the center
of the cluster, and a large fraction of the dark matter particles
in the central part of the cluster are confined
in the local potential well of this halo.
This halo has evolved through merging of large halos
and accretion of dark matter.
\item At $z \simeq 2$, the halo mass functions both in the field and in
the cluster formation region are well fitted by the PS formula, and
there are more massive galaxies in the cluster formation region than in
the field. The summed-up-mass function of the cluster galaxies at
$z = 0$ has hardly changed from $z \simeq 2$.
\item In the cluster formation region, the number fraction of large galaxies
which have undergone mergers during the last 0.5 Gyr
is higher than that in the field at high redshift
($z > 3$). After $z \simeq 3$,
this fraction in the cluster formation
region rapidly decreases and becomes
lower than that in the field. In the cluster, merging is a rare
event, and only a few massive halos have preferentially merged.
\item A large fraction of the massive halos
($M_{\rm h} > 10^{11} M_{\odot}$)
at high redshift ($z \simeq 2$) have survived in the cluster
at $z \simeq 0.5$. However, after $z \simeq 0.5$, a large fraction of
these halos (more than 60 \% within 0.5 Mpc of the cluster center)
are destroyed by the tidal force of the cluster, and
the fraction of surviving halos shows a
clear correlation with the distance from the cluster center.
It is also found that the halos which are stripped to some degree
have steeper outer density profiles than the NFW profile, while
the halos which have recently accreted onto the cluster
have density profiles
well fitted by the NFW model.
\end{itemize}
The importance of mergers of individual galaxies to their evolution
has been well investigated by numerical simulations (e.g., Burns 1989) and
semi-analytic models (e.g., Kauffmann et al. 1993; Cole et al. 1994).
Our cluster galaxies merged efficiently at high redshift ($z > 3$).
On the other hand, the fraction of galaxies which have undergone mergers
recently (at lower $z$) in the cluster formation region is smaller than that
in the field. This difference in the mode of merging may explain the
difference between the observed features of field galaxies and those of
cluster galaxies.
Furthermore, merging is still important in the cluster at lower $z$,
because it contributes to the increase of mass of the central dominant halo.
In our results, clearly, the most important process which affects the
evolution of galactic halos in the cluster is the tidal stripping due
to the cluster potential.
Since it diminishes the size of the cluster halos, these halos can hardly
merge. Therefore, the summed-up-mass function of the cluster galaxies has not
changed much since larger objects (groups and clusters) formed.
The possibility that tidal stripping leads to starbursts and
the morphological transformation of galaxies, and that it causes the
Butcher-Oemler effect and the density-morphology relation, was
suggested by Moore, Katz, \& Lake (1998).
Thus, the inclusion of hydrodynamical processes and star formation in our
numerical model is very interesting.
As the next step, we will combine our merging history tree of
galaxies derived from N-body simulations with population synthesis models
in order to make detailed comparisons with the observational data and
the predictions of semi-analytic models.
The results of this analysis will be given in a forthcoming paper.
\acknowledgments
The authors wish to thank Prof. M. Fujimoto, M. Nagashima, and the referee
for helpful discussions and comments.
Numerical computations in this work were carried out on the HP Exemplar at the
Yukawa Institute Computer Facility and on the SGI Origin 2000
at the Division of Physics, Graduate School of Science, Hokkaido University.